AGENTIC AI Engineering with GEN AI Foundations Course Details |
|
Subscribe and Access: 5200+ FREE Videos and 21+ Subjects Like CRT, Soft Skills, JAVA, Hadoop, Microsoft .NET, Testing Tools etc.
Batch Date: May 6th @ 7:30 AM
Faculty: Mr. Vasanth (8+ Yrs of Exp.)
Duration: 3 Months
Venue:
DURGA SOFTWARE SOLUTIONS,
Flat No: 202, 2nd Floor,
HUDA Maitrivanam,
Ameerpet, Hyderabad - 500038
Ph.No: +91 - 8885252627, 9246212143, 80 96 96 96 96
Syllabus:
AGENTIC AI Engineering
with GEN AI Foundations
Module 1: Generative AI & LLM Foundations
Objective: Understand how Large Language Models work, call LLM APIs, run open-source models locally, and build semantic search systems.
1.1 Introduction to Generative AI
- What is Generative AI? Generative vs Discriminative models
- AI landscape 2026: OpenAI, Anthropic, Google DeepMind, Meta, Mistral
- Types of AI: Narrow AI, General AI, Superintelligent AI
- Why Generative AI matters — real-world applications and business value
- AI vs Machine Learning vs Deep Learning vs Generative AI
1.2 How LLMs Work
- Transformer architecture and attention mechanism — plain English explanation
- Tokens, context window, temperature, top-k, top-p sampling
- Tokenization and embeddings — foundational overview
- LLM families: GPT-4o, Claude, Gemini, LLaMA 3, Mistral, DeepSeek
- Open-source vs proprietary models — model selection framework
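The sampling parameters listed above (temperature, top-p) can be made concrete with a small, dependency-free sketch. The logits here are toy values, not output from a real model:

```python
import math

def sample_probs(logits, temperature=1.0):
    """Convert raw model logits to a probability distribution.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random/creative)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Top-p (nucleus) sampling: keep the smallest set of tokens whose
    cumulative probability reaches p, and sample only from those."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    return kept

logits = [2.0, 1.0, 0.1]
print(sample_probs(logits, temperature=0.5))   # sharper distribution
print(sample_probs(logits, temperature=2.0))   # flatter distribution
print(top_p_filter(sample_probs(logits), p=0.9))
```

Top-k sampling works the same way as `top_p_filter`, except it keeps a fixed number of tokens instead of a cumulative-probability mass.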
1.3 LLM APIs & Prompt Engineering
- Calling OpenAI and Anthropic APIs: system, user, assistant messages
- Multi-turn conversation state management in Python
- Prompt Engineering techniques:
- Zero-shot, few-shot, and chain-of-thought (CoT) prompting
- ReAct prompting: Reason + Act pattern
- Tree-of-Thought (ToT), persona-based, constraint-based prompting
- Meta-prompting, negative prompting
- Context Engineering: system prompt design, goal framing, constraint embedding
- Evaluating LLM outputs — quality metrics and best practices
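A minimal sketch of the multi-turn conversation state management covered in this unit, using the system/user/assistant message shape shared by OpenAI- and Anthropic-style chat APIs. The class and its names are illustrative, not library code:

```python
class Conversation:
    """Minimal multi-turn state manager: holds a system prompt plus a
    bounded window of user/assistant turns, and produces the message list
    a chat-completions style API expects."""

    def __init__(self, system_prompt, max_turns=10):
        self.system = {"role": "system", "content": system_prompt}
        self.turns = []                  # alternating user/assistant messages
        self.max_turns = max_turns

    def add(self, role, content):
        assert role in ("user", "assistant")
        self.turns.append({"role": role, "content": content})
        # keep only the most recent turns so the context window stays bounded
        self.turns = self.turns[-2 * self.max_turns:]

    def messages(self):
        """Payload to send as the `messages` field of an API call."""
        return [self.system] + self.turns

conv = Conversation("You are a helpful tutor.", max_turns=2)
conv.add("user", "What is a token?")
conv.add("assistant", "A token is a small chunk of text.")
conv.add("user", "And an embedding?")
print(len(conv.messages()))  # system message + 3 turns = 4
```

The truncation in `add` is the simplest context-window strategy; the memory types in Module 2 (summary memory, vector-store memory) are more sophisticated answers to the same problem.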
1.4 Open-Source LLMs & Local Deployment
- Open-source LLM families: LLaMA 3, Mistral, DeepSeek, Phi
- Running models locally with Ollama — privacy and cost benefits
- Comparing open-source and proprietary models for different use cases
1.5 Embeddings & Vector Databases
- What are embeddings? Semantic similarity and cosine distance
- Embedding models: OpenAI text-embedding-3, sentence-transformers
- Vector databases:
- ChromaDB — local vector store for development
- FAISS — Facebook AI Similarity Search
- Pinecone — managed vector database for production
- Indexing, similarity search, and metadata filtering
- Hands-on project: semantic document search engine
- Hands-on Project: Multi-turn chatbot using LLM API + Semantic document search engine
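The semantic-similarity idea behind the embedding models and vector databases above can be sketched with plain cosine similarity. The three-dimensional "embeddings" here are toy values; real models produce vectors with hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    1.0 = same direction (semantically close), 0.0 = orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, dog) > cosine_similarity(cat, car))  # True
```

A vector database is, at its core, a store of such vectors plus an index that makes "find the k nearest vectors to this query" fast at scale.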
Module 2: LLM Automation, Chains & Retrieval-Augmented Generation (RAG)
Objective: Build end-to-end LLM automation pipelines and production-grade RAG systems with full evaluation.
2.1 LangChain Architecture & LCEL
- LangChain overview: LLMs, Chains, Prompts, Memory, Agents, Tools
- LangChain Expression Language (LCEL): the modern pipeline syntax
- Chain types:
- LLMChain, SimpleSequentialChain, SequentialChain
- ConversationChain, RouterChain
- Conditional branching and dynamic routing pipelines
- Prompt Templates: Standard, Few-shot, Zero-shot, and Custom templates
- Document Loaders and Text Splitters: preparing external knowledge for LLMs
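The pipeline idea behind LCEL's `prompt | llm | parser` syntax can be illustrated with a toy stand-in. This is not the LangChain implementation, just a sketch of how `|`-composition chains stages so each output feeds the next input:

```python
class Runnable:
    """Toy illustration of the LCEL composition idea (not LangChain code):
    components joined with `|` form a pipeline."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # self | other -> new Runnable that applies self, then other
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Mirrors the LCEL pattern `prompt | llm | parser`, with a fake LLM stage
prompt = Runnable(lambda topic: f"Write one line about {topic}.")
fake_llm = Runnable(lambda p: f"LLM OUTPUT for: {p}")
parser = Runnable(lambda text: text.strip().lower())

chain = prompt | fake_llm | parser
print(chain.invoke("vector databases"))
```

In real LCEL the same shape appears as `chain = prompt | llm | output_parser`, with streaming, batching, and async support layered on top of the composition.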
2.2 Agent Cognitive Layers & Memory
- Agent cognitive architecture: Perception → Memory → Decision → Action
- Pydantic data contracts between cognitive modules
- Memory types:
- ConversationBufferMemory
- ConversationSummaryMemory
- ConversationBufferWindowMemory
- VectorStoreRetrieverMemory
- Output parsers: Pydantic, JSON mode, structured schema-enforced outputs
- Python logging for agents: DEBUG, INFO, WARNING, ERROR, CRITICAL levels
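The structured-output idea above can be sketched without any dependencies. Here a stdlib dataclass stands in for the Pydantic model the course uses, and the "model response" is a simulated JSON-mode string:

```python
import json
from dataclasses import dataclass

@dataclass
class TicketTriage:
    """Schema the model is asked to fill. A stdlib dataclass is used here
    for a dependency-free sketch; Pydantic adds validation on top of the
    same idea."""
    category: str
    priority: str
    summary: str

def parse_llm_json(raw: str) -> TicketTriage:
    """Validate a model's JSON-mode output against the schema.
    Missing keys raise KeyError, so bad outputs fail loudly instead of
    silently propagating through the agent."""
    data = json.loads(raw)
    return TicketTriage(
        category=data["category"],
        priority=data["priority"],
        summary=data["summary"],
    )

# Simulated JSON-mode response from the model
raw = '{"category": "billing", "priority": "high", "summary": "Charged twice"}'
ticket = parse_llm_json(raw)
print(ticket.priority)  # high
```

This fail-loudly property is what makes schema-enforced outputs the reliable "data contract" between the cognitive modules listed above.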
2.3 Retrieval-Augmented Generation (RAG)
- What is RAG and why it is needed — the hallucination problem
- RAG architecture: Naive RAG → Advanced RAG → Agentic RAG
- Components of RAG:
- Document ingestion: PDF, web, CSV, Notion loaders
- Chunking strategies: fixed-size, semantic, recursive, parent-child
- Vector Database (FAISS, Pinecone, ChromaDB)
- LLM integration for answering queries
- Retrieval techniques:
- Maximum Marginal Relevance (MMR)
- Multi-query retriever
- Contextual compression
- HyDE — Hypothetical Document Embeddings
- ReRanking with Cohere cross-encoders
- Hybrid Planning: Heuristics + LLM decision matrix for reliable retrieval
- Function calling and Tool use: OpenAI tools schema, Anthropic tool_use
- Applications of RAG: chatbots, search, knowledge assistants, enterprise Q&A
- Workflow of RAG — step-by-step explanation
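The RAG workflow above can be reduced to a runnable toy: chunk documents, score them against the query, retrieve the top-k, and ground the prompt in the result. Word overlap stands in for embedding similarity here purely to keep the sketch dependency-free:

```python
def chunk(text, size=40):
    """Fixed-size chunking -- the simplest of the strategies listed above."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query, passage):
    """Toy relevance score via word overlap. Real RAG scores passages with
    embedding similarity in a vector database."""
    q = set(query.lower().replace(".", "").split())
    p = set(passage.lower().replace(".", "").split())
    return len(q & p)

def retrieve(query, passages, k=2):
    """Return the top-k most relevant chunks -- the 'R' in RAG."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

docs = [
    "RAG grounds the model in retrieved documents.",
    "Vector databases store embeddings for similarity search.",
    "Hallucination happens when the model invents facts.",
]
context = retrieve("why does the model hallucinate facts", docs, k=1)
# The generation step then sends a grounded prompt to the LLM:
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: ..."
print(context[0])
```

Every technique in this unit (MMR, multi-query, reranking, HyDE) is a refinement of the `retrieve` step; the grounded-prompt step is what reduces hallucination.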
2.4 RAG Evaluation & Observability
- RAGAS evaluation framework:
- Answer Relevance
- Context Precision
- Faithfulness
- Answer Correctness
- LangSmith: end-to-end pipeline tracing, cost tracking, latency profiling
- Evaluation-driven iteration: diagnosing and fixing failing RAG pipelines
- Hands-on Project: Full RAG chatbot with source citations, RAGAS evaluation scores, and LangSmith observability
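The intuition behind one of the RAGAS metrics above can be sketched in a few lines. This is a toy exact-match version of context precision only; the RAGAS library computes its metrics with LLM judges, not set membership:

```python
def context_precision(retrieved, relevant):
    """Toy context precision: the fraction of retrieved chunks that are
    actually relevant to the question. (Illustration of the idea only --
    not the RAGAS implementation.)"""
    if not retrieved:
        return 0.0
    hits = sum(1 for c in retrieved if c in relevant)
    return hits / len(retrieved)

retrieved = ["chunk-a", "chunk-b", "chunk-c", "chunk-d"]
relevant = {"chunk-a", "chunk-c"}
print(context_precision(retrieved, relevant))  # 0.5
```

A low score on this metric points at the retriever (wrong chunks coming back), while low faithfulness points at the generator (answers not grounded in the chunks) -- which is exactly how evaluation-driven iteration localises failures.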
Module 2.5: Knowledge Graphs & GraphRAG (Neo4j)
Objective: Understand enterprise knowledge graph design with Neo4j and build GraphRAG systems that combine vector search with graph traversal for complex multi-hop reasoning.
2.5.1 Knowledge Graph Fundamentals (Neo4j)
- Why knowledge graphs? When vector search alone fails
- Entity-relation modeling: nodes, edges, properties
- Ontology-driven design: defining schema for AI systems
- Provenance and audit trails in graphs
- Schema design and constraints for enterprise AI
- Neo4j: Cypher query language basics
- Structured reasoning using relationships
- Explainability via graph traces — show the reasoning path
- Governance-ready data representations
2.5.2 GraphRAG — Hybrid Retrieval with Neo4j
- GraphRAG vs traditional RAG: when to use which
- Graph traversal retrieval: multi-hop reasoning
- Example: "Who approved this contract, what team do they manage, and what policy governs it?"
- Vector search cannot answer this — graph traversal can
- Hybrid retrieval: vector + full-text + graph traversal combined
- "Show evidence + show path" response pattern — explainable answers
- Relationship-aware retrieval for complex enterprise questions
- Ingesting documents → extracting entities → building Neo4j provenance graph
- Evaluation: groundedness, citation coverage, retrieval recall
- Hands-on Project: GraphRAG agent — ingest documents, extract entities to Neo4j, hybrid retrieval with explainable evidence + graph path
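The multi-hop contract example above can be made concrete with a tiny triple store. A real system would hold these as nodes and relationships in Neo4j and query them with Cypher; plain dict traversal is enough to show why the question needs a graph:

```python
# Toy knowledge graph as (subject, relation, object) triples
TRIPLES = [
    ("contract-42", "APPROVED_BY", "alice"),
    ("alice", "MANAGES", "platform-team"),
    ("contract-42", "GOVERNED_BY", "policy-7"),
]

def neighbors(node, relation):
    return [o for s, r, o in TRIPLES if s == node and r == relation]

def answer_multi_hop(contract):
    """Hop 1: who approved it? Hop 2: what team do they manage?
    Hop 3: what policy governs it? Vector search over raw text cannot
    chain these hops; graph traversal can, and the traversal path itself
    is the explanation ('show evidence + show path')."""
    approver = neighbors(contract, "APPROVED_BY")[0]
    team = neighbors(approver, "MANAGES")[0]
    policy = neighbors(contract, "GOVERNED_BY")[0]
    path = (f"{contract} -APPROVED_BY-> {approver} -MANAGES-> {team}; "
            f"{contract} -GOVERNED_BY-> {policy}")
    return approver, team, policy, path

print(answer_multi_hop("contract-42"))
```

In Neo4j the same traversal would be a single Cypher `MATCH` over the `APPROVED_BY`, `MANAGES`, and `GOVERNED_BY` relationships, and GraphRAG combines that traversal with vector search over the source documents.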
Module 3: Agentic AI Foundations & Design Patterns
Objective: Understand the core building blocks and architectural patterns of autonomous AI agents and build single-agent systems using multiple frameworks.
3.1 Introduction to Agentic AI
- What is Agentic AI? Agent vs Chain — the key distinction
- Core AI agent building blocks: Perception, Cognition/Reasoning, Planning, Action, Memory, Adaptability & Learning
- Agent anatomy: Perception → Reasoning → Action loop
- LLM as the brain, tools as hands, memory as context
- Agent vs traditional software: autonomous decision-making explained
3.2 Agentic AI Architectures & Design Patterns
- Architectural concepts: single-agent, multi-agent, hierarchical, swarm
- Key Design Patterns:
- ReAct — Reason + Act pattern: Thought → Action → Observation loop
- Reflection — agents that critique and self-improve their own outputs
- Tool Use — structured tool calling, error handling, max-iteration safety
- Planning — Plan-and-Execute, Tree-of-Thought, ReWOO
- Human-in-the-Loop (HITL): when and how to add human oversight
- Hybrid planning: combining rule-based heuristics with LLM reasoning
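The ReAct and Tool Use patterns above can be sketched as one loop: the agent alternates Thought → Action → Observation under a max-iteration safety cap. The "LLM" here is a scripted policy function so the sketch runs offline; in a real agent the policy is a model call:

```python
# Tool registry: structured tool calling with a restricted calculator
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def scripted_policy(question, history):
    """Stands in for the LLM: decides the next Action or the final answer
    based on what has been observed so far."""
    if not history:
        return ("act", "calculator", "6 * 7")     # Thought -> Action
    return ("finish", f"The answer is {history[-1]}.")

def react_agent(question, policy, max_iterations=5):
    history = []
    for _ in range(max_iterations):      # safety cap against infinite loops
        step = policy(question, history)
        if step[0] == "finish":
            return step[1]
        _, tool_name, tool_input = step            # Action
        observation = TOOLS[tool_name](tool_input)  # Observation
        history.append(observation)
    return "Stopped: max iterations reached."

print(react_agent("What is 6 times 7?", scripted_policy))  # The answer is 42.
```

The max-iteration cap and the fallback return value are the "safe termination" reliability measures covered later in 3.4; frameworks like LangChain's AgentExecutor implement this same loop with a real model deciding each step.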
3.3 Foundational Frameworks & Technologies
LangChain and LangGraph
- ReAct agents with LangChain AgentExecutor
- Tool selection logic and real-world tools: Tavily web search, calculator, database
- LangGraph: stateful workflows with graph nodes, edges, and TypedDict state
- DAG execution principles: topological sort, fallback nodes, parallel variants
- Human-in-the-loop: interrupt_before, checkpointers (SQLite, Postgres), workflow resumption
OpenAI Agents SDK
- Agents, Handoffs, Guardrails, Tracing
- Multi-step agents with tool registration and streaming
- Handoff patterns: routing between specialised agents
- Comparison: LangChain AgentExecutor vs OpenAI Agents SDK
3.4 Practical Agent Development & Deployment
- Building agents with Python
- Tools and APIs integration
- Context Engineering for agents: system prompt design and goal framing
- Agent reliability: max iterations, fallback strategies, safe termination
- Hands-on Projects: Research Agent (ReAct + OpenAI SDK) and Email-Draft Agent with human approval gate (LangGraph)
Module 4: Multi-Agent Systems, A2A Protocol & Frameworks
Objective: Build production-grade multi-agent systems using CrewAI, AutoGen, and n8n. Implement A2A protocol for cross-framework agent interoperability.
4.1 Multi-Agent Systems
- Why multiple agents? Specialisation, parallelism, and fault isolation
- Communication patterns: broadcast, blackboard, supervisor, swarm
- Orchestrator vs subagent roles and responsibilities
- Inter-agent memory sharing and context passing
CrewAI & Multi-Agent Systems
- CrewAI architecture: Agents, Tasks, Tools, Crews
- Process types: sequential, hierarchical, parallel
- Designing agent roles, goals, and backstories
- CrewAI Flows: event-driven orchestration and multi-crew state management
- Hands-on projects:
- 3-agent Content Pipeline: Researcher + Writer + Editor
- Stock Picker Agent: research, analyse, and recommend investments
4.2 AutoGen — Conversational Multi-Agent & Code Agents
- AutoGen architecture: ConversableAgent, AssistantAgent, UserProxy
- GroupChat and GroupChatManager for multi-agent conversations
- Code execution agents: write, run, and debug code autonomously in Docker sandbox
- Agent Factory Pattern: reusable, scalable agent blueprints for any domain
- Specialised agents:
- Browser Automation Agent using Playwright
- Database Agent for data querying and analysis
- API Testing Agent
- Hands-on project: 4-agent Engineering Team — PM + Developer + Tester + Reviewer
4.3 A2A Protocol — Agent-to-Agent Communication
- What is the A2A Protocol? Google → Linux Foundation open standard
- A2A vs MCP: MCP = agent-to-tool communication, A2A = agent-to-agent communication
- A2A architecture:
- Agent Cards: capability discovery via JSON at well-known URLs
- Client-server model: A2A client (delegating agent) and A2A server (remote agent)
- Task lifecycle: states, progress tracking, result artifacts
- Authentication and secure inter-agent communication
- A2A ecosystem: 150+ enterprise partners including Salesforce, SAP, ServiceNow, LangChain
- Building A2A-compliant agents with CrewAI, LangGraph, and Google ADK
- Hands-on: two agents on different frameworks communicating via A2A protocol
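The Agent Card idea above can be illustrated with the rough shape of the JSON an A2A server publishes at a well-known URL so other agents can discover its capabilities. The field names below are an approximation for teaching purposes, not the normative A2A schema:

```python
import json

# Illustrative Agent Card shape (approximate, not the official A2A schema):
# served at a well-known URL such as /.well-known/agent.json
agent_card = {
    "name": "invoice-processor",
    "description": "Extracts and validates invoice data",
    "url": "https://agents.example.com/invoice-processor",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "extract", "description": "Extract line items from an invoice"},
    ],
}

# A delegating agent fetches this JSON, inspects the skills, and decides
# whether to hand a task off to this server.
print(json.dumps(agent_card, indent=2))
```

Capability discovery via a published card is what lets agents built on different frameworks (CrewAI, LangGraph, Google ADK) find and delegate to each other without hard-coded integrations.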
4.4 No-Code/Low-Code Agent Development
n8n — Visual Agentic Workflow Builder
- What is n8n? Visual workflow builder with AI nodes and triggers
- Integrations: Gmail, Google Sheets, Slack, Google Calendar, webhooks
- AI nodes: LLM calls, classification, summarisation, response generation
- No-code agent development: build workflows without writing Python
- Hands-on: email triage workflow — classify → respond → log in spreadsheet
Cloud deployment of n8n workflows
Module 5: Responsible AI & Evaluation
Objective: Understand ethical principles and safety frameworks for AI systems. Implement observability and evaluation pipelines. Apply technical guardrails in production agents.
5.1 Responsible AI and Evaluation
- Observability and evaluation:
- RAGAS deep evaluation: Answer Relevance, Context Precision, Faithfulness, Answer Correctness
- LangSmith end-to-end agent tracing and cost tracking
- AgentOps: cost dashboard, token budget tracking, latency profiling
- Prompt versioning and registry management
- Evaluation-driven iteration: diagnosing and fixing agent failures
- Ethical considerations and risk mitigation:
- Bias and fairness in LLM outputs: detection and mitigation
- Transparency, explainability, and accountability frameworks
- EU AI Act: risk tiers awareness (prohibited, high-risk, limited-risk, minimal-risk)
- Who is responsible when an agent fails? Accountability in enterprise AI
5.2 Security Threat Modeling for AI Agents
- Threat model for production agents: the 4 main attack surfaces
- Prompt injection: malicious instructions injected via user input or retrieved documents
- Indirect prompt injection: attacker-controlled content in the agent's context window
- Unsafe tool calls: agent tricked into executing destructive or unintended actions
- Data exfiltration: agent leaking sensitive retrieved content to untrusted endpoints
- Architectural defence patterns:
- Input sanitisation and allowlist-based tool access (read-only by default)
- Views-only execution patterns: never query base tables or execute write operations without approval
- Human approval gates: approve / edit / reject before agent executes high-risk actions
- Least-privilege tool design: agents get minimum permissions required
- Auditable tool execution: every tool call logged with inputs, outputs, timestamps
- Prompt injection: definition, real attack examples, and step-by-step defence strategies
- Adversarial prompting and jailbreak attempts — patterns and mitigations
- Security testing: red-teaming your agents before production release
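Several of the defence patterns above (allowlist-based tool access, human approval gates, auditable execution) can be combined in one small wrapper. This is an illustrative sketch, not a hardened implementation:

```python
import datetime

AUDIT_LOG = []
ALLOWED_TOOLS = {"read_ticket", "search_docs"}   # read-only by default

def execute_tool(tool_name, tool_input, approved=False):
    """Allowlist + audit-log wrapper around agent tool calls.
    Tools outside the allowlist are refused unless a human has approved
    the call; every call, allowed or refused, is logged with its inputs
    and a timestamp."""
    entry = {
        "tool": tool_name,
        "input": tool_input,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if tool_name not in ALLOWED_TOOLS and not approved:
        entry["result"] = "REFUSED: tool not on allowlist"
        AUDIT_LOG.append(entry)
        return entry["result"]
    entry["result"] = f"ran {tool_name}"          # real dispatch goes here
    AUDIT_LOG.append(entry)
    return entry["result"]

print(execute_tool("read_ticket", {"id": 7}))         # allowed: runs
print(execute_tool("delete_table", {"t": "users"}))   # refused: needs approval
```

The key property is that a prompt-injected instruction like "drop the users table" hits the allowlist before it hits the database, and the attempt itself is preserved in the audit log.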
5.3 Technical Guardrails Implementation
- Guardrails AI: input and output validation, topic rails
- NeMo Guardrails: conversation flows, topical rails, safety rails
- PII (Personally Identifiable Information) detection and masking
- Hallucination detection and factual grounding
- OpenAI Agents SDK guardrails: built-in safety parameters
- From ethics to code: translating responsible AI principles into guardrail design
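The PII detection and masking item above can be sketched with two regexes. Regex catches only the obvious surface patterns; production guardrails layer this with NER-based detectors:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def mask_pii(text):
    """Redact emails and phone numbers before text reaches the LLM or the
    logs. A minimal sketch: real guardrail stacks combine regex rules with
    ML-based entity detection to cover names, addresses, IDs, etc."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(mask_pii("Reach me at ravi@example.com or +91 88852 52627."))
```

Running this as an input rail keeps PII out of prompts (and therefore out of provider logs); running it as an output rail keeps retrieved PII from leaking into responses.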
Module 6: Production Deployment & MLOps
Objective: Instrument AI agent systems for production, deploy to live endpoints, and understand the MLOps landscape.
6.1 Practical Agent Deployment
- Building agents with Python, Tools, and APIs
- FastAPI: wrapping agents as REST API endpoints
- Docker: containerising agent applications
- Environment management: .env files, secrets, API key security
- Cloud deployment:
- HuggingFace Spaces — free, portfolio-ready deployment
- Render / Railway — simple backend deployment
- AWS EC2 + S3 — cloud deployment basics
- Streamlit and Gradio: building agent UIs for demos and production
6.2 Model Context Protocol (MCP)
- What is MCP? Anthropic's open standard for agent-to-tool communication
- MCP architecture: MCP server, MCP client, tool schema standardisation
- Building an MCP server hands-on: exposing tools via MCP
- MCP + A2A together: complete protocol stack for enterprise agents
6.3 Cost Optimisation & MLOps Awareness
- Semantic caching with GPTCache — reduce cost and latency
- LLM routing: strong model vs fast model based on query complexity
- CI/CD for AI: conceptual overview with a GitHub Actions example
- A/B testing agents: comparing prompt and model versions
- Drift detection: monitoring output quality over time
- What comes after this course: MLOps Engineering roadmap
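The semantic-caching idea above can be sketched end to end. A character-frequency vector stands in for a real embedding here so the example runs offline; the matching logic (serve a cached response when a new query is similar enough to a seen one) is the same idea libraries like GPTCache implement:

```python
def embed(text):
    """Toy 'embedding': a character-frequency vector. Real semantic caches
    use model embeddings; only the vector source differs."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec

def similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

CACHE = []  # list of (embedding, response) pairs

def cached_llm_call(query, llm, threshold=0.95):
    """Serve a cached response when a semantically similar query was already
    answered, saving the cost and latency of a second model call."""
    q_vec = embed(query)
    for vec, response in CACHE:
        if similarity(q_vec, vec) >= threshold:
            return response, True          # cache hit
    response = llm(query)
    CACHE.append((q_vec, response))
    return response, False                 # cache miss

fake_llm = lambda q: f"answer({q})"
print(cached_llm_call("What is RAG?", fake_llm))    # miss: calls the "model"
print(cached_llm_call("what is RAG??", fake_llm))   # near-duplicate: cache hit
```

The threshold is the key tuning knob: too low and users get stale answers to genuinely different questions, too high and the cache never hits.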
Module 7: Capstone Projects & Future of Agentic AI
Objective: Build and present an end-to-end production-grade AI system. Understand where the field is heading and plan your career roadmap.
7.1 Capstone Projects
- Students choose one capstone project from five options:
- Intelligent Document Assistant: multi-format RAG with Streamlit UI, citations, and ReAct agent
- Autonomous Research Reporter: CrewAI multi-agent — Researcher + Analyst + Writer
- AI Workflow Automation Bot: LangGraph agent with human approvals, MCP + A2A
- 4-Agent Engineering Team: AutoGen crew — PM + Developer + Tester + Reviewer
- Domain Expert Agent: RAG + tool agent in HR, finance, legal, or healthcare with guardrails
- Deliverables: working demo at live URL, RAGAS scores, architecture doc, GitHub repo, 5-min presentation
7.2 Future of Agentic AI & AGI
- Where Agentic AI is heading: reactive → proactive → autonomous organisations
- The evolving agent protocol stack:
- MCP: agent-to-tool communication standard (Anthropic, 2024)
- A2A: agent-to-agent communication standard (Google/Linux Foundation, 2025)
- What comes next: emerging protocols and standards
- AGI timeline: honest framing, current state, and what it means for practitioners
- Career pathways in Agentic AI Engineering:
- AI Engineer
- MLOps / AI DevOps Engineer
- AI Product Manager
- AI Solutions Architect
- Skills that will remain relevant vs what will be automated away
- Course 2 roadmap: Agentic AI in Production & MLOps Engineering