GENERATIVE AI + AGENTIC AI
Course Details
Subscribe and Access: 5200+ FREE Videos and 21+ Subjects like CRT, Soft Skills, Java, Hadoop, Microsoft .NET, Testing Tools, etc.
Batch Date: Oct 6th @ 9:00 PM
Faculty: Mr. Naveen Mourya (9+ years of experience)
Duration: 3.5 Months
Venue:
DURGA SOFTWARE SOLUTIONS,
Flat No: 202, 2nd Floor,
HUDA Maitrivanam,
Ameerpet, Hyderabad - 500038
Ph. No: +91-8885252627, 9246212143, 8096969696
Syllabus:
GENERATIVE AI + AGENTIC AI
Module 1: Python, Maths & Statistics Fundamentals
Objective: Build strong Programming, Maths & Statistical fundamentals for AI and data processing.
1.1 Python Basics:
- What is Python and why is it popular in AI?
- Identifiers, Keywords, Basic Data Types: int, float, string, complex & boolean
- Fundamental Data Types: [list], (tuple), {set} & {dict}; Basic Syntax, Advantages
- Operators, Operator precedence & Expressions
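Illustrative example (a minimal sketch of the data types and operators listed above; all values are made up):

```python
# Basic data types
age = 30                     # int
pi = 3.14159                 # float
name = "Durga"               # str
z = 2 + 3j                   # complex
is_ready = True              # bool

# Fundamental (collection) data types
scores = [85, 90, 78]        # [list]:  mutable, ordered
point = (10, 20)             # (tuple): immutable, ordered
tags = {"ai", "python"}      # {set}:   unique, unordered
student = {"name": name, "score": scores[0]}  # {dict}: key-value pairs

# Operator precedence: ** binds tighter than *, which binds tighter than +
print(2 + 3 * 2 ** 2)        # prints 14
```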
1.2 Control Flow:
- What are Control Flow Statements in Python? Advantages & Basic Syntax
- Conditional Statements (if, if-elif, if-else & if-elif-else)
- Transfer Statements (break, continue & pass)
- Iterative Statements (for & while)
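Illustrative example (a small sketch combining the conditional, transfer, and iterative statements above):

```python
# for loop with if-elif-else
for n in range(1, 11):
    if n % 15 == 0:
        print(n, "FizzBuzz")
    elif n % 3 == 0:
        print(n, "Fizz")
    elif n % 5 == 0:
        print(n, "Buzz")
    else:
        pass                 # pass: placeholder that does nothing

# while loop with break and continue
count = 0
while True:
    count += 1
    if count == 3:
        break                # break: exit the loop
    if count == 1:
        continue             # continue: skip to the next iteration
    print("count =", count)  # prints only "count = 2"
```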
1.3 Functions & Modules:
- Functions: In built & User Defined Functions
- Parameters, Return Statements, Types of arguments & Variables
- Recursive, Nested & Lambda Functions; filter(), map() (with and without lambda) & reduce()
- Function Aliasing, import concept and Function vs Module vs Library
- Standard Modules: datetime, os, math, random, re, json, requests & use cases
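Illustrative example (a minimal sketch of user-defined, recursive, and lambda functions with filter()/map()/reduce(), function aliasing, and a standard module):

```python
from functools import reduce
from datetime import date

def factorial(n):            # recursive user-defined function
    return 1 if n <= 1 else n * factorial(n - 1)

square = lambda x: x * x     # lambda: anonymous one-line function
nums = [1, 2, 3, 4, 5]

evens   = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
squares = list(map(square, nums))                   # [1, 4, 9, 16, 25]
total   = reduce(lambda a, b: a + b, nums)          # 15

fact = factorial             # function aliasing
print(fact(5), evens, squares, total, date.today())
```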
1.4 File Handling & Exception Handling:
- What is File Handling: Importance in AI
- Types of Files (Text & Binary), Opening & Closing a File, Reading Data from and Writing Data to Text Files
- The with Statement; the seek() and tell() Methods
- Handling CSV Files: Writing Data to and Reading Data from CSV Files
- Handling JSON Files: Writing Data to and Reading Data from JSON Files
- Exception Handling: What is an Exception, Default Exception Handling in Python
- Calling REST APIs using the json and requests modules
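Illustrative example (a minimal sketch of the topics above; the file names are made up and the REST endpoint is just a public example URL):

```python
import csv, json, requests

# The with statement closes the file automatically, even on error.
with open("students.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "score"])
    writer.writerow(["Ravi", 92])

try:
    with open("students.csv") as f:
        for row in csv.DictReader(f):
            print(row["name"], row["score"])
    with open("course.json", "w") as f:
        json.dump({"course": "GenAI", "months": 3.5}, f)  # dict -> JSON file
    resp = requests.get("https://api.github.com")          # simple REST GET
    print(resp.status_code, list(resp.json())[:3])         # parse JSON reply
except FileNotFoundError as e:   # specific handler first
    print("File missing:", e)
except Exception as e:           # default/fallback handler
    print("Unexpected error:", e)
```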
1.5 Libraries:
NumPy (for tensors, embeddings, vector math)
- NumPy Basics: What is NumPy and why is it important in ML/AI, NumPy installation
- Arrays: 1D, 2D, and higher dimensions, Applications
- Difference between Python list and NumPy array
- NumPy Advanced: Array indexing and slicing, Mathematical operations, Reshape, flatten, Broadcasting
- Useful functions: arange(), linspace(), eye(), ones(), zeros()
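Illustrative example (a minimal sketch of the array operations and helper functions above):

```python
import numpy as np

a = np.array([1, 2, 3])              # 1D array
m = np.arange(6).reshape(2, 3)       # 2D array built with arange + reshape
print(m[:, 1])                       # slicing: second column -> [1 4]
print(a * 10)                        # vectorized math (no Python loop)
print(m + a)                         # broadcasting: a is stretched row-wise
print(m.flatten())                   # back to 1D
print(np.linspace(0, 1, 5))          # 5 evenly spaced points in [0, 1]
print(np.eye(3), np.ones((2, 2)), np.zeros(3))
```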
Pandas (for cleaning and preparing tabular/nested data)
- Pandas Basics: What is Pandas and use cases
- Series and DataFrames, Creating DataFrames from dict/list/CSV, Viewing data: head(), tail(), info(), describe()
- Pandas Advanced: Indexing, slicing, filtering, Adding/deleting columns
- Aggregations: groupby(), sum(), mean()
- Handling missing values: isna(), fillna(), replace()
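Illustrative example (a minimal sketch of the Pandas operations above; the data is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "name":  ["Ravi", "Sita", "Amar", "Lila"],
    "dept":  ["AI", "AI", "ML", "ML"],
    "score": [92, None, 75, 88],          # None becomes NaN
})
df.info()                                  # column types, non-null counts
print(df.head())
print(df.describe())
print(df[df["score"] > 80])                # filtering rows
df["passed"] = df["score"] > 60            # adding a column
df["score"] = df["score"].fillna(df["score"].mean())  # handle missing values
print(df.groupby("dept")["score"].mean()) # aggregation per department
```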
Matplotlib (for visualizing model performance & embeddings)
- Matplotlib Basics: Introduction to Matplotlib, pyplot and plotting syntax
- Line plots, bar plots, scatter plots with Titles, labels, legends
- Matplotlib Advanced: Subplots, Histograms, Pie charts with Styling: colors, markers, line types
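Illustrative example (a minimal sketch plotting made-up training curves with subplots, styling, titles, labels, and legends):

```python
import matplotlib.pyplot as plt

epochs = [1, 2, 3, 4, 5]
loss   = [0.9, 0.6, 0.4, 0.3, 0.25]
acc    = [0.55, 0.70, 0.80, 0.85, 0.88]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))  # subplots
ax1.plot(epochs, loss, "r--o", label="loss")          # styled line plot
ax1.set_title("Training loss"); ax1.set_xlabel("epoch"); ax1.legend()
ax2.bar(epochs, acc, color="green", label="accuracy") # bar plot
ax2.set_title("Accuracy"); ax2.set_xlabel("epoch"); ax2.legend()
plt.tight_layout()
plt.show()
```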
1.6 Linear Algebra
- Vectors, Matrices, Matrix Operations
- Eigen Vectors & Eigenvalues
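Illustrative example (a minimal NumPy sketch of matrix operations and eigen-decomposition):

```python
import numpy as np

A = np.array([[2, 0],
              [0, 3]])
v = np.array([1, 1])

print(A @ v)                    # matrix-vector product -> [2 3]
print(A @ A)                    # matrix-matrix product
vals, vecs = np.linalg.eig(A)   # eigenvalues and eigenvectors
print(vals)                     # [2. 3.]
print(vecs)                     # columns are the eigenvectors
```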
1.7 Probability & Statistics
- Probability, Conditional Probability & Distributions
- Statistical Measures (Z-score, Skewness, Kurtosis, Geometric Distribution)
- Bias, Variance, Standard Deviation & Covariance
- Population, Sample, Data Types, Sampling Methods & Variables
- Measure of Central Tendency, Symmetry, Spread & Variability
- Hypothesis Testing (Null & Alternative Hypothesis, Type-I & II Errors)
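Illustrative example (a minimal sketch of the measures and hypothesis test above; requires SciPy, and the sample data is made up):

```python
import numpy as np
from scipy import stats

sample = np.array([52, 48, 50, 55, 53, 49, 51, 54])

print(np.mean(sample), np.median(sample))        # central tendency
print(np.std(sample), np.var(sample))            # spread and variability
z = (sample - sample.mean()) / sample.std()      # z-scores
print(stats.skew(sample), stats.kurtosis(sample))

# One-sample t-test: null hypothesis H0 says the population mean is 50.
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)
print(t_stat, p_value)   # reject H0 if p_value < 0.05 (Type-I error risk)
```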
Module 2: Introduction to Artificial Intelligence
Objective: Understand the foundations of AI, its history, and applications.
- What is Artificial Intelligence?
- History and Evolution of AI
- Types of AI: Narrow AI, General AI, and Superintelligent AI
- Applications of AI in Real World
- AI vs Machine Learning vs Deep Learning vs Generative AI
Module 3: Machine Learning Foundations
Objective: Build a strong foundation in ML concepts and techniques.
- Introduction to Machine Learning
- Types of Machine Learning: Supervised, Unsupervised, Reinforcement Learning
- Data Preprocessing: Cleaning, Normalization, Feature Engineering
- Train-Test Split, Cross-Validation
- Evaluation Metrics: Accuracy, Precision, Recall, F1-score, ROC-AUC
- ML Algorithms Overview:
- Linear Regression, Logistic Regression
- Decision Trees, Random Forests, Gradient Boosting
- KNN, Naive Bayes, SVM
- Clustering: K-means, Hierarchical
- Bias-Variance Tradeoff
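Illustrative example (a minimal scikit-learn sketch covering train-test split, cross-validation, and the evaluation metrics above, using a built-in dataset):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)        # train-test split

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

pred  = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("F1:      ", f1_score(y_test, pred))
print("ROC-AUC: ", roc_auc_score(y_test, proba))
print("5-fold CV:", cross_val_score(model, X, y, cv=5).mean())
```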
Module 4: Deep Learning & Neural Networks
Objective: Build Deep Learning foundations, understand architectures, and apply them using TensorFlow and PyTorch.
4.1 Introduction & Foundations
- Introduction to Deep Learning
- History of Deep Learning
- Applications of Deep Learning
4.2 Neural Networks Basics
- Perceptron: Simple Neural Network Explained
- Activation Functions: Sigmoid, Tanh, ReLU, Leaky ReLU, Softmax
- Loss Functions:
- Classification: Cross-Entropy Loss, Hinge Loss
- Regression: Mean Squared Error (MSE), Mean Absolute Error (MAE), Huber Loss
4.3 Training Fundamentals
- Gradient Descent and Types: Batch, Stochastic, Mini-Batch
- Optimizers: SGD, Adam, RMSProp
- Parameters vs Hyperparameters
4.4 Frameworks & Practice
- Introduction to TensorFlow and PyTorch
- Hands-on exercises with both frameworks
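Illustrative example (a minimal PyTorch training loop tying together a loss function, the Adam optimizer, and gradient descent on made-up data; TensorFlow offers an equivalent Keras workflow):

```python
import torch
import torch.nn as nn

# Toy regression data: y = 2x + 1 with a little noise.
X = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * X + 1 + 0.05 * torch.randn_like(X)

model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()                       # regression loss (MSE)
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(200):                     # mini training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                          # backpropagation
    opt.step()                               # gradient descent step
print("final loss:", loss.item())
```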
4.5 Regularization & Model Generalization
- Concept of Underfitting and Overfitting
- Regularization Techniques: Dropout, Early Stopping, Batch Normalization, L1/L2 Regularization
4.6 Hardware for Deep Learning
- Introduction to GPUs
- Types of GPUs and their importance in Deep Learning
4.7 Projects
- Five practical Deep Learning projects covering classification, regression, image data, and text data
Module 5: Natural Language Processing (NLP)
Objective: Enable machines to understand and process human language.
5.1 Introduction to NLP
- What is NLP and why is it important?
- History of NLP and real-world applications
5.2 Text Pre-processing
- Lowercasing
- Punctuation Removal
- Stopword Removal
- Stemming
- Lemmatization
- POS Tagging (Parts of Speech Tagging)
- Named Entity Recognition (NER)
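Illustrative example (a minimal NLTK sketch of the pre-processing steps above; the exact nltk.download() resource names vary slightly across NLTK versions):

```python
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("punkt"); nltk.download("stopwords")
nltk.download("wordnet"); nltk.download("averaged_perceptron_tagger")

text = "The cats are running quickly through the gardens!"
tokens = nltk.word_tokenize(text.lower())                 # lowercase + tokenize
tokens = [t for t in tokens if t not in string.punctuation]
tokens = [t for t in tokens if t not in stopwords.words("english")]

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
print([stemmer.stem(t) for t in tokens])        # stemming: running -> run
print([lemmatizer.lemmatize(t) for t in tokens])  # lemmatization
print(nltk.pos_tag(tokens))                     # POS tagging
```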
5.3 Text Vectorization
- Bag of Words (BoW)
- N-Grams
- TF-IDF (Term Frequency – Inverse Document Frequency)
- One-Hot Encoding
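Illustrative example (a minimal scikit-learn sketch of BoW, N-grams, and TF-IDF on two made-up documents):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat on the mat",
        "the dog sat on the log"]

bow = CountVectorizer()                        # Bag of Words counts
print(bow.fit_transform(docs).toarray())
print(bow.get_feature_names_out())             # the vocabulary

bigrams = CountVectorizer(ngram_range=(2, 2))  # N-grams (here: bigrams)
print(bigrams.fit_transform(docs).toarray())

tfidf = TfidfVectorizer()                      # TF-IDF weighting
print(tfidf.fit_transform(docs).toarray().round(2))
```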
5.4 Word Embeddings
5.5 Sequence-to-Sequence Models
- RNN (Recurrent Neural Networks)
- LSTM (Long Short-Term Memory)
- GRU (Gated Recurrent Unit)
- Encoder-Decoder Architecture explained
- Applications of Seq2Seq models (translation, summarization, chatbots, etc.)
- (A mini project and exercises on this topic are covered by default; see the LSTM sketch below.)
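Illustrative example (a minimal sketch of an LSTM layer's input/output shapes in PyTorch; the dimensions are arbitrary):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
x = torch.randn(4, 10, 16)        # (batch, seq_len, features)
out, (h, c) = lstm(x)
print(out.shape)                  # torch.Size([4, 10, 32]): every time step
print(h.shape)                    # torch.Size([1, 4, 32]):  final hidden state
```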
Module 6: Transformers & Modern NLP
- Introduction to Transformers
- Attention Mechanism
- Self-Attention Explained
- Transformer Architecture (Encoder, Decoder)
- BERT, GPT, T5 and other Transformer Models
- Hugging Face Transformers Library
- Fine-tuning Pre-trained Models
- Applications of Transformers in NLP
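Illustrative example (a minimal Hugging Face pipeline sketch; default checkpoints are downloaded on first use, and gpt2 is just a small example model):

```python
from transformers import pipeline

clf = pipeline("sentiment-analysis")          # pre-trained classifier
print(clf("Transformers make modern NLP much easier!"))

gen = pipeline("text-generation", model="gpt2")
print(gen("Attention is", max_new_tokens=20)[0]["generated_text"])
```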
Module 7: Generative AI & Large Language Models (LLMs)
Objective: Build strong foundations in Generative AI with a focus on text-based LLMs.
7.1 Introduction to Generative AI
- What is Generative AI?
- Why Generative AI matters in text-based applications
- Generative Models vs Discriminative Models
7.2 Large Language Models (LLMs)
- What is a Large Language Model?
- How LLMs are trained and work at a high level
- Categorization of LLMs (Open-source vs Proprietary, General-purpose vs Domain-specific, etc.)
- Popular LLM families: GPT, LLaMA, Falcon, Mistral, Claude, Gemini, etc.
- Companies providing LLMs (OpenAI, Meta, Anthropic, Google DeepMind, Mistral, Cohere, etc.)
7.3 Applications of Text-based Generative AI
- Chatbots & Conversational AI
- Text Summarization
- Text Generation (content creation, code generation, etc.)
- Q&A Systems
7.4 Practical Understanding
- Introduction to Tokenization and Embeddings (basic overview)
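Illustrative example (a minimal sketch of tokenization and embeddings using bert-base-uncased as an example checkpoint):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

print(tok.tokenize("Generative AI"))           # subword tokens
ids = tok("Generative AI", return_tensors="pt")
print(ids["input_ids"])                        # token ids

with torch.no_grad():
    out = model(**ids)
print(out.last_hidden_state.shape)             # (1, seq_len, 768) embeddings
```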
Module 8: Building Applications with LangChain & RAG
Objective: Learn how to build practical applications with LLMs using LangChain and Retrieval-Augmented Generation (RAG).
8.1 Creating Applications with LLMs
- How LLM-based applications are structured
- Connecting LLMs with external data and tools
- Designing interactive applications with LLMs
8.2 Introduction to LangChain
- What is LangChain and why do we need it?
- LangChain architecture and workflow
8.3 LangChain Components
- Chains: LLMChain, SimpleSequentialChain, SequentialChain, ConversationChain
- Prompt Templates: Standard, Few-shot, Zero-shot, and Custom templates
- Memory Types:
- ConversationBufferMemory
- ConversationSummaryMemory
- ConversationBufferWindowMemory
- VectorStoreRetrieverMemory
- Agents & Tools: How LangChain integrates tools for dynamic reasoning
- Document Loaders & Text Splitters: Preparing external knowledge for LLMs
8.4 Retrieval-Augmented Generation (RAG)
- What is RAG and why it’s needed
- Components of RAG:
- Retriever
- Vector Database (FAISS, Pinecone, Weaviate)
- LLM Integration for answering queries
- Workflow of RAG (Step-by-step explanation)
- Applications of RAG in chatbots, search, and knowledge assistants
8.5 Hands-on Applications
- Build a simple Q&A chatbot with LangChain
- Implement RAG with a vector database
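Illustrative example (a minimal RAG sketch, assuming the langchain-community / langchain-openai packaging and an OPENAI_API_KEY in the environment; import paths and retriever methods vary across LangChain versions, and gpt-4o-mini is just an example model name):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

texts = ["LangChain chains LLM calls together.",
         "RAG retrieves relevant chunks before generation."]
chunks = RecursiveCharacterTextSplitter(
    chunk_size=200, chunk_overlap=20).create_documents(texts)  # text splitter

store = FAISS.from_documents(chunks, OpenAIEmbeddings())  # vector database
retriever = store.as_retriever(search_kwargs={"k": 2})    # retriever

llm = ChatOpenAI(model="gpt-4o-mini")
question = "What does RAG do?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
answer = llm.invoke(f"Answer only from this context:\n{context}\n\nQ: {question}")
print(answer.content)
```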
Module 9: LlamaIndex & Fine-Tuning of LLMs
Objective: Understand how to structure data pipelines with LlamaIndex and customize LLMs through fine-tuning techniques.
9.1 Introduction to LlamaIndex
- What is LlamaIndex?
- Why LlamaIndex is used in GenAI applications
- High-level architecture of LlamaIndex
9.2 Components of LlamaIndex
- Data Connectors (ingesting data from different sources)
- Indexes (Vector Index, List Index, Tree Index, Keyword Table Index)
- Query Engines (retrieval mechanisms)
- Storage Context & Persistence
- Integration with LLMs
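Illustrative example (a minimal sketch, assuming the modern llama_index.core packaging, a local ./data folder of documents, and an OpenAI key for the default LLM and embeddings):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

docs = SimpleDirectoryReader("data").load_data()   # data connector
index = VectorStoreIndex.from_documents(docs)      # vector index

index.storage_context.persist("storage")           # storage & persistence

engine = index.as_query_engine()                   # query engine over the LLM
print(engine.query("Summarize the documents."))
```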
9.3 Introduction to Fine-Tuning LLMs
- What is Fine-Tuning?
- Why fine-tuning is needed in enterprise and domain-specific contexts
- Fine-tuning vs Prompt Engineering
9.4 Types of Fine-Tuning
- Full Fine-Tuning: Updating all model parameters
- Partial Fine-Tuning (PEFT): Updating only small parameter-efficient layers
- Popular PEFT Techniques:
- LoRA (Low-Rank Adaptation)
- QLoRA (Quantized LoRA for memory-efficient fine-tuning)
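Illustrative example (a minimal PEFT/LoRA sketch; gpt2 and its c_attn attention projection are small-model examples, and the rank/alpha values are illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # small example model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # which layers receive LoRA adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()   # tiny fraction of the full model
```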
9.5 Practical Insights
- Trade-offs between full and partial fine-tuning
- Choosing the right fine-tuning strategy for your use case
Module 10: Capstone Projects on Cloud Platforms
Objective: Apply everything learned to real-world Generative AI projects deployed on cloud platforms.
10.1 Journey Recap
- By this stage, learners will have completed 25+ hands-on projects across ML (2), DL (5), NLP (4), Transformers (4), LLMs & LangChain (8), RAG (6), LlamaIndex (4) and Fine-tuning (1), apart from the exercises.
- Now we focus on end-to-end Capstone Projects hosted on cloud platforms.
10.2 Capstone Project on AWS (Amazon Web Services)
- Introduction to AWS Cloud
- Overview of AWS services relevant to AI/ML application building:
- Compute: EC2, Lambda
- Storage: S3
- Other Integrations (API Gateway, CloudWatch)
- Capstone Project: Building and deploying a Generative AI Chatbot on AWS
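Illustrative example (a minimal sketch of a Lambda handler behind API Gateway that logs a prompt to S3; the bucket name is hypothetical and the real LLM call is stubbed out):

```python
import json
import boto3

s3 = boto3.client("s3")   # credentials come from the Lambda execution role

def lambda_handler(event, context):
    """Receive a chatbot prompt via API Gateway, log it to S3, reply."""
    prompt = json.loads(event["body"])["prompt"]
    s3.put_object(Bucket="my-genai-bucket",        # hypothetical bucket
                  Key="logs/last_prompt.txt",
                  Body=prompt.encode())
    # The actual Generative AI model call would go here.
    return {"statusCode": 200,
            "body": json.dumps({"reply": f"You asked: {prompt}"})}
```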
10.3 Capstone Project on GCP (Google Cloud Platform)
- Introduction to Google Cloud
- Overview of GCP services relevant to AI/ML application building:
- Compute: Compute Engine, Cloud Run
- Storage: Cloud Storage (GCS)
- Databases: BigQuery
- AI/ML: Vertex AI
- Other Integrations (Pub/Sub, Cloud Functions)
- Capstone Project: Building and deploying a Generative AI Chatbot on GCP
Module 11: Prompt Engineering
Objective: To enable users to effectively communicate with and control AI models to achieve desired outcomes.
- Core prompting techniques
- Zero-shot prompting, Few-shot prompting, Chain-of-Thought (CoT) prompting, Tree-of-Thought (ToT) prompting
- Advanced prompting strategies
- Persona-based prompting, Meta-prompting, Constraint-based prompting, Negative prompting, Retrieval-Augmented Generation (RAG)
- Practical applications and real-world tools
- Text generation, Code assistance, Multimodal prompting, AI development platforms
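Illustrative example (a model-agnostic sketch assembling a few-shot, chain-of-thought style prompt as a plain string; the reviews are made up):

```python
# Few-shot + chain-of-thought prompt construction, ready for any LLM API.
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
query = "The food was okay but the wait was too long."

prompt = "Classify the sentiment. Think step by step, then answer.\n\n"
for text, label in examples:                 # few-shot demonstrations
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"     # the new, unlabeled case
print(prompt)                                # send this string to an LLM
```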
Module 12: Agentic AI
Objective: Enable systems to operate autonomously, make complex decisions independently, adapt to changing environments, and solve multi-step problems with minimal human intervention.
- Introduction to Agentic AI
- What is Agentic AI?
- Core AI agent building blocks: Perception, Cognition/Reasoning, Planning, Action, Memory, Adaptability & Learning
- Agentic AI architectures and design patterns
- Architectural concepts
- Key design patterns - ReAct, Reflection, Tool Use & Human-in-the-Loop (HITL)
- Foundational frameworks and technologies
- LangChain and LangGraph, CrewAI & Multi-Agent Systems
- Practical agent development and deployment
- Building agents with Python, Tools and APIs, No-code/low-code agent Development & Cloud deployment
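Illustrative example (a toy ReAct-style loop in plain Python with the LLM call stubbed out, so the reason -> act -> observe cycle and the loop cap, a natural human-in-the-loop hook, are visible; frameworks like LangGraph or CrewAI automate this plumbing):

```python
def fake_llm(history):                     # stand-in for a real LLM call
    if "Observation" not in history:
        return "Thought: I need the price.\nAction: lookup_price[laptop]"
    return "Thought: I have the answer.\nFinal Answer: Rs. 55,000"

TOOLS = {"lookup_price": lambda item: "Rs. 55,000"}   # a mock tool

history = "Question: How much does the laptop cost?"
for _ in range(5):                          # cap iterations for safety
    reply = fake_llm(history)
    history += "\n" + reply
    if "Final Answer:" in reply:
        print(reply.split("Final Answer:")[1].strip())
        break
    tool, arg = reply.split("Action: ")[1].split("[")  # parse the action
    obs = TOOLS[tool.strip()](arg.rstrip("]"))         # run the tool
    history += f"\nObservation: {obs}"                 # feed result back
```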
- Responsible AI and evaluation
- Observability and evaluation; ethical considerations and risk mitigation
- Future of Agentic AI & AGI