Generative AI - LLM and Beyond
Published 8/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 5.98 GB | Duration: 11h 52m
LLM Lifecycle, Prompt Engineering, LLM Properties, Fine-tuning, PEFT LoRA, RLHF, RAG, PPO, DPO, ORPO, AI for Vision
What you'll learn
LLAMA 2
CHATGPT
LARGE LANGUAGE MODEL
PROMPT ENGINEERING
LLM FINE TUNING
RAG
RLHF
LLM USE CASES
LLM BASICS
LLM FOR EVERYONE
LLM Based chatbot
chatbot
Instruction fine tuning
In-context learning
Few-shot inference
Hallucination
Reinforcement learning from human feedback
Retrieval Augmented Generation
Tools for reasoning
Agents
Augmentation
Automation
Transformers
GEN-AI
GENERATIVE AI
ARTIFICIAL INTELLIGENCE
DATA SCIENCE
MACHINE LEARNING
DEEP LEARNING
LANGCHAIN
LLAMAINDEX
Low-Rank Adaptation
LORA
METRICS
PPO
DPO
ORPO
PDF RAG
CSV RAG
Requirements
PYTHON
NLP
MACHINE LEARNING BASICS
Description
Generative AI: From Fundamentals to Advanced Applications

This comprehensive course is designed to equip learners with a deep understanding of Generative AI, particularly focusing on Large Language Models (LLMs) and their applications. You will delve into the core concepts, practical implementation techniques, and ethical considerations surrounding this transformative technology.

What You Will Learn:
Foundational Knowledge: Grasp the evolution of AI, understand the core principles of Generative AI, and explore its diverse use cases.
LLM Architecture and Training: Gain insights into the architecture of LLMs, their training processes, and the factors influencing their performance.
Prompt Engineering: Master the art of crafting effective prompts to maximize LLM capabilities and overcome limitations.
Fine-Tuning and Optimization: Learn how to tailor LLMs to specific tasks through fine-tuning and explore techniques like PEFT and RLHF.
RAG and Real-World Applications: Discover how to integrate LLMs with external knowledge sources using Retrieval Augmented Generation (RAG) and explore practical applications.
Ethical Considerations: Understand the ethical implications of Generative AI and responsible AI practices.

By the end of this course, you will be equipped to build and deploy robust Generative AI solutions, addressing real-world challenges while adhering to ethical guidelines. Whether you are a data scientist, developer, or business professional, this course will provide you with the necessary skills to thrive in the era of Generative AI.

Course Structure:
The course is structured into 12 sections, covering a wide range of topics from foundational concepts to advanced techniques. Each section includes multiple lectures, providing a comprehensive learning experience.
Section 1: Introduction to Generative AI
Section 2: LLM Architecture and Resources
Section 3: Generative AI LLM Lifecycle
Section 4: Prompt Engineering Setup
Section 5: LLM Properties
Section 6: Prompt Engineering Basic Guidelines
Section 7: Better Prompting Techniques
Section 8: Full Fine-Tuning
Section 9: PEFT - LoRA
Section 10: RLHF
Section 11: RAG
Section 12: Generative AI for Vision (Preview)
Overview
Section 1: Introduction
Lecture 1 What is Generative AI
Lecture 2 What was before GENAI
Lecture 3 GEN AI TOOLS
Lecture 4 Better use of GEN AI
Lecture 5 GEN AI Use Case: Writing
Lecture 6 GEN AI Use Case: Reading
Lecture 7 GEN AI Use Case: Chatting
Lecture 8 How to get Better Results from LLM
Lecture 9 Responsible AI
Section 2: LLM Shape, Size, and Resource Needs
Lecture 10 Augmentation vs Automation
Lecture 11 The Kaplan Paper
Lecture 12 The Chinchilla Paper
Lecture 13 Transformers
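As a quick orientation for the Kaplan and Chinchilla lectures above, here is a back-of-the-envelope sketch of the compute-optimal rule of thumb from the Chinchilla paper (roughly 20 training tokens per parameter, with training compute approximated as C = 6 * N * D). The model sizes below are illustrative, not figures from the course.

# Back-of-the-envelope Chinchilla-style estimate: compute-optimal training
# uses roughly 20 tokens per parameter, and training FLOPs are approximately
# C = 6 * N * D (N = parameters, D = training tokens).
def chinchilla_estimate(n_params: float) -> tuple[float, float]:
    optimal_tokens = 20 * n_params                  # rule-of-thumb token budget
    train_flops = 6 * n_params * optimal_tokens
    return optimal_tokens, train_flops

for n in (7e9, 70e9):                               # illustrative 7B and 70B models
    tokens, flops = chinchilla_estimate(n)
    print(f"{n / 1e9:.0f}B params -> ~{tokens / 1e12:.2f}T tokens, ~{flops:.2e} FLOPs")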
Section 3: Generative AI LLM lifecycle
Lecture 14 GEN AI LIFE CYCLE
Lecture 15 RAG INTRO
Lecture 16 Fine-tuning model intuition
Lecture 17 RLHF Intuition
Lecture 18 Tools & Agents
Section 4: Prompt Engineering - Setup and Prompt Templates
Lecture 19 Prompt Engineering - Introduction
Lecture 20 LLM configuration parameters
Lecture 21 Llama 2 vs Llama 2 Chat
Lecture 22 Setup using Llama 2
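To give a flavor of this section's setup, prompt-template, and configuration-parameter lectures, here is a minimal sketch using the Hugging Face transformers pipeline. The model name is an assumption, and Llama 2 weights require accepting Meta's license on the Hugging Face Hub.

# Sketch: build a Llama 2 chat-format prompt and generate with explicit
# configuration parameters. Model name and system prompt are assumptions.
from transformers import pipeline

def llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    # Llama 2 chat models expect [INST] ... [/INST] blocks with an optional
    # <<SYS>> ... <</SYS>> system prompt inside the first instruction.
    return (f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]")

generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

prompt = llama2_chat_prompt(
    "You are a concise assistant. Answer in two sentences.",
    "Explain what a large language model is.",
)
output = generator(
    prompt,
    max_new_tokens=128,   # cap on generated tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # lower = more deterministic
    top_p=0.9,            # nucleus sampling cutoff
)
print(output[0]["generated_text"])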
Section 5: LLM Properties
Lecture 23 Stateless LLMs
Lecture 24 Base LLM VS Fine Tuned LLM
Lecture 25 System Prompts
Lecture 26 Quantized models
Lecture 27 Quantized Models Notebook
Lecture 28 AWQ setup and notebook usage
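The quantized-model lectures can be previewed with a minimal sketch of 4-bit loading via bitsandbytes in the Hugging Face transformers API (the AWQ lecture instead uses pre-quantized AWQ checkpoints). The model name and compute dtype are assumptions.

# Sketch: load a model with 4-bit bitsandbytes quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,     # run matmuls in fp16
)

model_id = "meta-llama/Llama-2-7b-chat-hf"    # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)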
Section 6: Prompt Engineering Basic Guidelines
Lecture 29 Check Conditions & assumptions
Lecture 30 Clear Instructions & Delimiters
Lecture 31 Specific Output Structure
Lecture 32 Few Shot Prompting
Lecture 33 Give time to think
Lecture 34 Hallucination
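To make this section's guidelines concrete, here is a minimal sketch combining two of them, clear delimiters and few-shot prompting; the reviews and labels are made-up illustrations.

# Sketch: a few-shot classification prompt with triple-backtick delimiters.
few_shot_prompt = """Classify the sentiment of the review between triple backticks as Positive or Negative.

Review: ```The battery lasts all day and the screen is gorgeous.```
Sentiment: Positive

Review: ```Stopped working after a week and support never replied.```
Sentiment: Negative

Review: ```Setup took five minutes and it just works.```
Sentiment:"""
print(few_shot_prompt)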
Section 7: Better Prompting Techniques
Lecture 35 Iterative Prompting
Lecture 36 Issues while summarizing
Lecture 37 Summarizing
Lecture 38 Inference
Lecture 39 Transformation
Lecture 40 Expanding
Lecture 41 Prompt Tuning
Section 8: Full Fine-Tuning
Lecture 42 LLM FINE TUNING
Lecture 43 GLUE & SuperGLUE
Lecture 44 HELM
Lecture 45 LLM FINE TUNING Implementation
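A minimal sketch of what a full fine-tuning run can look like with the Hugging Face Trainer, using the GLUE SST-2 task to echo the GLUE lecture; the model, dataset, and hyperparameters are illustrative assumptions, not the course's exact setup.

# Sketch: full fine-tuning of a small encoder on GLUE SST-2 with Trainer.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()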
Section 9: PEFT - LoRA
Lecture 46 PEFT
Lecture 47 QLoRA
Lecture 48 PEFT Implementation
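A minimal sketch of the LoRA idea in this section, using the Hugging Face peft library; the base model, rank, and target modules are assumptions that depend on the architecture being adapted.

# Sketch: attach LoRA adapters so only low-rank update matrices are trained.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                    # rank of the low-rank update matrices
    lora_alpha=16,                          # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()          # only a small fraction is trainable

For QLoRA, the same LoRA configuration is typically attached to a base model loaded in 4-bit, as in the quantization sketch above.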
Section 10: RLHF
Lecture 49 PPO
Lecture 50 DPO VS ORPO
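The DPO lecture's core objective can be sketched directly: prefer the chosen response over the rejected one by a margin in log-probability space, measured relative to a frozen reference model. The tensors below are random placeholders for per-example log-probabilities.

# Sketch of the DPO loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    chosen_ratio = policy_chosen_logp - ref_chosen_logp          # policy vs. reference
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
print(loss)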
Section 11: RAG
Lecture 51 Using LangChain with Ollama to perform RAG with PDFs
Lecture 52 RAG With CSV File
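A minimal sketch of the PDF RAG pipeline described in this section, using LangChain with Ollama; module paths follow recent langchain-community releases and may differ by version, and the PDF path and model names are assumptions.

# Sketch: load a PDF, chunk it, embed it with Ollama, and answer questions over it.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter

docs = PyPDFLoader("example.pdf").load()          # illustrative PDF path
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100).split_documents(docs)

vectorstore = Chroma.from_documents(chunks, OllamaEmbeddings(model="llama2"))
qa = RetrievalQA.from_chain_type(llm=Ollama(model="llama2"),
                                 retriever=vectorstore.as_retriever())

print(qa.invoke({"query": "Summarize the document in three bullet points."}))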
Section 12: GEN AI for Vision - up next
Lecture 53 Image prompt engineering
Lecture 54 Stable Diffusion
Lecture 55 Stable Diffusion model training methods
Lecture 56 Stable Diffusion Resources
Lecture 57 FORGE setup
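As a preview of the vision material, here is a minimal Stable Diffusion sketch using the Hugging Face diffusers library (the FORGE lecture presumably covers the Stable Diffusion WebUI Forge, a separate tool from this API); the model checkpoint and prompt are assumptions.

# Sketch: text-to-image generation with a Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

image = pipe("a watercolor illustration of a robot reading a book",
             num_inference_steps=30,    # denoising steps
             guidance_scale=7.5).images[0]
image.save("robot.png")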
Who this course is for
Data Scientists, ML Practitioners