Hands-on Generative AI Engineering with Large Language Model

Posted By: lucky_aut
Published 8/2024
Duration: 6h 18m | MP4, 1280x720, 30 fps | AAC, 44100 Hz, 2ch | 2.76 GB
Genre: eLearning | Language: English

Implementing Transformer, Training, Fine-tuning | GenAI applications: AI Assistant, Chatbot, RAG, Agent | Deployment


What you'll learn
Understanding how to build, implement, train, and perform inference on a Large Language Model, such as the Transformer ("Attention Is All You Need"), from scratch.
Gaining knowledge of the different components, tools, and frameworks required to build an LLM-based application.
Learning how to serve and deploy your LLM-based application from scratch.
Engaging in hands-on technical implementations: notebooks, Python scripts, building a model as a Python package, training, inference, fine-tuning, deployment & more.
Receiving guidance on advanced engineering topics in Generative AI with Large Language Models.

Requirements
No prior experience in Generative AI, Large Language Models, Natural Language Processing, or Python is needed. This course will provide you with everything you need to enter this field with enthusiasm and curiosity. Concepts and components are first explained theoretically and through documentation, followed by hands-on technical implementations. All code snippets are explained step-by-step, with accompanying Notebook playgrounds and complete Python source code, structured to ensure a clear and comprehensive understanding.

Description
Dive into the rapidly evolving world of Generative AI with our comprehensive course, designed for learners eager to build, train, and deploy Large Language Models (LLMs) from scratch.
This course equips you with a wide range of tools, frameworks, and techniques to create your GenAI applications using Large Language Models, including Python, PyTorch, LangChain, LlamaIndex, Hugging Face, FAISS, Chroma, Tavily, Streamlit, Gradio, FastAPI, Docker, and more.
This hands-on course covers essential topics such as implementing Transformers, fine-tuning models, prompt engineering, vector embeddings, and vector stores. You will create cutting-edge AI applications, including AI Assistants, Chatbots, Retrieval-Augmented Generation (RAG) systems, and autonomous agents, and deploy your GenAI applications from scratch using REST APIs and Docker containerization.
By the end of this course, you will have the practical skills and theoretical knowledge needed to engineer and deploy your own LLM-based applications.
Let's look at our table of contents:
Introduction to the Course
Course Objectives
Course Structure
Learning Paths
Part 1: Software Prerequisites for Python Projects
IDE
VS Code
PyCharm
Terminal
Windows: PowerShell, etc.
macOS: iTerm2, etc.
Linux: Bash, etc.
Python Installation
Python installer
Anaconda distribution
Python Environment
venv
conda
Python Package Installation
PyPI, pip
Anaconda, conda
Software Used in This Course
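The environment workflow from Part 1 can be previewed in a short sketch: Python's standard-library venv module does programmatically what the `python -m venv` command covered in the course does in a terminal (the directory name `demo_env` below is arbitrary).

```python
# Minimal sketch: create an isolated Python environment with the stdlib
# venv module, the programmatic equivalent of `python -m venv demo_env`.
import os
import venv

env_dir = "demo_env"
venv.create(env_dir, with_pip=False)  # with_pip=False skips the pip bootstrap (no network needed)

# Every environment gets its own interpreter directory and a pyvenv.cfg marker.
bin_dir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isfile(os.path.join(env_dir, "pyvenv.cfg")))  # True
```

Once such an environment is activated, `pip install <package>` installs packages into it without touching the system Python.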
Part 2: Introduction to Transformers
Introduction to NLP Before and After the Transformer’s Arrival
Mastering Transformers Block by Block
Transformer Training Process
Transformer Inference Process
Part 3: Implementing Transformers from Scratch with PyTorch
Introduction to the Training Process Implementation
Implementing a Transformer as a Python Package
Calling the Training and Inference Processes
Experimenting with Notebooks
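As a taste of Part 3, here is a dependency-free sketch of scaled dot-product attention, the core operation of the Transformer from "Attention Is All You Need". The course implements this with PyTorch tensors inside a full model; plain Python lists are used here only to keep the snippet self-contained.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    outputs = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])
    return outputs

# One query that points at the first key: the output leans toward V[0].
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Stacking this operation with learned Q/K/V projections, multiple heads, and feed-forward layers yields the full Transformer block built in this part.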
Part 4: Generative AI with the Hugging Face Ecosystem
Introduction to Hugging Face
Hugging Face Hubs
Models
Datasets
Spaces
Hugging Face Libraries
Transformers
Datasets
Evaluate, etc.
Practical Guides with Hugging Face
Fine-Tuning a Pre-trained Language Model with Hugging Face
End-to-End Fine-Tuning Example
Sharing Your Model
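Every Hugging Face model consumes integer token IDs rather than raw text. As a toy illustration of that encoding step only: real Hugging Face tokenizers use learned subword vocabularies (BPE, WordPiece, SentencePiece), while the tiny hand-made vocabulary below is invented purely for the demo.

```python
# Toy greedy longest-match tokenizer; the vocabulary is invented for the demo.
VOCAB = {"hand": 0, "s": 1, "on": 2, "-": 3, "gen": 4, "ai": 5, "<unk>": 6}

def encode(text):
    """Map text to a list of token IDs, longest vocabulary match first."""
    ids = []
    i = 0
    text = text.lower()
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in VOCAB:
                ids.append(VOCAB[text[i:j]])
                i = j
                break
        else:  # no piece matched: emit the unknown token
            ids.append(VOCAB["<unk>"])
            i += 1
    return ids

print(encode("Hands-on"))  # [0, 1, 3, 2]  -> "hand", "s", "-", "on"
print(encode("GenAI"))     # [4, 5]        -> "gen", "ai"
```

A real tokenizer additionally handles special tokens, decoding, and padding, all of which the Hugging Face libraries provide out of the box.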
Part 5: Components to Build LLM-Based Web Applications
Backend Components
LLM Orchestration Frameworks: LangChain, LlamaIndex
Open-Source vs. Proprietary LLMs
Vector Embedding
Vector Database
Prompt Engineering
Frontend Components
Python-Based Frontend Frameworks: Streamlit, Gradio
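The retrieval backbone behind these backend components can be sketched in a few lines: an in-memory list stands in for a vector database such as FAISS or Chroma, and hand-made three-dimensional vectors stand in for real embedding-model output.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# (document text, embedding) pairs; the vectors are invented for the demo.
store = [
    ("Transformers use self-attention.", [0.9, 0.1, 0.0]),
    ("FastAPI serves REST endpoints.",   [0.1, 0.8, 0.2]),
    ("Docker packages applications.",    [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.05]))  # nearest document to the query vector
```

A RAG pipeline embeds the user's question the same way, retrieves the top-k documents, and pastes them into the prompt sent to the LLM.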
Part 6: Building LLM-Based Web Applications
Task-Specific AI Assistants
Culinary AI Assistant
Marketing AI Assistant
Customer AI Assistant
SQL-Querying AI Assistant
Travel AI Assistant
Summarization AI Assistant
Interview AI Assistant
Simple AI Chatbot
RAG (Retrieval-Augmented Generation) Based AI Chatbot
Chat with PDF, DOCX, CSV, TXT, Webpage
Agent-Based AI Chatbot
AI Chatbot with Math Problems
AI Chatbot with Search Problems
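The agent pattern behind these chatbots reduces to a routing decision: pick a tool (math, search), run it, and return its result. In the course the routing is done by an LLM through a framework such as LangChain and the search tool is a real service such as Tavily; in this toy sketch a keyword check plays the LLM's part and a canned string stands in for search.

```python
def math_tool(expression):
    """Evaluate simple arithmetic; the character whitelist blocks arbitrary code."""
    if not set(expression) <= set("0123456789+-*/. ()"):
        raise ValueError("unsupported expression")
    return eval(expression)

def search_tool(query):
    return f"[search results for: {query}]"  # placeholder for a real search API

def agent(question):
    # "Tool choice": arithmetic operators route to the math tool,
    # everything else to search.
    if any(op in question for op in "+-*/"):
        return math_tool(question)
    return search_tool(question)

print(agent("2 + 3 * 4"))        # 14
print(agent("weather in Paris"))
```

An LLM-driven agent replaces the keyword check with a model call that reads tool descriptions and decides, possibly over several steps, which tool to invoke next.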
Part 7: Serving LLM-Based Web Applications
Creating the Frontend and Backend as Two Separate Services
Communicating Between Frontend and Backend Using a REST API
Serving the Application with Docker
Install, Run, and Enable Communication Between Frontend and Backend in a Single Docker Container
Use Case
An LLM-Based Song Recommendation App
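The serving architecture of Part 7, separate frontend and backend services talking over a REST API, can be sketched with the standard library alone. The course builds the backend with FastAPI and the frontend with Streamlit or Gradio; here `http.server` and `urllib` stand in for both so the snippet runs with no installs, and a canned reply stands in for the actual LLM call.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackendHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        # A real backend would forward payload["question"] to an LLM here.
        body = json.dumps({"answer": "You asked: " + payload["question"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output clean

# Backend: serve on any free port in a background thread.
server = HTTPServer(("127.0.0.1", 0), BackendHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Frontend": POST a question to the backend and read the JSON answer.
request = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/chat",
    data=json.dumps({"question": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    answer = json.loads(response.read())["answer"]
print(answer)  # You asked: hi
server.shutdown()
```

Packaging each service in a Docker image, or both in one container as the course demonstrates, changes only where the processes run, not this request/response contract.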
Conclusions and Next Steps
What We Have Learned
Next Steps
Thank You
Who this course is for:
Beginner Python developers and AI/ML engineers who are curious about Generative AI, Large Language Models, and building applications using the latest AI technologies.
Individuals from other backgrounds or domains who are interested in switching their careers to focus on Generative AI, particularly Large Language Models.
Non-technical individuals who want to gain not only hands-on technical experience but also a high-level overview of this fast-growing field, making it easier for them to follow along and understand the key concepts.
