Prompt Engineering And Generative Ai - Fundamentals

Posted By: ELK1nG

Published 3/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 994.65 MB | Duration: 1h 32m

Large Language Models, GPT, Gemini, LLM fine-tuning, Few-Shot, Chain-of-Thought, Tree-of-Thoughts, Guardrails, LangChain

What you'll learn

Fundamentals of Prompt Engineering and Generative AI.

Prompt Engineering Techniques: Zero-Shot, Few-Shot, Chain-of-Thought, and Tree-of-Thoughts

Retrieval Augmented Generation fundamentals

RAGAS Evaluation Framework for LLMs, and LangSmith

Fine-tuning a Large Language Model

Guardrails for validating LLM responses

Requirements

Basic knowledge of data science and ML principles will be helpful

Familiarity with Python

A computer with internet access to view the course material

Description

This course delves into the fundamental concepts of Prompt Engineering and Generative AI. It has sections on Fundamentals of Prompt Engineering, Retrieval Augmented Generation, Fine-tuning a large language model (LLM), and Guardrails for LLMs.

Section on Prompt Engineering Fundamentals: The first segment defines prompt engineering, covers best practices, and shows an example of a prompt given to the Gemini-Pro model, with references for further reading. The second segment explains what streaming a response from a large language model is, with examples of giving specific instructions to the Gemini-Pro model, as well as the temperature and token count parameters. The third segment explains the Zero-Shot Prompting technique with examples using the Gemini model. The fourth segment explains the Few-Shot and Chain-of-Thought Prompting techniques with examples using the Gemini model. Subsequent segments in this section discuss setting up the Google Colab notebook to work with the GPT model from OpenAI and provide examples of the Tree-of-Thoughts prompting technique, including the Tree-of-Thoughts implementation from LangChain to solve a 4x4 Sudoku puzzle.

Section on Retrieval Augmented Generation (RAG): The first segment defines the Retrieval Augmented Generation prompting technique, discusses its merits, and applies it to a CSV file using the LangChain framework. The second segment walks through a detailed example involving the Arxiv Loader, the FAISS vector database, and a Conversational Retrieval Chain as part of a RAG pipeline built with LangChain. The third segment explains evaluating responses from a large language model (LLM) with the RAGAS framework.
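To make the RAG idea concrete before the course material: retrieve the passages most relevant to a question, then prepend them to the prompt so the model answers from that context. This is a minimal, self-contained sketch; a real pipeline would use embeddings and a vector store such as FAISS via LangChain, and the word-overlap scoring here is a deliberately naive stand-in, with the documents and function names invented for illustration.

```python
# Toy sketch of Retrieval Augmented Generation: score chunks against the
# question, keep the top-k, and assemble an augmented prompt.

def score(chunk: str, question: str) -> int:
    """Count question words that also appear in the chunk (naive relevance)."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in question.lower().split() if w in chunk_words)

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks with the highest overlap score."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_rag_prompt(chunks: list[str], question: str) -> str:
    """Prepend the retrieved context to the question, forming the final prompt."""
    context = "\n".join(retrieve(chunks, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Guardrails validate and structure the output of a language model.",
    "RAGAS evaluates RAG pipelines on faithfulness and answer relevance.",
]
prompt = build_rag_prompt(docs, "What is FAISS used for in similarity search?")
```

The assembled `prompt` string would then be sent to the LLM (e.g. Gemini or GPT) in place of the bare question.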
The fourth segment shows how LangSmith complements the RAGAS framework for evaluating LLM responses. The fifth segment explains using the Gemini model to create text embeddings and perform document search.

Section on Large Language Model Fine-tuning: The first segment summarizes prompting techniques with examples involving LLMs from the Hugging Face repository and explains the differences between prompting an LLM and fine-tuning one. The second segment defines fine-tuning an LLM, covers the types of LLM fine-tuning, and extracts the data to perform EDA (including data cleaning) prior to fine-tuning. The third segment explains in detail fine-tuning a pre-trained large language model on a task-specific labeled dataset.

Section on Guardrails for Large Language Models: The first segment defines Guardrails and gives examples of Guardrails from OpenAI. The second segment discusses open-source Guardrail implementations, with a specific focus on GuardrailsAI for extracting information from text. The third segment explains using GuardrailsAI to generate structured data and to interface GuardrailsAI with a Chat Model. Each of these segments includes a Google Colab notebook.

Overview

Lecture 1 Introduction

Section 1: Prompt Engineering Fundamentals

Lecture 2 Fundamentals of Prompt Engineering - Set up Colab Notebook with Gemini Model

Lecture 3 Streaming a response from LLM and Best Practices in Prompt Engineering

Lecture 4 Zero-Shot Prompting Technique

Lecture 5 Few-Shot and Chain-of-Thought Prompting Techniques

Lecture 6 Set up the GPT-4 Model from OpenAI : Example of a Prompt

Lecture 7 Tree-of-Thoughts Prompting Technique using GPT-3.5-turbo

Lecture 8 Tree-of-Thoughts agents implemented by LangChain

Section 2: Retrieval Augmented Generation

Lecture 9 RAG Pipeline : Chroma Vector Store, Conversational Retrieval Chain with CSV file

Lecture 10 RAG Pipeline : FAISS Vector DB, Arxiv Loader and Conversational Retrieval Chain

Lecture 11 Retrieval Augmented Generation Assessment : RAGAS Framework

Lecture 12 Retrieval Augmented Generation Assessment with LangSmith

Lecture 13 Document Search with the Gemini Model

Section 3: Large Language Model Fine-tuning

Lecture 14 Prompting vs Fine-tuning a Large Language Model

Lecture 15 Fine-tuning a Large Language Model - Setting up the Colab Notebook and EDA

Lecture 16 Fine-tuning a Large Language Model - Model Training and Inference

Section 4: Guardrails for Large Language Models

Lecture 17 Guardrails for Large Language Models : Examples from OpenAI

Lecture 18 GuardrailsAI : Extracting Information from Text

Lecture 19 GuardrailsAI : Generating Structured Data and Interfacing with a Chat Model

This course is suited for anyone interested in Natural Language Processing, Large Language Models, Prompt Engineering, Generative AI, and Data Science.