    Train Opensource Large Language Models From Zero To Hero

    Posted By: ELK1nG
    Published 9/2024
    MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
    Language: English | Size: 1.81 GB | Duration: 2h 36m

    How to train Open Source LLMs with LoRA, QLoRA, DPO and ORPO.

    What you'll learn

    What a language model is and what the training pipeline looks like

    Fine-tuning LLMs with supervised fine-tuning (LoRA, QLoRA, DoRA)

    Align LLMs to human preferences using DPO, KTO and ORPO

    Accelerate LLM training with multi-GPU training and the Unsloth library

    Requirements

    No prior knowledge is required

    Description

    Unlock the full potential of Large Language Models (LLMs) with this comprehensive course designed for developers and data scientists eager to master advanced training and optimization techniques. I'll cover everything from A to Z, helping developers understand how LLMs work and data scientists learn both simple and advanced training techniques. Starting with the fundamentals of language models and the transformative power of the Transformer architecture, you'll set up your development environment and train your first model from scratch.

    Dive deep into cutting-edge fine-tuning methods like LoRA, QLoRA, and DoRA to enhance model performance efficiently. Learn how to improve LLM robustness against noisy data using techniques like Flash Attention and NEFTune, and gain practical experience through hands-on coding sessions.

    The course also explores aligning LLMs to human preferences using advanced methods such as Direct Preference Optimization (DPO), KTO, and ORPO. You'll implement these techniques to ensure your models not only perform well but also align with user expectations and ethical standards.

    Finally, accelerate your LLM training with multi-GPU setups, model parallelism, Fully Sharded Data Parallel (FSDP) training, and the Unsloth framework to boost speed and reduce VRAM usage. By the end of this course, you'll have a solid understanding and the practical experience to train, fine-tune, and optimize robust open-source LLMs.
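    To make that pipeline concrete, here is a minimal sketch of the objective every causal language model is trained on: next-token prediction. It assumes the Hugging Face transformers library, and the small public gpt2 checkpoint is used purely as an example; it is not code from the course.

    [code]
    # Minimal sketch: the next-token prediction (causal LM) training objective.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")          # example checkpoint
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "Large language models are trained to predict the next token."
    batch = tokenizer(text, return_tensors="pt")

    # Passing labels=input_ids makes the model compute the shifted cross-entropy
    # loss internally: predict token t+1 from all tokens up to t.
    outputs = model(**batch, labels=batch["input_ids"])
    print(f"next-token loss: {outputs.loss.item():.3f}")

    # A full training step is then just: outputs.loss.backward(); optimizer.step()
    [/code]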

    Overview

    Section 1: What is a Language Model and what the training pipeline looks like

    Lecture 1 Introduction to Training Language Models

    Lecture 2 The Transformer Model: Unlocking the Power of Deep Learning

    Lecture 3 Transformer Architectures for Large Language Models

    Section 2: Set up your environment and train your first Language Model

    Lecture 4 Training a Language Model from scratch

    Lecture 5 Setting up your development environment
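
    As a rough sketch of what these two lectures build toward, the snippet below trains a deliberately tiny GPT-2-style model from scratch with the Hugging Face Trainer. The wikitext-2 dataset, the tiny config sizes, and all hyperparameters are placeholder choices for illustration, not the course's exact setup.

    [code]
    # Sketch: train a small causal language model from randomly initialized weights.
    from datasets import load_dataset
    from transformers import (AutoTokenizer, DataCollatorForLanguageModeling, GPT2Config,
                              GPT2LMHeadModel, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")    # reuse an existing tokenizer
    tokenizer.pad_token = tokenizer.eos_token

    # A tiny configuration so it can be trained quickly on a single GPU.
    config = GPT2Config(vocab_size=len(tokenizer), n_layer=4, n_head=4, n_embd=256, n_positions=256)
    model = GPT2LMHeadModel(config)                      # random weights: trained from scratch

    raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
    dataset = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256),
                      batched=True, remove_columns=["text"])
    dataset = dataset.filter(lambda ex: len(ex["input_ids"]) > 0)   # drop empty lines

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="tiny-lm", per_device_train_batch_size=8,
                               num_train_epochs=1, logging_steps=50),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # builds causal-LM labels
    )
    trainer.train()
    [/code]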

    Section 3: Fine-tuning LLMs with supervised fine-tuning (LoRA, QLoRA, DoRA)

    Lecture 6 Supervised Fine-Tuning of LLMs with LoRA and intro to quantization

    Lecture 7 Train LLM with full supervised tuning
    
    Lecture 8 Train LLM with frozen params [code]
    
    Lecture 9 Training LLM with LoRA [code]
    
    Lecture 10 Introducing Quantized LoRA (QLoRA)
    
    Lecture 11 Training LLM with QLoRA [code]
    
    Lecture 12 Introduction to DoRA fine tuning
    
    Lecture 13 DoRA training to improve stability [code]
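
    For orientation, here is a hedged sketch of how the techniques in this section are typically wired together with the peft and bitsandbytes libraries: a 4-bit quantized base model (QLoRA) plus low-rank adapters (LoRA), with DoRA available as a flag in recent peft releases. The model name and every hyperparameter below are illustrative only, not taken from the course.

    [code]
    # Sketch: QLoRA = 4-bit NF4 base model + LoRA adapters; use_dora=True switches on DoRA.
    import torch
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    model_name = "meta-llama/Llama-2-7b-hf"              # example checkpoint, any causal LM works

    bnb_config = BitsAndBytesConfig(                     # 4-bit quantization (the "Q" in QLoRA)
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config,
                                                 device_map="auto")
    model = prepare_model_for_kbit_training(model)       # make the quantized model trainable

    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
        task_type="CAUSAL_LM",
        # use_dora=True,   # weight-decomposed LoRA (DoRA), available in recent peft versions
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()                   # only the small adapter matrices train
    [/code]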
    
    Section 4: Improve LLM performance and make training robust to noisy data
    
    Lecture 14 Enhancing Speed with Flash Attention
    
    Lecture 15 NEFTune - Making LLM training robust
    
    Lecture 16 Enhancing LLM robustness and training speed [code]
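
    As a rough illustration of these lectures: Flash Attention is usually switched on when the model is loaded, and NEFTune (noisy embedding fine-tuning) is a single argument on the Hugging Face trainer. The checkpoint name and values below are examples only; flash_attention_2 additionally requires the flash-attn package and a recent GPU.

    [code]
    # Sketch: Flash Attention 2 at load time + NEFTune noise during fine-tuning only.
    import torch
    from transformers import AutoModelForCausalLM, TrainingArguments

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",               # example checkpoint
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",  # needs pip install flash-attn + Ampere or newer GPU
    )

    args = TrainingArguments(
        output_dir="sft-neftune",
        per_device_train_batch_size=4,
        bf16=True,
        neftune_noise_alpha=5.0,   # NEFTune: uniform noise added to token embeddings while training
    )
    # Pass `args` to a Trainer/SFTTrainer as usual; the noise is disabled automatically at eval time.
    [/code]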
    
    Section 5: Align LLMs to human preferences using DPO, KTO and ORPO
    
    Lecture 17 Introduction to Direct Preference Optimization (DPO)
    
    Lecture 18 DPO training: align LLM to human preference [code]
    
    Lecture 19 Easier Data Curation for Training LLMs with KTO
    
    Lecture 20 KTO training for better data curation [code]
    
    Lecture 21 All in one training with ORPO
    
    Lecture 22 All in one training with ORPO [code]
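
    To ground this section, below is a hedged sketch of preference alignment with trl's DPOTrainer. The key idea is the dataset format (a prompt plus a chosen and a rejected completion); exact argument names vary somewhat across trl versions, and KTOTrainer / ORPOTrainer from the same library follow a very similar pattern. The model and data are toy examples, not course material.

    [code]
    # Sketch: Direct Preference Optimization with trl (argument names differ slightly by version).
    from datasets import Dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import DPOConfig, DPOTrainer

    model_name = "gpt2"                                  # tiny example model
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token

    # Preference data: for every prompt, a preferred and a rejected completion.
    pairs = Dataset.from_dict({
        "prompt":   ["Explain LoRA in one sentence."],
        "chosen":   ["LoRA trains small low-rank adapter matrices instead of all model weights."],
        "rejected": ["LoRA is a type of GPU."],
    })

    args = DPOConfig(output_dir="dpo-demo", beta=0.1, per_device_train_batch_size=1, max_steps=10)
    trainer = DPOTrainer(
        model=model,
        ref_model=None,         # with None, trl builds a frozen reference copy of the model
        args=args,
        train_dataset=pairs,
        tokenizer=tokenizer,    # renamed to processing_class in newer trl releases
    )
    trainer.train()
    [/code]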
    
    Section 6: Accelerate LLM Training
    
    Lecture 23 Multi-GPU Training - Accelerate Deep Learning
    
    Lecture 24 Multi GPU model parallel [code]
    
    Lecture 25 FSDP GPU training [code]
    
    Lecture 26 Unsloth - A framework for faster fine tuning
    
    Lecture 27 Unsloth training: improve speed and reduce VRAM [code]
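
    Finally, as a sketch of where this section is headed: Hugging Face accelerate wraps an ordinary PyTorch loop so the same script runs on one GPU or many, and FSDP sharding is chosen at launch time (accelerate config / accelerate launch) rather than in the code. The toy model and data below are illustrative only; the Unsloth lectures use that library's own model-loading API instead.

    [code]
    # Sketch: a device-agnostic training loop; run with `accelerate launch train.py`.
    # Multi-GPU, DDP, or FSDP is selected via `accelerate config`, not by editing this code.
    import torch
    from accelerate import Accelerator
    from torch.utils.data import DataLoader
    from transformers import AutoModelForCausalLM, AutoTokenizer

    accelerator = Accelerator()
    tokenizer = AutoTokenizer.from_pretrained("gpt2")    # example checkpoint
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Toy dataset: labels = input_ids gives the usual causal-LM loss.
    enc = tokenizer(["hello world", "training on many GPUs"] * 8, padding=True, return_tensors="pt")
    data = [{"input_ids": i, "attention_mask": m, "labels": i}
            for i, m in zip(enc["input_ids"], enc["attention_mask"])]
    loader = DataLoader(data, batch_size=4)

    model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
    model.train()
    for batch in loader:
        loss = model(**batch).loss
        accelerator.backward(loss)       # handles mixed precision / sharding behind the scenes
        optimizer.step()
        optimizer.zero_grad()
    [/code]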
    
    Who this course is for: Developers, Data scientists, AI enthusiasts