Getting Started with Docker and AI

Posted By: IrGens

.MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 1h 11m | 174 MB
Instructor: Nigel Poulton

Learn how to run large language models locally using Docker Model Runner. This course will teach you how to install, configure, and integrate Docker Model Runner with Compose apps and third-party AI tools.

What you'll learn

Running AI models locally can be complex without the right tools and integrations. In this course, Getting Started with Docker and AI, you’ll gain the ability to run large language models (LLMs) locally using Docker Model Runner as part of your Docker workflow.
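As a rough sketch of what that local workflow looks like (the `ai/smollm2` model name is illustrative; check Docker Hub's `ai/` namespace and your Docker Desktop version, since Model Runner subcommands may vary):

```shell
# Pull a model from Docker Hub's ai/ namespace (model name is illustrative)
docker model pull ai/smollm2

# List the models stored locally
docker model list

# Send a one-off prompt to the model from the command line
docker model run ai/smollm2 "Explain containers in one sentence."
```

These commands assume Docker Model Runner is enabled in Docker Desktop; the course walks through that setup.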

First, you’ll explore how Docker Model Runner works, its architecture, and how it integrates with OpenAI-compatible apps. Next, you’ll discover how to install Docker Model Runner, manage models, and interact with them via Docker Hub and the command line. Finally, you’ll learn how to integrate Docker Model Runner into Compose apps and third-party tools to build local AI-powered applications, including private chatbots similar to ChatGPT and Claude.
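Because Model Runner exposes an OpenAI-compatible API, existing OpenAI clients can be pointed at it. A minimal sketch with `curl` (the host-side port `12434` and the `/engines/v1` path are assumptions; confirm the exact endpoint in your Docker Desktop settings):

```shell
# Call the OpenAI-compatible chat completions endpoint exposed by
# Docker Model Runner. The address below is an assumption -- check your
# Docker Desktop configuration for the actual host/port.
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/smollm2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Any tool that accepts a custom OpenAI base URL can be wired up the same way, which is how the private-chatbot integrations in the course work.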

When you’re finished with this course, you’ll have the skills and knowledge needed to confidently run and integrate AI models locally with Docker.