Creating AI Assistants for IT Infrastructure
Published 8/2025
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 721.12 MB | Duration: 1h 33m
Build LLM-powered AI assistants with LangChain, RAG, and AI agents to automate infrastructure and log analysis
What you'll learn
Integrate Large Language Models (LLMs) into IT automation workflows using Python and the LangChain framework
Analyze network logs and retrieve knowledge from documentation and corporate databases with Retrieval-Augmented Generation (RAG)
Build and deploy AI agents capable of managing network equipment and assisting with incident diagnostics
Work with both cloud-based and local LLMs (e.g., OpenAI, Ollama) to create AI assistants for network administration tasks
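The RAG point above boils down to two steps: retrieve the snippets most relevant to a question from a knowledge base, then prepend them to the prompt sent to the LLM. A minimal, dependency-free sketch of that idea (the document store and word-overlap scoring are simplified illustrations, not the course's LangChain implementation, which would use embeddings and a vector store):

```python
# Minimal RAG-style retrieval sketch: score each document by word
# overlap with the query, then build an augmented prompt.
# (Real setups use embeddings + a vector store, e.g. via LangChain.)

DOCS = {
    "vlan-guide": "To create VLAN 30, enter config mode and run: vlan 30; name users",
    "ssh-howto":  "Enable SSH with: ip domain-name lab.local; crypto key generate rsa",
    "cmdb-notes": "The CMDB stores device inventory: hostname, IP address, role",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k document texts sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(
        DOCS.values(),
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM answers from internal docs."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I create VLAN 30?"))
```

The resulting prompt can be passed to any chat model, cloud-based or local; only the retrieval layer changes between toy and production setups.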
Requirements
Basic understanding of Python (writing and running simple scripts)
Familiarity with networking concepts (IP address, VLAN, switches)
Basic knowledge of IT administration (API usage, logs, CMDB role)
Experience using Linux (SSH connection, Bash commands, running scripts)
Basic Docker Compose commands (e.g., docker compose up -d, docker ps)
Description
In this course, you will learn how to apply modern Large Language Models (LLMs) to automation through practical, hands-on cases from network infrastructure administration. Step by step, we will integrate LLMs into traditional automation workflows using the LangChain framework, combining AI capabilities with proven automation practices. Along the way, you will gain skills in connecting LLMs to logging systems, retrieving data from knowledge bases, and orchestrating multiple automation tools through AI agents.

By the end of the course, you will have created a ready-to-use AI assistant that:

Communicates like ChatGPT, but with access to your internal documentation
Assists in configuring network equipment for routine tasks
Analyzes logs and accelerates incident diagnostics
Integrates with CMDB and other infrastructure tools

This approach transforms the traditional human–machine interaction into a smooth human–human chat, where your infrastructure responds like a live assistant.

The course is designed for network engineers, DevNetOps specialists, and IT administrators who want to bring AI into their workflows. With ~80% practice and ~20% theory, you will leave with working code ready to adapt to your own environment, and a deep understanding of how to apply LLMs in real-world IT automation.

As a result, you will be ready to implement AI automation in production!
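The assistant described above rests on an agent pattern: the LLM decides which tool to call (CMDB lookup, log search, device configuration), the framework executes it, and the result is fed back into the conversation. A stripped-down, dependency-free sketch of the dispatch step (the tool names, inventory data, and hard-coded "model decision" are illustrative assumptions; the course builds this with LangChain agents):

```python
# Toy agent tool dispatch: map a model's tool choice to a Python
# function and return the result. LangChain agents automate this
# loop; here the model's decision is hard-coded for illustration.

def get_device_ip(hostname: str) -> str:
    """Stand-in for a CMDB lookup (illustrative data)."""
    inventory = {"sw-core-01": "10.0.0.2", "rtr-edge-01": "10.0.1.1"}
    return inventory.get(hostname, "unknown")

def search_logs(pattern: str) -> str:
    """Stand-in for a log-system query (illustrative data)."""
    logs = ["%LINK-3-UPDOWN: Interface Gi0/1, changed state to down"]
    return "\n".join(line for line in logs if pattern in line) or "no matches"

TOOLS = {"get_device_ip": get_device_ip, "search_logs": search_logs}

def run_tool_call(call: dict) -> str:
    """Execute the tool the model asked for, given {'tool': ..., 'arg': ...}."""
    return TOOLS[call["tool"]](call["arg"])

# A real agent would parse the model's reply into this structure:
print(run_tool_call({"tool": "get_device_ip", "arg": "sw-core-01"}))  # prints 10.0.0.2
```

In practice, the tool result is appended to the chat history and the model is queried again, repeating until it produces a final answer for the user.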
Overview
Section 1: Introduction to LLM
Lecture 1 Introduction
Lecture 2 Setting up the working environment
Lecture 3 Getting started with OpenAI
Lecture 4 LangChain and Streamlit
Lecture 5 Local LLM
Section 2: Using LLM for Automation
Lecture 6 LangChain in Action
Lecture 7 Using RAG
Lecture 8 LLM Agents: Part 1
Lecture 9 LLM Agents: Part 2
Lecture 10 LLM Memory
Lecture 11 Troubleshooting and Tuning
Section 3: Conclusion
Lecture 12 Summing up
Who this course is for
IT professionals seeking practical ways to apply LLMs in network administration
DevNetOps specialists looking to integrate LLMs into infrastructure workflows
Network engineers who want to automate routine tasks with AI tools
Automation engineers interested in building AI-powered assistants and agents
Anyone curious about combining AI + DevOps + Networking for next-generation IT automation