Designing ML Solutions on Azure & Preparing for the DP-100 Exam

Posted By: ELK1nG

Published 6/2025
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 15.36 GB | Duration: 22h 42m

Design, Train & Deploy ML Models on Azure using AutoML, Pipelines, MLOps, and LLMs with Prompt Engineering & RAG

What you'll learn

Learn how to architect ML workflows using Azure services, from data ingestion to model deployment.

Create, configure, and manage workspaces, datastores, compute targets, and environments.

Use Azure Notebooks and Synapse Spark to clean, transform, and explore datasets.

Train models automatically for tabular, vision, and NLP tasks while applying responsible AI guidelines.

Perform hyperparameter tuning using Bayesian optimization, random search, and early stopping.

Record model training runs, metrics, parameters, and artifacts for robust experimentation tracking.

Design modular ML pipelines that can be automated, reused, and scaled in production.

Serve real-time and batch predictions using Azure endpoints with appropriate compute configurations.

Apply fairness, explainability, and model management best practices throughout the ML lifecycle.

Fine-tune, prompt-engineer, and deploy LLMs using Azure OpenAI, Prompt Flow, and Retrieval Augmented Generation (RAG).

Requirements

Familiarity with supervised and unsupervised learning, algorithms (e.g., regression, classification), and model evaluation metrics.

Ability to write and understand basic Python code, especially using data science libraries like pandas, scikit-learn, numpy, and matplotlib.

Experience with data preprocessing, feature engineering, model training, and validation.

General understanding of cloud concepts and services, particularly within the Azure ecosystem.

Basic experience using notebooks for exploratory data analysis and model training.

Basic knowledge of Git for managing code and experiments is helpful for working in collaborative environments.

Understanding of concepts like mean, variance, correlation, and statistical significance will help in model evaluation and feature analysis.

Familiarity with metrics like accuracy, precision, recall, F1 score, and ROC-AUC, especially for classification and regression problems.

Knowledge of REST APIs can be helpful when deploying and interacting with machine learning models via endpoints.

Some tasks may require basic use of the terminal (e.g., starting compute instances, navigating directories).

Machine learning is iterative—students should be ready to test, fail, and improve their models continuously.

Critical thinking skills are important for choosing algorithms, designing experiments, and interpreting results.
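As a quick refresher on the classification metrics mentioned above, they reduce to a few lines of arithmetic over confusion-matrix counts; a minimal pure-Python sketch with made-up counts:

```python
# Toy confusion-matrix counts (hypothetical values, for illustration only).
tp, fp, fn, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
precision = tp / (tp + fp)                    # of predicted positives, how many were right
recall    = tp / (tp + fn)                    # of actual positives, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean
```

With these counts, accuracy is 0.85 and precision is 0.80; the point is only that each metric weighs false positives and false negatives differently.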

Description

Build and Deploy Intelligent Machine Learning Solutions Using Microsoft Azure

This course is your complete guide to mastering data science workflows in the cloud. Designed for professionals who want to go beyond experimentation and take their machine learning models into production, it covers every stage of the ML lifecycle using Azure’s powerful suite of tools.

Whether you're looking to scale your data science capabilities, prepare for the DP-100 certification, or enhance your organization’s AI capabilities, this course delivers hands-on experience with the platforms and practices used in real-world enterprise environments.

You will gain hands-on expertise in:

Designing effective ML architectures on Azure
- Choosing the right dataset formats and compute targets
- Structuring experiments for scalability and performance
- Integrating Git and CI/CD pipelines for streamlined collaboration

Preparing and managing data at scale
- Wrangling and transforming data using notebooks and Synapse Spark
- Accessing and versioning datasets via Azure ML datastores
- Building and sharing environments across workspaces

Training models using both automated and custom approaches
- Leveraging AutoML for classification, regression, vision, and NLP
- Developing custom training scripts using Python and MLflow
- Tuning hyperparameters for optimal model performance

Building and managing reproducible ML pipelines
- Creating modular training components
- Passing and transforming data between pipeline steps
- Scheduling, monitoring, and debugging workflows

Deploying models for real-time and batch inference
- Configuring online endpoints for scalable predictions
- Setting up batch endpoints for large-scale processing jobs
- Implementing secure and compliant deployment workflows

Optimizing advanced AI models and LLMs
- Selecting and fine-tuning large language models
- Designing prompt engineering strategies for accuracy and context
- Implementing Retrieval Augmented Generation (RAG) systems

Ensuring responsible AI and operational excellence
- Applying fairness, transparency, and explainability principles
- Using MLflow for experiment tracking and model governance
- Automating retraining and monitoring in production

If you’re ready to move beyond theory and start building machine learning systems that solve real business problems, this course is designed for you. It’s perfect for learners who want structured guidance, practical tools, and hands-on labs that mirror what professionals do in industry every day.

Overview

Section 1: Module 1 - Chapter 1 - What is Azure Machine Learning

Lecture 1 Introduction to Azure ML as a cloud-based platform for scalability

Lecture 2 The benefits of scalability, automation, managed infrastructure, and MLOps readiness

Lecture 3 Use cases across industries

Section 2: Module 1 - Chapter 2 - Azure ML Architecture Deep Dive

Lecture 4 Core architecture: workspace, compute, storage, environments, models

Lecture 5 How do these pieces connect inside Azure?

Lecture 6 Integration with other services like Key Vault and Application Insights

Section 3: Module 1 - Chapter 3 - Navigating the Azure ML Studio Interface

Lecture 7 Guided walkthrough of Azure ML Studio

Lecture 8 Explore sections: Experiments, Pipelines, Models, Datasets, Compute, and Endpoints

Lecture 9 Navigating the Azure ML Studio Interface - DEMO

Section 4: Module 1 - Chapter 4 - Workspace Resources and Asset Types

Lecture 10 Understand what’s inside a workspace: experiments, compute targets, environments

Lecture 11 How each resource is used in the ML lifecycle

Lecture 12 Compare Azure ML Studio, Azure Portal, CLI, and SDK

Section 5: Module 1 - Chapter 5 - Working with Visual Studio Code & Azure ML

Lecture 13 Working with Visual Studio Code & Azure ML

Lecture 14 Install Azure ML extension, connect to workspace and open a notebook

Section 6: Module 1 - Chapter 6 - Understanding Workspace Editions

Lecture 15 Difference between Basic and Enterprise editions

Lecture 16 What’s included in each (e.g., Designer, AutoML, Responsible AI tools)

Lecture 17 Which features are relevant for DP-100

Section 7: Module 1 - Chapter 7 - Creating an Azure ML Workspace

Lecture 18 Azure ML Studio - Workspace Creation

Lecture 19 Verify provisioned resources: storage, key vault, app insights

Lecture 20 Azure ML Studio - UI Navigation

Section 8: Module 1 - Chapter 8 - Creating Compute Resources in Azure ML

Lecture 21 Compute Instance and a Compute Cluster Creation

Lecture 22 Explain size options, autoscaling, and cost considerations

Lecture 23 Jupyter notebook in Compute Instance, access workspace with Python SDK, list
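As a companion to the cluster-creation lecture, here is roughly what a compute cluster declaration looks like in Azure ML CLI v2 YAML; the name, VM size, and scaling limits below are placeholder values, not the course's actual lab settings:

```yaml
# cluster.yml - created with: az ml compute create --file cluster.yml
$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
name: cpu-cluster               # placeholder cluster name
type: amlcompute
size: Standard_DS3_v2           # VM size; choose per workload and budget
min_instances: 0                # scale to zero when idle to control cost
max_instances: 4                # autoscale ceiling
idle_time_before_scale_down: 120  # seconds before idle nodes are released
```

Setting min_instances to 0 is the usual cost lever: the cluster releases all nodes when no jobs are queued.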

Section 9: Module 1 - Chapter 9 - Exploring Azure ML with the CLI

Lecture 24 Exploring Azure ML with the CLI

Lecture 25 Discuss how the CLI can be useful in scripting and CI/CD

Section 10: DP-100 Module 1 Quiz – Azure Machine Learning Fundamentals

Section 11: Module 2 - Chapter 1 - Introduction to Azure ML Designer

Lecture 26 What is Azure ML Designer?

Lecture 27 Key benefits: no-code pipeline creation, drag-and-drop interface, easy experiment

Lecture 28 When and why to use Designer over code-based solutions

Lecture 29 Use cases and suitability for different skill levels

Section 12: Module 2 - Chapter 2 - Exploring the Designer Interface

Lecture 30 Overview of key sections: canvas, module toolbox, input & output panels

Lecture 31 Exploring the Designer Interface

Section 13: Module 2 - Chapter 3 - No-Code vs Code-Based Machine Learning

Lecture 32 Comparison: Designer vs. Python SDK

Lecture 33 Pros and cons of each for different scenarios

Lecture 34 When is no-code ML best suited? (business analysts, POCs, quick model testing)

Section 14: Module 2 - Chapter 4 - Building a Training Pipeline with Designer

Lecture 35 Concept of a training pipeline – data input, preprocessing, training, evaluation

Lecture 36 Importing a Sample Dataset & Building ML Pipeline

Lecture 37 Running the Pipeline and reviewing experiment results

Section 15: Module 2 - Chapter 5 - Interpreting Experiment Results in Designer

Lecture 38 Understand module run statuses, output visualizations, and evaluation metrics

Lecture 39 Viewing metrics (MAE, RMSE, or Accuracy) from the “Evaluate Model” module
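The MAE and RMSE values shown by the “Evaluate Model” module are simple aggregates of prediction errors; a pure-Python illustration with toy numbers:

```python
import math

# Toy regression outputs (invented values, for illustration only).
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]

errors = [p - t for p, t in zip(y_pred, y_true)]
mae  = sum(abs(e) for e in errors) / len(errors)        # mean absolute error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root mean squared error
```

RMSE penalizes large errors more heavily than MAE, which is why the two can rank models differently.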

Section 16: Module 2 - Chapter 6 - Creating an Inference Pipeline from Training Pipeline

Lecture 40 What is an inference pipeline?

Lecture 41 Difference between training and inference flows

Lecture 42 Use the “Create Inference Pipeline” button in Designer to convert a completed

Lecture 43 Add and adjust Web Service Input/Output modules

Section 17: Module 2 - Chapter 7 - Real-Time vs. Batch Inference in Designer

Lecture 44 Concepts: real-time inference vs. batch inference

Lecture 45 Which is better when? A business use case comparison

Section 18: Module 2 - Chapter 8 - Deploying a Model with Designer to ACI or AKS

Lecture 46 Overview - Deploying a Model with Designer to ACI or AKS

Section 19: DP-100 Module 2 Quiz - Azure ML Designer

Section 20: Module 3 - Section 1. What Are Experiments and Runs in Azure ML?

Lecture 47 How Azure ML tracks experiment metadata, source code, outputs, and metrics

Lecture 48 Introduce the concept of a “run” (single execution of a training script)

Lecture 49 Importance of tracking for versioning, auditing, and reproducibility

Section 21: Module 3 - Section 2 - Anatomy of a Training Run in Azure ML

Lecture 50 What happens when you submit a script to Azure ML Part-1

Lecture 51 What happens when you submit a script to Azure ML Part-2

Lecture 52 SDK Overview

Lecture 53 SDK methods

Lecture 54 SDK v1 with minimal script

Lecture 55 SDK v2 with minimal script

Lecture 56 What is a registered model and why it matters
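The SDK v1/v2 lectures above also have a CLI v2 counterpart: the same minimal "submit a script" job can be declared in YAML. A rough sketch, where the code folder, environment, and compute names are placeholders rather than the course's actual lab values:

```yaml
# job.yml - submitted with: az ml job create --file job.yml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py
code: ./src                                   # folder containing train.py
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest  # a curated env
compute: azureml:cpu-cluster                  # placeholder cluster name
experiment_name: train-experiment             # groups related runs in Studio
```

Submitting this YAML triggers the same flow the lectures describe: the code folder is snapshotted, the environment is materialized on the compute target, and the run's metrics and outputs are tracked under the experiment.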

Section 22: Module 3 - Section 3 - Logging Metrics and Monitoring Runs

Lecture 57 Why and how to log metrics (accuracy, loss, etc.) from your script using

Lecture 58 View metrics in Azure ML Studio’s Run Details panel

Lecture 59 How to troubleshoot failed runs using stdout, stderr, and .txt logs

Lecture 60 Lab Continuation

Section 23: Module 3 - Section 4. Using Compute Targets: Local vs. Remote

Lecture 61 When to use Compute Instance & Compute Cluster

Lecture 62 How to specify compute targets in SDK

Lecture 63 Updating the training script - Submitting to a cluster

Section 24: Module 3 - Section 5. Experimentation Best Practices

Lecture 64 Use descriptive experiment names and tags

Lecture 65 Keep training scripts modular and environment-specific

Lecture 66 Track versions of code and data

Lecture 67 Clean up old resources and runs regularly

Section 25: DP-100 Module 3 Quiz - Azure ML Experimentation, Metrics, and Compute

Section 26: Module 4 - Section 1 - Introduction to Data Management in Azure ML

Lecture 68 Importance of data in ML workflows

Lecture 69 Azure ML’s approach to data: central, reusable, versioned

Lecture 70 Overview of Datastores and Datasets

Lecture 71 Lab: Working with Data Assets through the UI

Section 27: Module 4 - Section 2 - Understanding Datastores in Azure ML

Lecture 72 What is a Datastore? A secure abstraction over storage (Blob, ADLS, local, etc.)

Lecture 73 Default datastore vs. custom datastore

Lecture 74 Why datastores matter - consistent paths across compute environments

Lecture 75 Authentication methods: SAS, Account Key, Managed Identity

Section 28: Module 4 - Section 3. Registering and Using Datastores

Lecture 76 Working with Datastores

Lecture 77 Working with Datastores - Live
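For reference, registering a Blob datastore can also be done declaratively with the CLI v2. A sketch of the YAML, where every name and credential is a placeholder:

```yaml
# blob_datastore.yml - registered with: az ml datastore create --file blob_datastore.yml
$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
name: training_data_store          # placeholder datastore name
type: azure_blob
account_name: mystorageaccount     # placeholder storage account
container_name: datasets           # placeholder container
credentials:
  account_key: <storage-account-key>   # alternatives: SAS token or identity-based access
```

Once registered, training code refers to the datastore by name, so the same script works across compute targets without hard-coded storage paths.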

Section 29: Module 4 - Section 4 - Creating and Registering Datasets in Azure ML

Lecture 78 Working with Datasets - Theory

Lecture 79 Lab B: Working with Datasets and Data Assets - Part 1

Lecture 80 Lab B: Working with Datasets and Data Assets - Part 2

Section 30: Module 4 - Section 5 - Mounting vs. Downloading Data

Lecture 81 How datasets are consumed by compute

Lecture 82 When to use each mode based on workload and dataset size

Lecture 83 Lab: Mounting vs. Downloading Data

Section 31: Module 4 - Section 6 - Best Practices for Managing Data in Azure ML

Lecture 84 Use consistent naming and versioning

Lecture 85 Store raw, processed, and training-ready data separately

Lecture 86 Keep training code and data loosely coupled (via inputs)

Lecture 87 Clean up unused datasets and large blobs

Section 32: Module 4 - Managing Data and Datastores in Azure ML - Quiz

Section 33: Module 5: Section 1. Introduction to Compute in Azure ML

Lecture 88 What is a Compute Target in Azure ML

Lecture 89 Key Types-Compute Instance and Compute Cluster

Lecture 90 Use case examples and cost considerations

Lecture 91 Inference compute (AKS/ACI)

Section 34: Module 5: Section 2. Compute Instances vs. Compute Clusters

Lecture 92 Feature comparison: Instances vs. Clusters

Lecture 93 DEMO - Working with Compute

Section 35: Module 5: Section 3. Attached Compute (Advanced Concepts)

Lecture 94 When to use: hybrid pipelines, data proximity, existing infrastructure

Section 36: Module 5: Section 4. Environments in Azure ML: What and Why

Lecture 95 Defining an Environment in Azure ML

Lecture 96 Curated environments vs. custom environments

Lecture 97 Importance of reproducibility in training

Section 37: Module 5: Section 5 - Creating Custom Environments

Lecture 98 Creating Custom Environment - Theory

Lecture 99 LAB05A-Working with Environments -PIP
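A custom environment is typically backed by a conda specification along these lines; the environment name and package pins below are illustrative, not the lab's exact list:

```yaml
# environment.yml - a conda spec referenced when building a custom Azure ML environment
name: sklearn-train-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - scikit-learn==1.3.0   # pin versions so runs are reproducible
      - mlflow                # for metric logging from training scripts
      - pandas
```

Pinning versions is the point of the exercise: the same spec rebuilds the same image on any compute target, which is what makes runs reproducible.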

Section 38: Module 5 - Section 6 - Submitting Jobs to Compute Clusters

Lecture 100 LAB05B-Working with Compute Targets

Lecture 101 What to do if a run fails at the install step (troubleshooting)

Section 39: Module 5 - Azure ML Compute & Environments - Quiz

Section 40: Module 6 - Section 1 - What Is an ML Pipeline in Azure ML?

Lecture 102 Definition - What is a Pipeline?

Lecture 103 Why Pipelines Matter

Lecture 104 Difference between one-off experiments and structured pipelines

Lecture 105 Examples: data cleaning → training → evaluation → registration

Section 41: Module 6 - Section 2. Components of a Pipeline Step

Lecture 106 What does each step need in a Pipeline?

Lecture 107 Data flows between steps via PipelineData or output folders

Lecture 108 Managing inter-step dependencies

Section 42: Module 6 - Section 3. Creating a Simple Two-Step Pipeline

Lecture 109 LAB06A-Creating a two-step pipeline - PART 1

Lecture 110 LAB06A-Creating a two-step pipeline - PART 2

Lecture 111 LAB06A-Creating a two-step pipeline - PART 3

Lecture 112 LAB06A-Creating a two-step pipeline - PART 4

Lecture 113 LAB06A-Creating a two-step pipeline - PART 5

Section 43: Module 6 - Section 4 - Passing Data Between Pipeline Steps

Lecture 114 Using PipelineData (v1) or named Inputs/Outputs (v2)

Lecture 115 Ensuring data outputs from one step are available to the next

Lecture 116 Actual Data Handling between each Pipeline Step’s Execution

Section 44: Module 6 - Section 5. Publishing Pipelines for Reuse

Lecture 117 Publishing Pipelines for Reuse-Theory

Lecture 118 LAB06B-Publishing Pipelines for Reuse - Part 1

Lecture 119 LAB06B-Publishing Pipelines for Reuse - Part 2

Section 45: Module 6 - Section 6. Pipeline Scheduling and Automation Options

Lecture 120 Scheduling: run pipelines daily, weekly, etc.

Lecture 121 Integration Options

Lecture 122 Use cases: automated retraining, batch scoring workflows

Section 46: Module 6 - Section 7. Best Practices for Pipelines

Lecture 123 Reuse steps as components

Lecture 124 Version your pipeline scripts and datasets

Lecture 125 Monitor each step independently

Lecture 126 Use consistent naming and tagging for traceability

Section 47: Module 6 - ML Pipelines in Azure ML - Quiz

Section 48: Module 7 - Overview of Deployment Targets in Azure ML

Lecture 127 Azure Container Instances

Lecture 128 Azure Kubernetes Services

Lecture 129 Managed Online Endpoints: serverless, scalable, easier setup

Lecture 130 Components needed for deployment

Lecture 131 How Azure ML wraps these into a deployable container

Section 49: Module 7 - Creating a Real-time Inference Endpoint

Lecture 132 LAB07A-Creating a real-time inference endpoint - PART 1

Lecture 133 LAB07A-Creating a real-time inference endpoint - PART 2

Lecture 134 LAB07A-Creating a real-time inference endpoint - PART 3

Section 50: Module 7 - Consuming Real-time Endpoints via REST API

Lecture 135 Authentication options: endpoint key, Azure ML token

Lecture 136 How to format JSON request payload

Lecture 137 Handle response and error formats
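Calling a real-time endpoint is an ordinary HTTPS POST; building the JSON payload and auth header needs only the standard library. The scoring URI, key, and input schema below are hypothetical stand-ins, not values from the course labs:

```python
import json
import urllib.request

# Hypothetical endpoint details - replace with your own.
scoring_uri = "https://my-endpoint.eastus.inference.ml.azure.com/score"
endpoint_key = "<endpoint-key>"

# The input schema depends on the model; tabular models commonly take rows of features.
payload = {"input_data": [[34.0, 1, 105.5], [51.0, 0, 89.2]]}
body = json.dumps(payload).encode("utf-8")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {endpoint_key}",  # endpoint key or Azure ML token
}

request = urllib.request.Request(scoring_uri, data=body, headers=headers)
# response = urllib.request.urlopen(request)    # uncomment against a live endpoint
# predictions = json.loads(response.read())     # then parse the JSON response
```

Error handling in practice means checking the HTTP status code and parsing the JSON error body the endpoint returns on failure.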

Section 51: Module 7 - Creating a Batch Inference Pipeline

Lecture 138 LAB07B-Creating a batch inference service - PART 1

Lecture 139 LAB07B-Creating a batch inference service - PART 2

Lecture 140 LAB07B-Creating a batch inference service - PART 3

Lecture 141 LAB07B(option2)-Creating a batch inference service via ENDPOINT - PART 1

Lecture 142 LAB07B(option2)-Creating a batch inference service via ENDPOINT - PART 2

Lecture 143 LAB07B(option2)-Creating a batch inference service via ENDPOINT - PART 3

Section 52: Module 7 - Versioning and Updating Deployments

Lecture 144 Deploy new model versions under the same endpoint

Lecture 145 Traffic splitting between deployments

Lecture 146 Clean up old deployments

Lecture 147 Monitoring latency, throughput, failure rate

Lecture 148 Module 7 - Recap
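Traffic splitting between deployments amounts to assigning each one a percentage of incoming requests; conceptually the routing is a weighted choice, as in this sketch (the deployment names and the 90/10 split are made up, and the real endpoint handles this server-side):

```python
def route(r, traffic):
    """Map a uniform draw r in [0, 1) to a deployment name according to
    percentage weights, e.g. {"blue": 90, "green": 10}."""
    total = sum(traffic.values())
    threshold = r * total
    cumulative = 0
    for deployment, weight in traffic.items():
        cumulative += weight
        if threshold < cumulative:
            return deployment
    return deployment  # guard for floating-point edge cases near r = 1.0

# Hypothetical 90/10 canary split between two deployments of one endpoint.
traffic = {"blue": 90, "green": 10}
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) is the usual safe-rollout pattern for a new model version.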

Section 53: Module 8 - Hyperparameters vs. Model Parameters

Lecture 149 Definitions: model parameters vs. hyperparameters

Lecture 150 Why tuning hyperparameters matters for model performance

Lecture 151 Examples of commonly tuned hyperparameters

Section 54: Module 8 - Azure ML Hyperparameter Tuning (HyperDrive / SweepJob)

Lecture 152 How Azure ML enables automatic tuning

Lecture 153 Search strategies: Grid, Random, Bayesian

Lecture 154 Early termination policies: Bandit, Median Stopping

Lecture 155 Overview of tuning configuration
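To make the tuning concepts concrete, here is a pure-Python sketch of random sampling over a search space together with a simplified median-stopping check. The search space and metric values are invented, and Azure ML's actual SweepJob machinery is far richer than this:

```python
import random

def sample_random(space, rng):
    """Draw one configuration: tuples are continuous (low, high) ranges,
    lists are discrete choices."""
    config = {}
    for name, spec in space.items():
        if isinstance(spec, tuple):
            config[name] = rng.uniform(*spec)
        else:
            config[name] = rng.choice(spec)
    return config

def median_stop(metric_so_far, peer_metrics):
    """Simplified median stopping: halt a run whose metric so far is below
    the median of its peers at the same point (higher is better)."""
    ranked = sorted(peer_metrics)
    median = ranked[len(ranked) // 2]
    return metric_so_far < median

# Hypothetical search space, not tied to any specific lab.
space = {"learning_rate": (1e-4, 1e-1), "batch_size": [16, 32, 64]}
rng = random.Random(42)
trial = sample_random(space, rng)
```

Early termination is what makes large sweeps affordable: runs that are clearly underperforming their peers are cancelled instead of consuming cluster time to completion.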

Section 55: Module 8 - Performing Hyperparameter Tuning in Azure ML

Lecture 156 LAB08A-Performing Hyperparameter tuning - PART 1

Lecture 157 LAB08A-Performing Hyperparameter tuning - PART 2

Lecture 158 LAB08A-Performing Hyperparameter tuning - PART 3

Section 56: Module 8 - Introduction to Automated Machine Learning (AutoML) Type

Lecture 159 What Does AutoML Do?

Lecture 160 Supported tasks: classification, regression, forecasting

Lecture 161 Key features: built-in explainability; best practices built in (CV, class balancing)

Section 57: Module 8 - Running an AutoML Experiment in Azure ML

Lecture 162 LAB08B-Running an AutoML Experiment via SDKv2 - PART 1

Lecture 163 LAB08B-Running an AutoML Experiment via SDKv2

Section 58: Module 8 - Understanding AutoML Output & Explainability

Lecture 164 What's generated after an AutoML Run

Lecture 165 When to use HyperDrive vs AutoML

Lecture 166 Explore Outputs in Studio - Leaderboard & Feature Importance

Lecture 167 LAB08C - Understanding AutoML Output

Section 59: Module 8 - Responsible AI Features in AutoML

Lecture 168 AutoML includes: feature importance, charts, data validation, leakage checks, balance

Lecture 169 How do we make these insights available?

Section 60: Module 9 - Why Model Interpretability Matters

Lecture 170 Need for Transparency in ML

Lecture 171 Global vs. Local Interpretability

Section 61: Module 9 - Model Explanation Techniques in Azure ML

Lecture 172 How Azure ML uses SHAP under the hood

Lecture 173 SHAP Supported Methods

Lecture 174 Explanation Types

Lecture 175 Works on AutoML and custom models
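SHAP itself involves heavier machinery, but the idea behind a global explanation can be illustrated with permutation importance on a toy model: shuffle one feature and see how much the error grows. Everything below is synthetic and only a sketch of the principle:

```python
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def predict(rows):
    # Toy "model": the prediction depends only on the first feature.
    return [3.0 * x0 + 0.0 * x1 for x0, x1 in rows]

# Synthetic data that the toy model fits exactly.
X = [(1.0, 9.0), (2.0, 8.0), (3.0, 7.0), (4.0, 6.0)]
y = [3.0, 6.0, 9.0, 12.0]

baseline = mse(y, predict(X))

def permutation_importance(col):
    # "Shuffle" one column (deterministically reversed here) and measure
    # how much the error grows; an unused feature barely moves the error.
    shuffled_col = [row[col] for row in X][::-1]
    X_perm = [
        (shuffled_col[i], row[1]) if col == 0 else (row[0], shuffled_col[i])
        for i, row in enumerate(X)
    ]
    return mse(y, predict(X_perm)) - baseline

importance = [permutation_importance(0), permutation_importance(1)]
```

The used feature shows a large error increase while the ignored one shows none, which is the same intuition a global SHAP summary conveys, computed far more carefully.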

Section 62: Module 9 - Reviewing AutoML Explanations

Lecture 176 LAB09A - Reviewing AutoML Explanations - PART 1

Lecture 177 LAB09A - Reviewing AutoML Explanations - PART 2

Lecture 178 LAB09B - Interpreting Models and Tabular Explainer code

Section 63: Module 9 - Using the Explanation Client and SDK

Lecture 179 How to use Explanation Client

Section 64: Module 9 - Responsible AI & Fairness: What Azure ML Covers

Lecture 180 Azure ML includes some built-in guardrails

Lecture 181 Explanation helps in identifying: bias, unintended proxies, outlier-driven

Lecture 182 LAB09C - Interpreting Models with Responsible AI

Section 65: Module 10 - Why Monitor ML Models in Production?

Lecture 183 Monitoring Models - Common Failure Points

Lecture 184 Categories of Monitoring

Section 66: Module 10 - Overview of Monitoring Tools in Azure ML

Lecture 185 Built-in monitoring tools in Azure

Lecture 186 When each is used and what they track

Section 67: Module 10 - Monitoring Model Services with Application Insights

Lecture 187 LAB10A-App Insights in Azure Cloud - PART 1

Lecture 188 LAB10A-App Insights in Azure Cloud - PART 2

Lecture 189 LAB10A-App Insights in Azure Cloud - PART 3

Lecture 190 LAB10B - App Insights in Azure ML Studio

Section 68: Module 10 - Logging Custom Metrics in score.py

Lecture 191 Explain how you can log predictions, processing times & confidence scores

Lecture 192 Via the App Insights SDK or custom logging
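A scoring script for a managed deployment exposes init() and run(); custom values such as processing time can be emitted from run() with ordinary logging and picked up by Application Insights once it is wired to the endpoint. The model below is a stub, and the final two lines are only a local smoke test, not part of a deployed script:

```python
import json
import logging
import time

logger = logging.getLogger("scoring")
model = None

def init():
    """Called once when the deployment starts; load the real model here."""
    global model
    model = lambda rows: [sum(row) for row in rows]  # stub model for illustration

def run(raw_data):
    """Called per request; log latency alongside the prediction count."""
    start = time.perf_counter()
    rows = json.loads(raw_data)["input_data"]
    predictions = model(rows)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # With App Insights attached to the endpoint, log lines like this (or
    # custom dimensions via the App Insights SDK) become queryable metrics.
    logger.info("processing_time_ms=%.2f n_rows=%d", elapsed_ms, len(rows))
    return {"predictions": predictions}

# Local smoke test only - the platform calls init()/run() in deployment.
init()
result = run(json.dumps({"input_data": [[1, 2], [3, 4]]}))
```

Keeping the logged fields structured (key=value pairs) makes them straightforward to query later in Application Insights.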

Section 69: Module 10 - Monitoring Data Drift in Azure ML

Lecture 193 Actions on drift

Lecture 194 Actions on service failures

Lecture 195 LAB10C - Monitoring Data Drift in Azure ML Studio
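At its simplest, a data drift check compares a statistic of incoming data against the training-time baseline; a toy sketch follows (the threshold and data are invented, and Azure ML's drift monitors use richer distance measures than a mean shift):

```python
def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def drifted(baseline, current, z_threshold=2.0):
    """Flag drift when the current mean moves more than z_threshold
    baseline standard deviations away from the baseline mean."""
    base_mean, base_std = mean_std(baseline)
    cur_mean, _ = mean_std(current)
    return abs(cur_mean - base_mean) > z_threshold * base_std

baseline = [10.0, 11.0, 9.0, 10.0, 10.0]   # training-time feature values (toy)
stable   = [10.2, 9.8, 10.1, 10.0, 9.9]    # similar distribution: no drift
shifted  = [14.0, 15.0, 13.5, 14.2, 14.8]  # clearly shifted: drift
```

The "actions on drift" lectures then cover what to do when such a check fires: alerting, retraining, or rolling back a deployment.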

Who this course is for

Data Scientists seeking to scale their machine learning workflows using Azure Machine Learning and automate model deployment.

Machine Learning Engineers interested in operationalizing models using pipelines, endpoints, and Azure DevOps integration.

AI Engineers and Researchers working with large-scale models (LLMs) and looking to apply prompt engineering, RAG, and fine-tuning in production.

MLOps Professionals focused on implementing CI/CD pipelines, model versioning, and lifecycle management using Azure services.

Developers with a Data Focus transitioning into AI/ML roles and looking to gain hands-on experience with real-world projects in the cloud.

Cloud Architects and Solution Engineers wanting to design scalable and secure ML architectures using Azure services and tools.

IT Professionals preparing for the Microsoft DP-100 Certification, aiming to validate their skills in designing and implementing data science solutions on Azure.

University Students and Bootcamp Graduates with basic ML and Python knowledge, looking to build portfolio-ready projects and gain practical industry exposure.