Modern Reinforcement Learning: Deep Q Learning in PyTorch
Last updated 10/2020
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz
Language: English | Size: 2.14 GB | Duration: 5h 42m
How to Turn Deep Reinforcement Learning Research Papers Into Agents That Beat Classic Atari Games
What you'll learn
How to read and implement deep reinforcement learning papers
How to code Deep Q learning agents
How to code Double Deep Q learning agents
How to code Dueling Deep Q and Dueling Double Deep Q learning agents
How to write modular and extensible deep reinforcement learning software
How to automate hyperparameter tuning with command line arguments
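As a small taste of the last point above, here is a minimal sketch of how hyperparameters might be exposed as command line arguments using Python's argparse; the flag names and default values are illustrative assumptions, not the course's actual interface.

import argparse

# Illustrative sketch only: flag names and defaults are assumptions,
# not the course's actual command line interface.
parser = argparse.ArgumentParser(description="Deep Q learning experiments")
parser.add_argument("--env", type=str, default="PongNoFrameskip-v4",
                    help="OpenAI Gym environment id")
parser.add_argument("--lr", type=float, default=1e-4,
                    help="learning rate for the optimizer")
parser.add_argument("--gamma", type=float, default=0.99,
                    help="discount factor")
parser.add_argument("--eps-start", type=float, default=1.0,
                    help="initial epsilon for epsilon-greedy action selection")
parser.add_argument("--n-games", type=int, default=500,
                    help="number of games to train for")
args = parser.parse_args()

print(f"Training on {args.env} with lr={args.lr}, gamma={args.gamma}")

A script built this way can be launched repeatedly with different values (for example, python main.py --lr 1e-4 --n-games 300), which is what makes hyperparameter sweeps easy to automate.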
Requirements
Some College Calculus
Exposure To Deep Learning
Comfortable with Python
Description
In this complete deep reinforcement learning course you will learn a repeatable framework for reading and implementing deep reinforcement learning research papers. You will read the original papers that introduced the Deep Q learning, Double Deep Q learning, and Dueling Deep Q learning algorithms. You will then learn how to implement these in Pythonic and concise PyTorch code that can be extended to include any future deep Q learning algorithms. These algorithms will be used to solve a variety of environments from the OpenAI Gym's Atari library, including Pong, Breakout, and Bank Heist.
You will learn the key to making these Deep Q Learning algorithms work, which is how to modify the OpenAI Gym's Atari library to meet the specifications of the original Deep Q Learning papers. You will learn how to:
Repeat actions to reduce computational overhead
Rescale the Atari screen images to increase efficiency
Stack frames to give the Deep Q agent a sense of motion
Evaluate the Deep Q agent's performance with random no-ops to deal with model overtraining
Clip rewards to enable the Deep Q learning agent to generalize across Atari games with different score scales
A short code sketch of two of these modifications follows this description.
If you do not have prior experience in reinforcement or deep reinforcement learning, that's no problem. Included in the course is a complete and concise course on the fundamentals of reinforcement learning. The introductory course in reinforcement learning will be taught in the context of solving the Frozen Lake environment from the OpenAI Gym. We will cover:
Markov decision processes
Temporal difference learning
The original Q learning algorithm
How to solve the Bellman equation
Value functions and action value functions
Model free vs. model based reinforcement learning
Solutions to the explore-exploit dilemma, including optimistic initial values and epsilon-greedy action selection
Also included is a mini course in deep learning using the PyTorch framework. This is geared toward students who are familiar with the basic concepts of deep learning but not the specifics, or those who are comfortable with deep learning in another framework such as TensorFlow or Keras. You will learn how to code a deep neural network in PyTorch, as well as how convolutional neural networks function. This will be put to use in implementing a naive Deep Q learning agent to solve the CartPole problem from the OpenAI Gym.
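To give a flavor of those Atari modifications, below is a minimal sketch of two of them, action repetition and reward clipping, written as OpenAI Gym wrappers. It assumes an older (pre-0.26) gym API where step returns four values and that the Atari dependencies are installed; the class names and the repeat count of 4 are illustrative choices, not the course's actual code.

import gym
import numpy as np


class RepeatAction(gym.Wrapper):
    """Repeat each chosen action for a fixed number of frames and sum the rewards."""

    def __init__(self, env, repeat=4):
        super().__init__(env)
        self.repeat = repeat

    def step(self, action):
        total_reward = 0.0
        done = False
        for _ in range(self.repeat):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info


class ClipReward(gym.RewardWrapper):
    """Clip rewards to the sign of the score change: -1, 0, or +1."""

    def reward(self, reward):
        return float(np.sign(reward))


if __name__ == "__main__":
    # Wrap a no-frameskip Atari environment so the wrapper controls action repetition.
    env = ClipReward(RepeatAction(gym.make("PongNoFrameskip-v4"), repeat=4))
    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())
    print(reward)  # always -1.0, 0.0, or 1.0 after clipping

Each concern lives in its own wrapper, so screen rescaling, frame stacking, and random no-ops can be layered on in the same way.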
Overview
Section 1: Introduction
Lecture 1 What You Will Learn In This Course
Lecture 2 Required Background, Software, and Hardware
Lecture 3 How to Succeed in this Course
Section 2: Fundamentals of Reinforcement Learning
Lecture 4 Agents, Environments, and Actions
Lecture 5 Markov Decision Processes
Lecture 6 Value Functions, Action Value Functions, and the Bellman Equation
Lecture 7 Model Free vs. Model Based Learning
Lecture 8 The Explore-Exploit Dilemma
Lecture 9 Temporal Difference Learning
Section 3: Deep Learning Crash Course
Lecture 10 Dealing with Continuous State Spaces with Deep Neural Networks
Lecture 11 Naive Deep Q Learning in Code: Step 1 - Coding the Deep Q Network
Lecture 12 Naive Deep Q Learning in Code: Step 2 - Coding the Agent Class
Lecture 13 Naive Deep Q Learning in Code: Step 3 - Coding the Main Loop and Learning
Lecture 14 Naive Deep Q Learning in Code: Step 4 - Verifying the Functionality of Our Code
Lecture 15 Naive Deep Q Learning in Code: Step 5 - Analyzing Our Agent's Performance
Lecture 16 Dealing with Screen Images with Convolutional Neural Networks
Section 4: Human Level Control Through Deep Reinforcement Learning: From Paper to Code
Lecture 17 How to Read Deep Learning Papers
Lecture 18 Analyzing the Paper
Lecture 19 How to Modify the OpenAI Gym Atari Environments
Lecture 20 How to Preprocess the OpenAI Gym Atari Screen Images
Lecture 21 How to Stack the Preprocessed Atari Screen Images
Lecture 22 How to Combine All the Changes
Lecture 23 How to Add Reward Clipping, Fire First, and No Ops
Lecture 24 How to Code the Agent's Memory
Lecture 25 How to Code the Deep Q Network
Lecture 26 Coding the Deep Q Agent: Step 1 - Coding the Constructor
Lecture 27 Coding the Deep Q Agent: Step 2 - Epsilon-Greedy Action Selection
Lecture 28 Coding the Deep Q Agent: Step 3 - Memory, Model Saving and Network Copying
Lecture 29 Coding the Deep Q Agent: Step 4 - The Agent's Learn Function
Lecture 30 Coding the Deep Q Agent: Step 5 - The Main Loop and Analyzing the Performance
Section 5: Deep Reinforcement Learning with Double Q Learning
Lecture 31 Analyzing the Paper
Lecture 32 Coding the Double Q Learning Agent and Analyzing Performance
Section 6: Dueling Network Architectures for Deep Reinforcement Learning
Lecture 33 Analyzing the Paper
Lecture 34 Coding the Dueling Deep Q Network
Lecture 35 Coding the Dueling Deep Q Learning Agent and Analyzing Performance
Lecture 36 Coding the Dueling Double Deep Q Learning Agent and Analyzing Performance
Section 7: Improving On Our Solutions
Lecture 37 Implementing a Command Line Interface for Rapid Model Testing
Lecture 38 Consolidating Our Code Base for Maximum Extensibility
Lecture 39 How to Test Our Agent and Watch it Play the Game in Real Time
Section 8: Conclusion
Lecture 40 Summarizing What We've Learned
Section 9: Bonus Lecture
Lecture 41 Bonus Video: Where to Go From Here
Who this course is for
Python developers eager to learn about cutting-edge deep reinforcement learning