Neural Networks with TensorFlow and PyTorch
Last updated 3/2019
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz
Language: English | Size: 5.90 GB | Duration: 13h 1m
Unleash the power of TensorFlow and PyTorch to build and train Neural Networks effectively
What you'll learn
Get hands-on and understand Neural Networks with TensorFlow and PyTorch
Understand how and when to apply autoencoders
Develop an autonomous agent in an Atari environment with OpenAI Gym
Apply NLP and sentiment analysis to your data
Develop a multilayer perceptron neural network to predict fraud and hospital patient readmission
Build a convolutional neural network classifier to automatically identify a photograph
Learn how to build a recurrent neural network to forecast time series and stock market data
Know how to build a Long Short-Term Memory (LSTM) model to classify movie reviews as positive or negative using Natural Language Processing (NLP)
Get familiar with PyTorch fundamentals and code a deep neural network
Perform image captioning and grammar parsing using Natural Language Processing
Requirements
Basic knowledge of Python is required. Familiarity with TensorFlow and PyTorch will be beneficial.
Description
TensorFlow is quickly becoming the technology of choice for deep learning and machine learning because it makes it easy to develop powerful neural networks and intelligent machine learning applications. Like TensorFlow, PyTorch has a clean and simple API, which makes building neural networks faster and easier. It is also modular, which makes debugging your code a breeze. If you want to get hands-on with Deep Learning by building and training Neural Networks, this course is for you.

This course takes a step-by-step approach in which every topic is explained with the help of real-world examples. You will begin by learning Deep Learning algorithms in TensorFlow, such as Convolutional Neural Networks, and Deep Reinforcement Learning algorithms, such as Deep Q Networks and Asynchronous Advantage Actor-Critic. You will then explore Deep Reinforcement Learning algorithms in depth with real-world datasets to get a hands-on understanding of neural network programming and autoencoder applications. You will also learn how to program a machine to identify a human face, predict stock market prices, and process text as part of Natural Language Processing (NLP). Next, you will explore the imperative side of PyTorch for dynamic neural network programming. Finally, you will build two mini-projects, the first applying dynamic neural networks to image recognition and the second to NLP-oriented problems (grammar parsing).

By the end of this course, you will have a complete understanding of the essential ML libraries TensorFlow and PyTorch for developing and training neural networks of varying complexities, without any hassle.

Meet Your Expert(s):

We have the best work of the following esteemed author(s) to ensure that your learning journey is smooth:

Roland Meertens is currently developing computer vision algorithms for self-driving cars. Previously, he worked as a research engineer in a translation department, where he built a Neural Machine Translation implementation, a post-editor, and a tool that estimates the quality of a translated sentence. Last year, he worked at the Micro Aerial Vehicle Laboratory at Delft University of Technology on indoor localization (SLAM) and obstacle-avoidance behaviors for a drone that delivers food inside a restaurant; he also worked on detecting and following people using onboard computer vision algorithms on a stereo camera. For his Master's thesis, he did an internship at a company called SpirOps, where he worked on the development of a dialogue manager for project Romeo. In his Artificial Intelligence studies, he specialized in cognitive artificial intelligence and brain-computer interfacing.

Harveen Singh Chadha is an experienced researcher in Deep Learning and is currently working as a Self-Driving Car Engineer, focused on creating an ADAS (Advanced Driver Assistance Systems) platform. His passion is to help people who want to enter the Data Science universe.

Anastasia Yanina is a Senior Data Scientist with around 5 years of experience. She is an expert in Deep Learning and Natural Language Processing and constantly develops her skills. She is passionate about human-to-machine interactions and believes that bridging the gap may become possible with deep neural network architectures.
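The imperative, dynamic-graph style that the PyTorch sections build on can be illustrated with a minimal sketch (not taken from the course material; the model, layer sizes, and data below are placeholders): the network is ordinary Python code, tensors are evaluated eagerly, and autograd records the graph as the forward pass runs, which is what makes step-through debugging straightforward.

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_features=4, hidden=16, classes=3):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        h = torch.relu(self.fc1(x))  # eager execution: h is a real tensor you can print or inspect here
        return self.fc2(h)

model = TinyClassifier()
x = torch.randn(8, 4)                    # a batch of 8 dummy samples
target = torch.randint(0, 3, (8,))       # dummy class labels
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()                          # autograd built the graph dynamically during the forward pass
print(loss.item())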
Overview
Section 1: Learning Neural Networks with TensorFlow
Lecture 1 The Course Overview
Lecture 2 Solving Public Datasets
Lecture 3 Why We Use Docker and Installation Instructions
Lecture 4 Our Code, in a Jupyter Notebook
Lecture 5 Understanding TensorFlow
Lecture 6 The Iris Dataset
Lecture 7 The Human Brain and How to Formalize It
Lecture 8 Backpropagation
Lecture 9 Overfitting — Why We Split Our Train and Test Data
Lecture 10 Ground State Energies of 16,242 Molecules
Lecture 11 First Approach – Easy Layer Building
Lecture 12 Preprocessing Data
Lecture 13 Understanding the Activation Function
Lecture 14 The Importance of Hyperparameters
Lecture 15 Images of Written Digits
Lecture 16 Dense Layer Approach
Lecture 17 Convolution and Pooling Layers
Lecture 18 Convolution and Pooling Layers (Continued)
Lecture 19 From Activations to Probabilities – the Softmax Function
Lecture 20 Optimization and Loss Functions
Lecture 21 Large-Scale CelebFaces Attributes (CelebA) Dataset
Lecture 22 Building an Input Pipeline in TensorFlow
Lecture 23 Building a Convolutional Neural Network
Lecture 24 Batch Normalization
Lecture 25 Understanding What Your Network Learned – Visualizing Activations
Section 2: Advanced Neural Networks with TensorFlow
Lecture 26 The Course Overview
Lecture 27 The Approach of This Course
Lecture 28 Installing Docker and Downloading the Source Code for This Course
Lecture 29 Understanding Jupyter Notebooks and TensorFlow
Lecture 30 Visualizing Your Graph
Lecture 31 Adding Summaries
Lecture 32 Plotting the Weights in a Histogram
Lecture 33 Inspecting Input and Output
Lecture 34 Encoding MNIST Characters
Lecture 35 Practical Application – Denoising
Lecture 36 The Dropout Layer
Lecture 37 Variational Autoencoders
Lecture 38 The Omniglot Dataset
Lecture 39 What Is a Siamese Neural Network?
Lecture 40 Training and Testing a Siamese Neural Network
Lecture 41 Alternative Loss Functions
Lecture 42 Speed of Your Network
Lecture 43 Getting Started with the OpenAI Gym
Lecture 44 Random Search
Lecture 45 Reinforcement Learning Explained
Lecture 46 Reinforcement Learning Explained (Continued)
Lecture 47 Reinforcement Learning Tricks
Lecture 48 Playing Atari Games
Lecture 49 Defining Our Network
Lecture 50 Starting and Training a Session
Section 3: Hands-On Neural Network Programming with TensorFlow
Lecture 51 The Course Overview
Lecture 52 Introduction To Neural Networks
Lecture 53 Setting Up Environment
Lecture 54 Introduction To TensorFlow
Lecture 55 TensorFlow Installation
Lecture 56 Multilayer Perceptron Neural Network
Lecture 57 Forward Propagation & Loss Functions
Lecture 58 Backpropagation
Lecture 59 Creating First Neural Network to Predict Fraud
Lecture 60 Testing Neural Network to Predict Fraud
Lecture 61 Introduction To Convolutional Neural Networks
Lecture 62 Training a Convolutional Neural Network
Lecture 63 Testing a Convolutional Neural Network
Lecture 64 Introduction To Recurrent Neural Networks
Lecture 65 Training a Recurrent Neural Network
Lecture 66 Testing a Recurrent Neural Network
Lecture 67 Introduction To Long Short-Term Memory Network
Lecture 68 Training an LSTM Network
Lecture 69 Testing a Long Short-Term Memory Network
Lecture 70 Introduction To Generative Models
Lecture 71 Neural Style Transfer: Basics
Lecture 72 Results: Neural Style Transfer
Lecture 73 Introduction To Autoencoders
Lecture 74 Autoencoder in TensorFlow
Lecture 75 Training & Testing an Autoencoder
Section 4: Dynamic Neural Network Programming with PyTorch
Lecture 76 The Course Overview
Lecture 77 Installation Checklist
Lecture 78 Tensors, Autograd, and Backprop
Lecture 79 Backprop, Loss Functions, and Neural Networks
Lecture 80 PyTorch on GPU: First Steps
Lecture 81 Imperative Programming Architectures
Lecture 82 Static Graphs versus Dynamic Graphs
Lecture 83 Neural Network Debugging: Why Imperative Philosophy Helps
Lecture 84 Feedforward and Recurrent Neural Networks
Lecture 85 Convolutional Neural Networks
Lecture 86 Autoencoders
Lecture 87 Extensions with NumPy – Part 1
Lecture 88 Extensions with NumPy – Part 2
Lecture 89 Custom C++ and CUDA Extensions: Motivation
Lecture 90 Custom C++ and CUDA Extensions: Setuptools
Lecture 91 Custom C++ and CUDA Extensions: Binding to Python
Lecture 92 Custom C++ and CUDA Extensions: JIT Compilation
Lecture 93 Image Captioning: First Steps
Lecture 94 PyTorch DataLoaders
Lecture 95 Image Captioning: Theory
Lecture 96 Image Captioning: Practice
Lecture 97 Honor Track: Image Captioning Datasets
Lecture 98 Motivation and Section Overview
Lecture 99 Word Embeddings
Lecture 100 Sentiment Analysis with PyTorch
Lecture 101 Char-Level RNN for Text Generation
This course is for machine learning developers, engineers, and data science professionals who want to work with neural networks and deep learning using the powerful Python libraries TensorFlow and PyTorch.