Build a Secure Data Lake in AWS Using AWS Lake Formation

Posted By: ELK1nG

Last updated 2/2022
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 kHz
Language: English | Size: 1.50 GB | Duration: 3h 18m

Step-by-step guide to setting up a data lake in AWS using Lake Formation, Glue, DataBrew, Athena, Redshift, Macie, etc.

What you'll learn
How to quickly set up a data lake in AWS using AWS Lake Formation
You will learn to build real-world data pipelines using AWS Glue Studio and ingest data from sources such as RDS, Kinesis Firehose, and DynamoDB
You will learn how to transform data using AWS Glue Studio and AWS Glue DataBrew
You will acquire solid data engineering skills in AWS using AWS Lake Formation and Glue Studio, including blueprints and workflows in Lake Formation
Requirements
Basic understanding of cloud computing
Basic understanding of what a data lake and a data warehouse are is helpful but not required
An active AWS account is required to be able to follow along
Description
In this course, we will create a data lake using AWS Lake Formation and bring data warehouse capabilities to the data lake to form the lakehouse architecture using Amazon Redshift. Using Lake Formation, we will also collect and catalog data from different data sources, move the data into our S3 data lake, and then clean and classify it. The course follows the logical progression of a real-world project implementation, with hands-on experience of setting up a data lake and creating data pipelines for ingesting and transforming your data in preparation for analytics and reporting.

Chapter 1
Set up the data lake using Lake Formation
Create different data sources (MySQL RDS and Kinesis)
Ingest data from the MySQL RDS data source into the data lake by setting up blueprint and workflow jobs in Lake Formation
Catalog our database using crawlers
Use governed tables for managing access control and security
Query our data lake using Athena

Chapter 2
Explore the use of AWS Glue DataBrew for profiling and understanding our data before we start performing complex ETL jobs
Create recipes for manipulating the data in our data lake using different transformations
Clean and normalise data
Run jobs to apply the recipes to all new data or larger datasets

Chapter 3
Introduce Glue Studio
Author and monitor ETL jobs for transforming our data and moving it between the different zones of our data lake
Create a DynamoDB source and ingest data into our data lake using AWS Glue

Chapter 4
Introduce and create a Redshift cluster to bring data warehouse capabilities to our data lake to form the lakehouse architecture
Create ETL jobs for moving data from our lake into the warehouse for analytics
Use Redshift Spectrum to query data in our S3 data lake without the need to duplicate data or infrastructure

Chapter 5
Introduce Amazon Macie for managing data security and data privacy, ensuring we can continue to identify sensitive data at scale as our data lake grows
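
As a small preview of the Chapter 1 setup, the sketch below registers an S3 location with Lake Formation and creates a Glue Data Catalog database using boto3. The bucket name, IAM role ARN, and database name are hypothetical placeholders, not values from the course:

    import boto3

    # Hypothetical names for illustration; substitute your own resources.
    BUCKET = "my-datalake-raw-zone"  # S3 bucket acting as the raw zone
    ROLE_ARN = "arn:aws:iam::123456789012:role/LakeFormationRole"

    lakeformation = boto3.client("lakeformation")
    glue = boto3.client("glue")

    # Register the S3 location so Lake Formation can manage access to it.
    lakeformation.register_resource(
        ResourceArn=f"arn:aws:s3:::{BUCKET}",
        RoleArn=ROLE_ARN,
    )

    # Create a Glue Data Catalog database to hold tables over the raw zone.
    glue.create_database(
        DatabaseInput={
            "Name": "datalake_raw",
            "Description": "Raw zone of the data lake",
        }
    )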

Overview

Section 1: Introduction

Lecture 1 Introduction to the course

Section 2: Setting up the Data Lake with AWS Lake Formation

Lecture 2 Configuring S3 and Lake Formation

Lecture 3 Simple file ingestion into the data lake
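
The simplest form of ingestion is a plain S3 upload into the lake's raw zone. A minimal sketch, assuming a hypothetical local CSV and the placeholder bucket from the earlier sketch:

    import boto3

    s3 = boto3.client("s3")

    # Upload a local CSV into the raw zone of the data lake.
    # Bucket and key names are hypothetical placeholders.
    s3.upload_file(
        Filename="customers.csv",
        Bucket="my-datalake-raw-zone",
        Key="ingest/customers/customers.csv",
    )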

Lecture 4 Use blueprints and workflows in Lake Formation for ingesting data from MySQL RDS
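
Lake Formation blueprints are configured in the console and generate Glue workflows under the hood. A rough manual equivalent of two of the pieces they create, a JDBC connection plus a crawler over the MySQL RDS source, might look like the sketch below (connection URL, credentials, role, and names are all placeholders):

    import boto3

    glue = boto3.client("glue")

    # Glue connection to the MySQL RDS instance (placeholder values).
    glue.create_connection(
        ConnectionInput={
            "Name": "mysql-rds-conn",
            "ConnectionType": "JDBC",
            "ConnectionProperties": {
                "JDBC_CONNECTION_URL": "jdbc:mysql://my-rds.example.com:3306/salesdb",
                "USERNAME": "admin",
                "PASSWORD": "change-me",
            },
        }
    )

    # Crawler that catalogs every table in the salesdb schema.
    glue.create_crawler(
        Name="salesdb-crawler",
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
        DatabaseName="datalake_raw",
        Targets={
            "JdbcTargets": [
                {"ConnectionName": "mysql-rds-conn", "Path": "salesdb/%"}
            ]
        },
    )

    glue.start_crawler(Name="salesdb-crawler")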

Lecture 5 Ingest real-time data using Kinesis Firehose into the data lake
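
For the real-time path, a Firehose delivery stream buffers incoming records and delivers them to S3 in batches. A minimal producer-side sketch, assuming a hypothetical delivery stream named orders-stream already exists and targets the data lake bucket:

    import json
    import boto3

    firehose = boto3.client("firehose")

    # Send one record into the (hypothetical) delivery stream; Firehose
    # buffers and writes batches to the S3 data lake on its own schedule.
    firehose.put_record(
        DeliveryStreamName="orders-stream",
        Record={"Data": json.dumps({"order_id": 1, "amount": 42.5}).encode() + b"\n"},
    )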

Lecture 6 Security and governance of our data lake with governed tables
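
Access control in Lake Formation is expressed as grants on catalog resources. A sketch granting a principal column-restricted SELECT on one table (principal ARN, database, table, and column names are hypothetical):

    import boto3

    lakeformation = boto3.client("lakeformation")

    # Allow the analyst role to SELECT only non-sensitive columns.
    lakeformation.grant_permissions(
        Principal={
            "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"
        },
        Resource={
            "TableWithColumns": {
                "DatabaseName": "datalake_raw",
                "Name": "customers",
                "ColumnNames": ["customer_id", "country", "signup_date"],
            }
        },
        Permissions=["SELECT"],
    )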

Section 3: Preparation and analysis of data in our data lake using AWS Glue DataBrew

Lecture 7 Introduction to AWS Glue DataBrew

Lecture 8 Analysis and transformation of data in our data lake with Glue DataBrew

Lecture 9 Create DataBrew recipes and apply them to larger datasets
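
Recipes are usually built interactively in the DataBrew console, but they can also be created and applied with boto3. A hedged sketch, with the recipe step, dataset name, role, and output location all placeholder assumptions:

    import boto3

    databrew = boto3.client("databrew")

    # A one-step recipe that upper-cases a column (names are placeholders).
    databrew.create_recipe(
        Name="customers-clean",
        Steps=[
            {
                "Action": {
                    "Operation": "UPPER_CASE",
                    "Parameters": {"sourceColumn": "country"},
                }
            }
        ],
    )

    # A job that applies the recipe to a registered DataBrew dataset
    # and writes the result back to the lake as Parquet.
    databrew.create_recipe_job(
        Name="customers-clean-job",
        DatasetName="customers-dataset",
        RecipeReference={"Name": "customers-clean"},
        RoleArn="arn:aws:iam::123456789012:role/DataBrewRole",
        Outputs=[
            {
                "Location": {"Bucket": "my-datalake-clean-zone", "Key": "customers/"},
                "Format": "PARQUET",
            }
        ],
    )

    databrew.start_job_run(Name="customers-clean-job")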

Section 4: Author, run and monitor ETL jobs using AWS Glue Studio

Lecture 10 Introduction to AWS Glue Studio

Lecture 11 Author ETL jobs for moving data between the different zones in our data lake
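
Glue Studio generates PySpark scripts much like the skeleton below, which reads a cataloged table from the raw zone and writes Parquet into a curated zone. Database, table, and S3 paths are placeholder assumptions:

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the cataloged raw-zone table.
    raw = glue_context.create_dynamic_frame.from_catalog(
        database="datalake_raw", table_name="customers"
    )

    # Rename/cast columns on the way to the curated zone.
    mapped = ApplyMapping.apply(
        frame=raw,
        mappings=[
            ("customer_id", "long", "customer_id", "long"),
            ("country", "string", "country", "string"),
        ],
    )

    # Write Parquet into the curated zone of the lake.
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://my-datalake-curated-zone/customers/"},
        format="parquet",
    )

    job.commit()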

Lecture 12 Ingest data from DynamoDB into the data lake using AWS Glue and catalog it
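
Inside a Glue job, the DynamoDB connector reads a table directly into a DynamicFrame. A self-contained sketch along those lines, with the table name and read throttle as placeholder assumptions:

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the (hypothetical) orders table straight out of DynamoDB,
    # throttled to half the table's provisioned read capacity.
    orders = glue_context.create_dynamic_frame.from_options(
        connection_type="dynamodb",
        connection_options={
            "dynamodb.input.tableName": "orders",
            "dynamodb.throughput.read.percent": "0.5",
        },
    )

    # Land it in the raw zone as Parquet, ready for a crawler to catalog.
    glue_context.write_dynamic_frame.from_options(
        frame=orders,
        connection_type="s3",
        connection_options={"path": "s3://my-datalake-raw-zone/orders/"},
        format="parquet",
    )

    job.commit()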

Section 5: Prepare our data for analytics and reporting

Lecture 13 Introduction to Amazon Redshift and setting up our Amazon Redshift cluster

Lecture 14 Author ETL job for moving data from our data lake into the Redshift warehouse
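
Within a Glue job, a DynamicFrame can be loaded into Redshift through a cataloged JDBC connection; Glue stages the data in S3 and issues a COPY behind the scenes. A sketch assuming a hypothetical Glue connection named redshift-conn and placeholder database and table names:

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the curated table from the catalog.
    customers = glue_context.create_dynamic_frame.from_catalog(
        database="datalake_curated", table_name="customers"
    )

    # Load it into Redshift through the cataloged Glue connection.
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=customers,
        catalog_connection="redshift-conn",
        connection_options={"dbtable": "public.customers", "database": "analytics"},
        redshift_tmp_dir=args["TempDir"],
    )

    job.commit()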

Lecture 15 Using Redshift Spectrum for querying data located in our data lake
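
Spectrum exposes the Glue Data Catalog as an external schema so Redshift can query S3 data in place, without copying it into the cluster. A sketch using the Redshift Data API to run the DDL and a query; the cluster identifier, database, user, role ARN, and table names are placeholder assumptions:

    import boto3

    redshift_data = boto3.client("redshift-data")

    # Map the Glue catalog database onto an external schema in Redshift.
    redshift_data.execute_statement(
        ClusterIdentifier="datalake-cluster",
        Database="analytics",
        DbUser="awsuser",
        Sql=(
            "CREATE EXTERNAL SCHEMA IF NOT EXISTS lake "
            "FROM DATA CATALOG DATABASE 'datalake_raw' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole';"
        ),
    )

    # Query S3-resident data in place; nothing is copied into the cluster.
    redshift_data.execute_statement(
        ClusterIdentifier="datalake-cluster",
        Database="analytics",
        DbUser="awsuser",
        Sql="SELECT country, COUNT(*) FROM lake.customers GROUP BY country;",
    )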

Section 6: Bonus

Lecture 16 Introduction to Amazon Macie for managing data security and privacy in our lake
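
Macie's sensitive-data discovery can also be driven from boto3. A sketch that enables Macie and starts a one-time classification job over the lake's raw zone; the account ID and bucket name are placeholder assumptions:

    import uuid
    import boto3

    macie = boto3.client("macie2")

    # Enable Macie in this account/region (skip if already enabled).
    macie.enable_macie()

    # One-time job scanning the raw zone for sensitive data such as PII.
    macie.create_classification_job(
        clientToken=str(uuid.uuid4()),
        jobType="ONE_TIME",
        name="datalake-pii-scan",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "123456789012", "buckets": ["my-datalake-raw-zone"]}
            ]
        },
    )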

Who this course is for:
Data Architects looking to architect data integration solutions in the AWS cloud
Data Engineers
Anyone looking to start a career as an AWS Data Engineer
Data Scientists, Data Analysts, and Database Administrators
IT professionals looking to move into the Data Engineering space