Apache Druid Complete Guide
Genre: eLearning | MP4 | Video: h264, 1280x720 | Audio: AAC, 48.0 KHz
Language: English | Size: 856 MB | Duration: 21 lectures • 2h 0m
Learn Druid architecture, Kafka ingestion, schema evolution, tuning, and Druid-Hive integration with a Twitter example
What you'll learn
In-depth knowledge of Druid components and architecture
Real-time data ingestion from Apache Kafka using a Twitter producer application
Tuning Apache Druid for better throughput
Accessing Apache Druid tables through the Avatica JDBC driver
Learning Schema Evolution
Complete, hands-on Druid-Hive integration
Requirements
Basics of Apache Kafka and Apache Hive
Practical experience with MySQL and AWS
Description
What will you learn from this course?
In this course, we cover Apache Druid end to end: its salient features and its integration with Apache Hive.
We start this course by gaining theoretical knowledge on Druid and its key features.
Next, we move to the practical part, where we install Druid locally and walk you through its web console.
We then enhance the setup by switching the Druid metadata store to MySQL and deep storage to S3.
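Swapping the metadata store and deep storage is done in Druid's common runtime properties. A minimal sketch of the relevant settings, assuming a local MySQL instance and placeholder bucket, database, and credential values:

```properties
# Load the extensions needed for MySQL metadata storage and S3 deep storage
druid.extensions.loadList=["mysql-metadata-storage", "druid-s3-extensions"]

# Metadata storage: MySQL (host, database, user, password are placeholders)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=changeme

# Deep storage: S3 (bucket name and prefix are placeholders)
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
```

In the course, these edits go into `common.runtime.properties`, after which all Druid services are restarted so segments and metadata land in the new stores.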
After that, we write our own Twitter producer app, which pulls tweets from Twitter in real time and pushes them to Apache Kafka.
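The producer's core job is to flatten each tweet into the JSON shape the ingestion spec will expect and push it to a topic. A hedged Python sketch (the topic name, field choices, and the `kafka-python`-style producer passed in are assumptions, not the course's exact code):

```python
import json

TOPIC = "tweets"  # assumed topic name; the course may use a different one

def tweet_to_record(tweet: dict) -> bytes:
    """Serialize only the fields the ingestion spec needs into a Kafka value."""
    record = {
        "created_at": tweet["created_at"],
        "user": tweet["user"]["screen_name"],
        "lang": tweet.get("lang", "und"),  # "und" = undetermined language
        "text": tweet["text"],
    }
    return json.dumps(record).encode("utf-8")

def publish(tweets, producer):
    """Push each serialized tweet; `producer` would be e.g. a kafka-python
    KafkaProducer(bootstrap_servers="localhost:9092")."""
    for tweet in tweets:
        producer.send(TOPIC, value=tweet_to_record(tweet))
```

Keeping serialization in a pure function like `tweet_to_record` makes the record shape easy to test without a running Kafka broker.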
We create a Kafka ingestion task on Druid, which pulls tweets from Kafka and stores them in Apache Druid.
We also learn how to apply transformations, filters, and schema configuration during the Kafka ingestion process.
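Transformations, filters, and the schema all live in the Kafka supervisor spec. A trimmed sketch, assuming a `tweets` topic and the tweet fields above (datasource, columns, and the example transform are placeholders):

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "consumerProperties": { "bootstrap.servers": "localhost:9092" },
      "topic": "tweets"
    },
    "dataSchema": {
      "dataSource": "tweets",
      "timestampSpec": { "column": "created_at", "format": "auto" },
      "dimensionsSpec": { "dimensions": ["user", "lang", "text"] },
      "transformSpec": {
        "transforms": [
          { "type": "expression", "name": "user_lower", "expression": "lower(user)" }
        ],
        "filter": { "type": "selector", "dimension": "lang", "value": "en" }
      }
    },
    "tuningConfig": { "type": "kafka" }
  }
}
```

Here the transform derives a lowercased username column, and the filter keeps only English-language tweets at ingestion time.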
With that practical knowledge in mind, we return to theory and dig deeper into Druid's internal working principles.
We learn how data is distributed across the data nodes and retrieved in real time.
Next, we tune our ingestion pipeline for better results. Lastly, we explore salient features such as accessing Druid through JDBC and schema evolution.
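The course uses the Avatica JDBC driver (a Java-side client). For a quick sanity check of the same SQL from Python, Druid also exposes an HTTP SQL endpoint; a hedged sketch, assuming the router listens on its default port 8888 and a `tweets` datasource exists:

```python
import json
from urllib import request

# Assumed router address; adjust host/port for your deployment.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

def build_sql_request(sql: str) -> request.Request:
    """Build a POST to Druid's SQL endpoint; the body is {"query": "..."}."""
    body = json.dumps({"query": sql}).encode("utf-8")
    return request.Request(
        DRUID_SQL_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_sql_request('SELECT lang, COUNT(*) AS cnt FROM "tweets" GROUP BY lang')
# With Druid running, urllib.request.urlopen(req) returns the result rows as JSON.
```

The JDBC path in the course uses an Avatica connection URL pointing at the same SQL layer, so queries written against one interface work against the other.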
In the second module, we cover Druid-Hive integration.
First, we learn what this integration is. Next, we provision a VM on AWS and install Apache Druid on it.
After that, we launch a Hive EMR cluster on AWS and configure it to communicate with Druid.
Lastly, we run the same Druid queries from Hive and learn how computation is pushed down to Druid for better performance.
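On the Hive side, the integration boils down to exposing a Druid datasource as an external table via the Druid storage handler. A sketch (the table and datasource names are placeholders, and the cluster must already have the Druid broker address configured):

```sql
-- Expose an existing Druid datasource as an external Hive table
CREATE EXTERNAL TABLE druid_tweets
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "tweets");

-- Aggregations like this can be pushed down to Druid by the storage handler
SELECT `lang`, COUNT(*) FROM druid_tweets GROUP BY `lang`;
```

Because the group-by runs inside Druid rather than in Hive, such queries return far faster than scanning the raw data in Hive would.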
Overall, this course is a mix of theory and practical sessions. Throughout, we use the latest Druid and Hive versions. By the end of this course, you will excel at Apache Druid.
Who this course is for
Data Engineers
Software Engineers