This video is a comprehensive tutorial to help you learn the fundamentals of Apache Spark, one of the trending big data processing frameworks on the market today. We will introduce you to the various components of the Spark framework so you can efficiently process, analyze, and visualize data. You will also get a brief introduction to Apache Hadoop and the Scala programming language before you start writing Spark programs. You will learn Spark programming fundamentals such as Resilient Distributed Datasets (RDDs) and see which operations perform transformations or actions on an RDD. We'll show you how to load and save data from various sources, such as different file types, NoSQL stores, and RDBMS databases. We'll also explain advanced Spark programming concepts such as working with key-value pairs and accumulators. Finally, you'll discover how to create an effective Spark application and execute it on a Hadoop cluster to process data and gain insights that inform business decisions. By the end of this video, you will be well-versed in the fundamentals of Apache Spark and able to apply them in your own Spark applications. Style and Approach: Filled with examples, this course will help you learn the fundamentals of Apache Spark and get started quickly. You will learn to build Spark applications and execute them on a Hadoop cluster.
This course is for data scientists, big data developers, and analysts who want to learn the fundamentals of Apache Spark from a single, comprehensive source instead of spending countless hours piecing together bits from different places on the internet. Some familiarity with Scala would be helpful.