Today, we’re surrounded by data. People upload videos, take pictures on their cell phones, text friends, update their Facebook status, leave comments around the web, click on ads, and so forth. Machines, too, are generating and keeping more and more data. To process such large datasets, there is a need for specialized tools.
This course covers two important frameworks, Hadoop and Spark, which provide some of the most important tools for carrying out enormous big data tasks. The first module of the course starts with an introduction to big data and then advances into big data ecosystem tools and technologies such as HDFS, YARN, MapReduce, and Hive.
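To give a feel for the MapReduce model mentioned above, here is a minimal word-count sketch in plain Python. A real Hadoop job distributes the map and reduce phases across a cluster; the function names, shuffle step, and sample input here are illustrative stand-ins, not Hadoop's actual API.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "spark and hadoop process big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])  # the word "big" appears three times
```

The same three-phase structure (map, shuffle, reduce) is what Hadoop parallelizes over many machines and a distributed filesystem.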
In the second module, the course takes you through an introduction to Spark and then dives into Scala and Spark concepts such as RDDs, transformations, actions, persistence, and deploying Spark applications. The course also covers Spark Streaming and Kafka, as well as various data formats such as JSON, XML, Avro, Parquet, and Protocol Buffers.
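The distinction between transformations and actions is central to the RDD concepts listed above: transformations (like `map` and `filter`) lazily describe derived datasets, while actions (like `reduce` or `collect`) trigger actual computation. The sketch below mimics that chaining with plain Python lists; the `rdd.map`/`rdd.filter` calls named in the comments are real Spark API methods, but no Spark cluster is involved here.

```python
data = [1, 2, 3, 4, 5, 6]

# Transformations: describe new datasets derived from existing ones.
# In Spark these would be lazy; here they run eagerly for illustration.
doubled = [x * 2 for x in data]                   # like rdd.map(lambda x: x * 2)
evens_over_five = [x for x in doubled if x > 5]   # like .filter(lambda x: x > 5)

# Action: materialize a result back to the driver program.
total = sum(evens_over_five)                      # like .reduce(lambda a, b: a + b)
print(total)
```

In real Spark code, nothing executes until the action is called, which lets the engine optimize and recover the whole lineage of transformations.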