Part I: Programming Fundamentals of High Performance Distributed Computing
Introduction
Getting Started with Hadoop
Getting Started with Spark
Programming Internals of Scalding and Spark
Part II: Case studies using Hadoop, Scalding and Spark
Case Study I: Data Clustering using Scalding and Spark
Case Study II: Data Classification using Scalding and Spark
Case Study III: Regression Analysis using Scalding and Spark
Case Study IV: Recommender System using Scalding and Spark
This timely text/reference describes the development and implementation of large-scale distributed processing systems using open-source tools and technologies. Comprehensive in scope, the book presents state-of-the-art material on building high performance distributed computing systems, providing practical guidance and best practices as well as describing theoretical software frameworks.

Features:
- describes the fundamentals of building scalable software systems for large-scale data processing in the new paradigm of high performance distributed computing;
- presents an overview of the Hadoop ecosystem, followed by step-by-step instructions on its installation, programming and execution;
- reviews the basics of Spark, including resilient distributed datasets, and examines Hadoop streaming and working with Scalding;
- provides detailed case studies on approaches to clustering, data classification and regression analysis;
- explains the process of creating a working recommender system using Scalding and Spark.