Introduction to Hadoop Administration (TTCHADADM3)

Administrators who need to maintain a Hadoop cluster and its related components in a Linux environment

Please contact us for information about prerequisites.

Expected Duration
3 days


Apache Hadoop is an open source framework for creating reliable, distributed compute clusters. Credited with a role in the IBM Watson Jeopardy win in 2011, Hadoop can be used (with other related frameworks) to process large unstructured or semi-structured data sets from multiple sources to dissect, classify, learn from, and make suggestions for business analytics, decision support, and other advanced forms of machine intelligence.

This introductory-level, hands-on, lab-intensive course is geared for the administrator who is new to Hadoop and responsible for maintaining a Hadoop cluster and its related components. Hadoop is a system designed for massive scalability, and it is extremely fault-tolerant compared to other cluster architectures. As an administrator, you will need to install, configure, and maintain Hadoop on Linux in various compute environments.

This course agenda may be easily customized for addressing areas of specific interest to your team. There are lab variations that support Cloudera and Hortonworks distributions as well.


1. Hadoop Overview

  • Map/Reduce
  • Hadoop, YARN, and Spark
  • Mahout and MLlib
  • Alternate Frameworks

2. Hadoop Architecture

  • Hadoop Map/Reduce
  • YARN
  • HDFS
  • Spark
  • Cassandra
  • HBase
  • Hive
  • Pig

3. Installing Hadoop

  • Linux Considerations
  • SSH Configuration
  • Hadoop Installation
  • OS Security
  • NameNodes
  • JobTrackers
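The SSH configuration labs cover passwordless login, which Hadoop's start/stop scripts rely on to reach every node. A minimal single-node sketch, assuming a dedicated hadoop user with typical OpenSSH defaults:

```shell
# Generate a key with an empty passphrase and authorize it for the local account
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # sshd rejects group/world-writable key files

# Should now connect without a password prompt
ssh localhost true
```

On a multi-node cluster, the public key would instead be copied to each worker (for example with ssh-copy-id) rather than appended locally.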

4. Test-Running Hadoop Programs

  • Simple MapReduce Test
  • Spark Test
  • Pig Test
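The simple MapReduce test typically uses the example jobs that ship with Hadoop. A sketch of a smoke test against a running cluster, assuming HADOOP_HOME is set (the version number embedded in the jar name will vary by release):

```shell
# Estimate pi with 2 map tasks and 5 samples each -- a quick end-to-end check
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 5

# Stage input in HDFS and run the bundled grep example against it
hdfs dfs -mkdir -p /user/$USER/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /user/$USER/input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  grep /user/$USER/input /user/$USER/output 'dfs[a-z.]+'

# Inspect the results
hdfs dfs -cat /user/$USER/output/part-r-00000
```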

5. Cloud Installations

  • Amazon EC2
  • Amazon Elastic MapReduce
  • Rackspace
  • Installing with Docker
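A Docker-based install lets students stand up a disposable Hadoop environment without touching the host OS. A sketch, assuming the Apache-published apache/hadoop image (the image name and tag are assumptions; a Cloudera or Hortonworks variation would substitute its own image):

```shell
# Pull the image and confirm the Hadoop version inside a throwaway container
docker pull apache/hadoop:3
docker run -it --rm apache/hadoop:3 bash -c "hadoop version"
```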

6. Optimization and Tuning

  • Performance Metrics
  • Node Sizing
  • Kernel Tuning
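The kernel tuning topic generally covers a handful of Linux settings that affect Hadoop worker nodes. A sketch of common starting points (the values are typical vendor recommendations, not prescriptions; benchmark before and after, and persist sysctl changes in /etc/sysctl.conf):

```shell
# Discourage swapping of JVM heaps -- swapped DataNode/NodeManager heaps cause
# timeouts and task failures
sudo sysctl -w vm.swappiness=1

# Transparent huge pages are widely reported to hurt Hadoop workloads
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

# Hadoop daemons hold many files and sockets; check the open-file limit
ulimit -n
```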

7. Installing HBase

  • HBase Installation
  • ZooKeeper
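After installing HBase, the labs verify the daemons and the ZooKeeper quorum from the HBase shell. A sketch, assuming a standalone install with HBase's bin directory on the PATH (the -n flag runs the shell non-interactively on recent releases):

```shell
# Start HBase (standalone mode also starts a managed ZooKeeper instance)
start-hbase.sh

# Report cluster status, including the ZooKeeper quorum
echo "status" | hbase shell -n

# Create a table, write a cell, and read it back as a basic sanity check
echo "create 't1', 'cf'
put 't1', 'r1', 'cf:a', 'v1'
scan 't1'" | hbase shell -n
```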

8. Previewing Hadoop 3