Apache Storm Introduction – API and Topology

Target Audience
Individuals interested in installing and configuring Apache Storm, as well as designing and creating basic Storm topologies


Expected Duration
185 minutes

Storm makes it easy to process massive, unbounded streams of data while providing fault tolerance. Combined with other integrations, this system can take any business to the next level. In this course, you will be introduced to Trident, its relationship to Storm, and how the two integrate. Many other integrations are explored, including Hadoop, Kafka, JMX, Ganglia, and automation using Puppet, as well as monitoring and analytics tools. In this course, you will also learn how to deploy the Storm architecture.


Further Exploring Trident

  • start the course
  • use Trident for a simple topology
  • describe topology state management with Trident
  • describe the different types of Trident spouts available for implementing fault-tolerant Trident state management
  • describe the different Trident State APIs available for implementing fault-tolerant Trident state management
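The core idea behind Trident's transactional state management can be shown without a cluster: each batch carries a transaction ID, and the state store only applies an update whose txid is newer than the one it last recorded, so a replayed batch is applied at most once. A minimal stdlib-Java sketch of that idea (the class and method names here are illustrative, not part of the Trident API):

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of Trident-style transactional state: each value is
// stored together with the txid that produced it, so a replayed batch
// (same txid) is skipped and updates stay exactly-once.
public class TxState {
    private final Map<String, Long> lastTxid = new HashMap<>();
    private final Map<String, Integer> counts = new HashMap<>();

    // Apply a count increment for `key` from batch `txid`.
    // Returns true if applied, false if skipped as a replay.
    public boolean applyBatch(long txid, String key, int delta) {
        Long seen = lastTxid.get(key);
        if (seen != null && txid <= seen) {
            return false;               // replayed batch: ignore
        }
        counts.merge(key, delta, Integer::sum);
        lastTxid.put(key, txid);
        return true;
    }

    public int get(String key) {
        return counts.getOrDefault(key, 0);
    }

    public static void main(String[] args) {
        TxState state = new TxState();
        state.applyBatch(1, "clicks", 10);
        state.applyBatch(1, "clicks", 10);  // replay of batch 1: no effect
        state.applyBatch(2, "clicks", 5);
        System.out.println(state.get("clicks"));  // prints 15
    }
}
```

Trident's "transactional" spout/state pairing follows this pattern; its "opaque" variant additionally keeps the previous value so it can tolerate batches whose contents change on replay.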

Integrating Trident with Storm

  • describe the distributed RPC model and how it is used with Apache Storm
  • describe DRPC modes of operation and topology types
  • deploy a Trident topology to a Storm cluster
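The DRPC pattern itself is simple: a client invokes a function by name with a string argument and blocks until a string result comes back; in Storm, the computation behind that function is a topology distributed across the cluster. A toy, single-process sketch of the request/response shape (the names below are illustrative, not the Storm DRPC API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Toy sketch of the DRPC pattern: named functions take a string argument
// and return a string result. In real Storm, the handler would be a
// topology running on a cluster, and the DRPC server would pair each
// request with an ID and route the result back to the blocked client.
public class MiniDrpc {
    private final Map<String, UnaryOperator<String>> functions = new HashMap<>();

    public void register(String name, UnaryOperator<String> handler) {
        functions.put(name, handler);
    }

    // Mirrors the shape of a DRPC client's execute(functionName, args) call.
    public String execute(String name, String args) {
        UnaryOperator<String> fn = functions.get(name);
        if (fn == null) {
            throw new IllegalArgumentException("unknown function: " + name);
        }
        return fn.apply(args);
    }

    public static void main(String[] args) {
        MiniDrpc drpc = new MiniDrpc();
        // Placeholder computation standing in for a distributed topology.
        drpc.register("exclaim", s -> s + "!");
        System.out.println(drpc.execute("exclaim", "hello"));  // prints hello!
    }
}
```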

Monitoring Processes of a Storm Cluster

  • describe the Storm UI home page
  • launch a Storm topology to a local cluster and view cluster activity in the Storm UI
  • analyze a Storm topology using the Storm UI
  • describe the process of using the Nimbus Thrift client for obtaining Storm cluster metrics
  • set up a Maven project in Eclipse IDE that can be used to write Java client code for connecting to a Nimbus Thrift server
  • write Java client code that connects to a Nimbus Thrift server and retrieves Storm cluster statistics
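For the Maven project, the main requirement is having the Storm artifact (which bundles the generated Nimbus Thrift client classes) on the compile classpath. A minimal pom.xml sketch; the version is a placeholder to be matched to the cluster's Storm release, and in newer releases the equivalent artifact is storm-client:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>storm-metrics-client</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.apache.storm</groupId>
      <artifactId>storm-core</artifactId>
      <!-- placeholder: match the Storm version running on the cluster -->
      <version>${storm.version}</version>
      <!-- provided on the cluster; needed at compile time only -->
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>
```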

Integrating Kafka with Storm

  • describe the general architecture of Apache Kafka
  • describe Kafka components and data model
  • produce and consume a Kafka topic
  • consume Kafka messages in a Storm topology
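Kafka's data model (topics split into partitions, each an append-only log in which records are addressed by offset and read forward by consumers that track their own position) can be sketched in plain Java without the Kafka client library; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of Kafka's core data model: a partition is an
// append-only log. A producer's append returns the record's offset,
// and a consumer reads forward from an offset it tracks itself.
public class MiniPartition {
    private final List<String> log = new ArrayList<>();

    // Producer side: append a record, return its offset in the log.
    public long append(String record) {
        log.add(record);
        return log.size() - 1;
    }

    // Consumer side: read all records from `offset` onward.
    public List<String> readFrom(long offset) {
        return new ArrayList<>(log.subList((int) offset, log.size()));
    }

    public static void main(String[] args) {
        MiniPartition p = new MiniPartition();
        p.append("event-0");
        long off = p.append("event-1");
        System.out.println(off);             // prints 1
        System.out.println(p.readFrom(1));   // prints [event-1]
    }
}
```

This offset-tracking model is what lets a Storm Kafka spout replay messages from its last committed offset after a failure.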

Defining Apache Storm Integration Options

  • describe some options for using Storm’s Core APIs to implement micro-batching in a Storm Core topology
  • describe Apache Hadoop’s use with Storm
  • download and install Apache Hadoop on a development machine
  • describe how Apache Storm applications can be run on Hadoop YARN clusters to leverage YARN resource management
  • describe the Puppet architecture and some key framework components
  • describe how JMX and Ganglia can be integrated and used to monitor Storm clusters
  • describe how HBase and Redis can be integrated and used as datastores with Apache Storm
  • integrate and use JMX in Storm to obtain Storm Nimbus and Supervisor metrics
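One common way to micro-batch with Storm's core APIs is for a bolt to buffer incoming tuples and flush them as a group when either a batch-size threshold is reached or a tick tuple arrives. The buffering logic itself is plain Java and can be sketched independently of the Storm classes (the names below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Size-or-time micro-batching buffer, the core of a micro-batching bolt:
// items accumulate until maxSize is reached (or, in a real bolt, until a
// tick tuple arrives), then the whole batch is flushed at once, e.g. as
// one bulk write to a datastore.
public class MicroBatcher {
    private final int maxSize;
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> flushed = new ArrayList<>();

    public MicroBatcher(int maxSize) {
        this.maxSize = maxSize;
    }

    // Called once per tuple in a bolt's execute(); flushes on a full buffer.
    public void add(String item) {
        buffer.add(item);
        if (buffer.size() >= maxSize) {
            flush();
        }
    }

    // Called on a tick tuple to bound the latency of a partial batch.
    public void flush() {
        if (buffer.isEmpty()) return;
        flushed.add(new ArrayList<>(buffer));
        buffer.clear();
    }

    public List<List<String>> batches() {
        return flushed;
    }

    public static void main(String[] args) {
        MicroBatcher b = new MicroBatcher(3);
        for (int i = 0; i < 7; i++) b.add("t" + i);
        b.flush();  // simulate a tick tuple: flush the 1-item remainder
        System.out.println(b.batches().size());  // prints 3
    }
}
```

In a real topology, the tick-driven flush would come from enabling Storm's tick tuples via the topology.tick.tuple.freq.secs setting; here it is simulated with a direct call.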

Practice: Integrating Apache Storm

  • demonstrate increased knowledge of configuring and installing Apache Storm



