Building Machine Learning Algorithms on Apache Spark: Scaling Out and Up

Published

June 5, 2018

Presented at Spark Summit (San Francisco, CA)

There are lots of reasons why you might want to implement your own machine learning algorithms on Spark: you might want to experiment with a new idea, try to reproduce results from a recent research paper, or simply use an existing technique that isn’t implemented in MLlib.

In this talk, we’ll walk through the process of developing a new machine learning algorithm for Spark. We’ll start with the basics by considering how we’d design a scale-out parallel implementation of our unsupervised learning technique. The bulk of the talk will focus on the details you need to know to turn an algorithm design into an efficient parallel implementation on Spark.
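For a flavor of what that first scale-out design can look like, here is a minimal RDD-based sketch of a single iteration of an unsupervised algorithm. The abstract doesn’t pin down a specific technique, so k-means is used here purely as a stand-in, and the names are illustrative rather than taken from the talk:

```scala
import org.apache.spark.rdd.RDD

object KMeansSketch {
  // Squared Euclidean distance between two dense points.
  def distSq(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  // One scale-out iteration: assign each point to its nearest center,
  // then recompute each center as the mean of its assigned points.
  def step(points: RDD[Array[Double]],
           centers: Array[Array[Double]]): Array[Array[Double]] = {
    points
      .map { p =>
        val nearest = centers.indices.minBy(i => distSq(p, centers(i)))
        (nearest, (p, 1L))
      }
      // Sum the points and counts per cluster in a single distributed reduce.
      .reduceByKey { case ((s1, c1), (s2, c2)) =>
        (s1.zip(s2).map { case (x, y) => x + y }, c1 + c2)
      }
      .mapValues { case (sum, count) => sum.map(_ / count) }
      .collect()
      .sortBy(_._1)
      .map(_._2)
  }
}
```

The point of the sketch is the shape of the computation: each iteration is a map over the data followed by a per-cluster reduce, so only the small set of centers ever comes back to the driver.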

We’ll review a simple RDD-based implementation, show some improvements, point out some pitfalls to avoid, and iteratively extend our implementation to support contemporary Spark features like ML Pipelines and structured query processing. We’ll conclude by briefly examining some useful techniques to complement scale-out performance by scaling our code up, taking advantage of specialized hardware to accelerate single-worker performance.
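To give a sense of what the ML Pipelines side of that involves, here is a hedged sketch of a custom Transformer, the kind of component a pipeline-friendly algorithm ultimately exposes. The class name, column names, and scoring logic are hypothetical placeholders, not the talk’s actual code:

```scala
import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

// A hypothetical Transformer that appends a "score" column to a DataFrame.
class ScoreTransformer(override val uid: String) extends Transformer {
  def this() = this(Identifiable.randomUID("scoreTransformer"))

  override def transform(dataset: Dataset[_]): DataFrame = {
    // Placeholder scoring UDF; a real fitted model would use learned parameters here.
    val score = udf { (value: Double) => value * 2.0 }
    dataset.withColumn("score", score(dataset("feature")))
  }

  // Describe the output schema so the pipeline can be validated before running.
  override def transformSchema(schema: StructType): StructType =
    StructType(schema.fields :+ StructField("score", DoubleType, nullable = false))

  override def copy(extra: ParamMap): ScoreTransformer = defaultCopy(extra)
}
```

In a full implementation the fitted model would typically be produced by a companion Estimator’s fit() method; the sketch above only illustrates the Transformer interface that lets the result slot into an ML Pipeline.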

You’ll leave this talk with everything you need to build a new machine learning technique that runs on Spark.

Talk video