I’m delighted to have a chance to present at HTCondor Week this year and am looking forward to seeing some old friends and collaborators. The thesis of my talk is that HTCondor users who aren’t already leading data science initiatives are well-equipped to start doing so. The talk is brief and high-level, so here are a few quick links to learn more if you’re interested:
- Contemporary data processing frameworks like Apache Spark and Apache Flink offer superior programmability, flexibility, and performance. Both projects have really excellent documentation and vibrant user communities.
- I’ve written regularly about Spark in particular but the best place to start here is probably my ApacheCon EU ‘14 talk on Spark performance, which both introduces Spark and shows how to use its fundamental abstractions idiomatically and efficiently.
I also gave a quick overview of some of my team’s recent data science projects; visit these links to learn more:
- Diagnosing open-source community health with Spark by William Benton,
- Insights into Customer Behavior from Clickstream Data by RJ Nowling (also see the video),
- Using a Relative Index of Performance (RIP) to Determine Optimum Configuration Settings Compared to Random Forest Assessment Using Spark by Diane Feddema,
- Random Forest Clustering with Apache Spark by Erik Erlandson (see also Erik’s blog post), and
- Analyzing endurance-sports activity data with Spark by William Benton.
My team and I are pleased to announce the latest release of our Silex library, featuring cool new functionality from all of the core contributors. Silex is a library of reusable components for Apache Spark factored out of our data science work in Red Hat’s Emerging Technology group. You can:
- Include Silex in your projects,
- fork Silex on GitHub,
- read the API docs, or
- see what’s new in the project.
Enjoy, and let us know how you’re finding it useful!
As I mentioned earlier, I’ll be talking about feature engineering and outlier detection for infrastructure log data at Apache: Big Data next week. Consider this post a virtual handout for that talk. (I’ll also be presenting another talk on scalable log data analysis later this summer. That talk is also inspired by my recent work with logs but will focus on different parts of the problem, so stay tuned if you’re interested in the domain!)
Some general links:
- You can download a PDF of my slide deck. I recognize that people often want to download slides, although I’d prefer you look at the rest of this post instead since my slides are not intended to stand alone without my presentation.
- Check out my team’s Silex library, which is intended to extend the standard Spark library with high-quality, reusable components for real-world data science. The most recent release includes the self-organizing map implementation I mentioned in my talk.
- Watch this short video presentation showing some of the feature engineering and dimensionality-reduction techniques I discussed in the talk.
The following blog posts provide a deeper dive into some of the topics I covered in the talk:
- When I started using Spark and ElasticSearch, the upstream documentation was pretty sparse (it was especially confusing because it required some unidiomatic configuration steps). So I wrote up my experiences getting things working. This is an older post but may still be helpful.
- If you’re interested in applying natural-language techniques to log data, you should consider your preprocessing pipeline. Here are the choices I made when I was evaluating word2vec on log messages.
- Here’s a brief (and not-overly technical) overview of self-organizing maps, including static visual explanations and an animated demo.
If you’ll be at Apache: Big Data next week, you should definitely check out some talks from my teammates in Red Hat’s Emerging Technology group and our colleague Suneel Marthi from the CTO office:
- Random Forest Clustering with Apache Spark by Erik Erlandson,
- Using a Relative Index of Performance (RIP) to Determine Optimum Configuration Settings Compared to Random Forest Assessment Using Spark by Diane Feddema,
- Distributed Machine Learning with Apache Mahout by Suneel Marthi, and
- Data Science for the Datacenter: Analyzing Logs with Apache Spark by William Benton.
Unfortunately, my talk is at the same time as Suneel’s, so I won’t be able to attend his, but these are all great talks and you should be sure to put as many as possible on your schedule if you’ll be in Vancouver!
Self-organizing maps are a useful technique for identifying structure in high-dimensional data sets. The map itself is a low-dimensional arrangement of cells, where each cell is an object comparable to the objects in the training set. The goal of self-organizing map training is to arrange a grid of cells so that nearby cells will be the best matches for similar objects. Once we’ve built up the map, we can identify clusters of similar objects (based on the cells that they map to) and even detect outliers (based on the distributions of map quality).
Here are a few snapshots of the training process on color data, which I developed as a test for a parallel implementation of self-organizing maps in Apache Spark. For this demo, I used angular similarity in the RGB color space (not Euclidean distance) as a measure of color similarity. This means that, for example, a darker color would be considered similar to a lighter color with a similar hue.
We start with a random map:
Matches made in the first training iteration essentially affect the whole map, producing a blurred, unsaturated, undifferentiated map:
Some structure begins to emerge pretty rapidly, though; after one quarter of our training iterations, we can already see clear clusters of colors:
The map begins to get more and more saturated as similar colors are grouped together. Here’s what it looks like after half of the training iterations:
…and three-quarters of the training iterations:
As training proceeds, it gradually affects smaller and smaller neighborhoods of the map until the very end, when each training match only affects a single cell (and thus the impact of darker colors becomes apparent, since they can cluster together in single cells that are not the best matching unit for any brighter colors):
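The process just described can be condensed into a toy sketch. This is my own illustrative version, not the implementation behind the demo: it uses plain Euclidean distance on a square grid (the demo used angular similarity), and all names are mine:

```scala
import scala.util.Random

type Vec = Array[Double]

def dist(a: Vec, b: Vec): Double =
  math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

// Train an n x n map of 3-dimensional cells (flattened, row-major) on
// the given samples.  Each iteration finds the best-matching unit for
// one sample and pulls every cell within a shrinking neighborhood
// toward that sample, with influence falling off with grid distance.
def trainSOM(n: Int, samples: Vector[Vec], iters: Int, seed: Long = 42L): Array[Vec] = {
  val rng   = new Random(seed)
  val cells = Array.fill(n * n)(Array.fill(3)(rng.nextDouble()))
  for (t <- 0 until iters) {
    val sample = samples(rng.nextInt(samples.length))
    val bmu    = cells.indices.minBy(i => dist(cells(i), sample))
    val frac   = 1.0 - t.toDouble / iters      // decays from 1 toward 0
    val radius = math.max(1.0, (n / 2.0) * frac)
    val rate   = 0.01 + 0.5 * frac
    for (i <- cells.indices) {
      val gridDist = math.hypot(i / n - bmu / n, i % n - bmu % n)
      if (gridDist <= radius) {
        val influence = math.exp(-(gridDist * gridDist) / (2 * radius * radius))
        for (d <- 0 until 3)
          cells(i)(d) += rate * influence * (sample(d) - cells(i)(d))
      }
    }
  }
  cells
}
```

Note how the radius and learning rate decay together: early iterations smear each match across most of the map, while late iterations refine individual cells, exactly the progression visible in the snapshots above.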
In a future post, I’ll cover the training algorithm, introduce the code, and provide some tips for implementing similar techniques in Spark. For now, though, here is a demo video that shows an animation of the whole map training process:
Lately, I’ve been experimenting with Spark’s implementation of word2vec. Since most of the natural-language data I have sitting around these days are service and system logs from machines at work, I thought it would be fun to see how well word2vec worked if we trained it on the text of log messages. This is obviously pretty far from an ideal training corpus, but these brief, rich messages seem like they should have some minable content. In the rest of this post, I’ll show some interesting results from the model and also describe some concrete preprocessing steps to get more useful results for extracting words from the odd dialect of natural language that appears in log messages.
word2vec is a family of techniques for encoding words as relatively low-dimensional vectors that capture interesting semantic information. That is, words that are synonyms are likely to have vectors that are similar (by cosine similarity). Another really neat aspect of this encoding is that linear transformations of these vectors can expose semantic information like analogies: for example, given a model trained on news articles, adding the vectors for “Madrid” and “France” and subtracting the vector for “Spain” results in a vector very close to that for “Paris.”
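To make that linear structure concrete, here is a toy illustration with made-up two-dimensional vectors (a real model learns vectors with hundreds of dimensions from a corpus); the point is only that vector arithmetic plus cosine similarity can recover an analogy:

```scala
// Cosine similarity between two word vectors.
def cosine(a: Array[Double], b: Array[Double]): Double = {
  val dot = a.zip(b).map { case (x, y) => x * y }.sum
  def norm(v: Array[Double]) = math.sqrt(v.map(x => x * x).sum)
  dot / (norm(a) * norm(b))
}

// Made-up 2-d vectors, chosen so that capitals differ from their
// countries by a constant offset; a trained model only approximates this.
val vecs = Map(
  "spain"   -> Array(1.0, 0.0), "madrid" -> Array(1.0, 1.0),
  "france"  -> Array(2.0, 0.0), "paris"  -> Array(2.0, 1.0),
  "germany" -> Array(3.0, 0.0), "berlin" -> Array(3.0, 1.0)
)

// madrid - spain + france lands closest (by cosine) to paris.
val query = Array(0, 1).map { d =>
  vecs("madrid")(d) - vecs("spain")(d) + vecs("france")(d)
}
val answer = (vecs -- Seq("madrid", "spain", "france"))
  .maxBy { case (_, v) => cosine(query, v) }._1
```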
Spark’s implementation of word2vec uses skip-grams, so the training objective is to produce a model that, given a word, predicts the context in which it is likely to appear.
Like the original implementation of word2vec, Spark’s implementation uses a window of ±5 surrounding words (this is not user-configurable) and defaults to discarding all words that appear fewer than 5 times (this threshold is user-configurable). Both of these assumptions seem sane for the sort of training “sentences” that appear in log messages, but by themselves they won’t be sufficient.
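To make the skip-gram setup concrete, here is a sketch (the function name is mine; Spark generates these pairs internally rather than exposing them) of the (word, context) pairs such a model is trained to predict:

```scala
// Enumerate skip-gram (word, context-word) pairs: each word is paired
// with every other word inside a +/- `window` token neighborhood.
def skipGrams(tokens: Seq[String], window: Int = 5): Seq[(String, String)] =
  for {
    i <- tokens.indices
    j <- math.max(0, i - window) to math.min(tokens.size - 1, i + window)
    if j != i
  } yield (tokens(i), tokens(j))
```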
Spark doesn’t provide a lot of tools for tokenizing and preprocessing natural-language text.1 Simple string splitting is as ubiquitous in trivial language-processing examples as it is in trivial word-count examples, but it’s not going to give us the best results. Fortunately, there are some minimal steps we can take to start getting useful tokens out of log messages. We’ll look at these steps now and see what motivates them.
What is a word?
Let’s first consider what kinds of tokens might be interesting for analyzing the content of log messages. At the very least, we might care about:
- dictionary words,
- trademarks (which may or may not be dictionary words),
- technical jargon terms (which may or may not be dictionary words),
- service names (which may or may not be dictionary words),
- symbolic constant names,
- pathnames, and
- programming-language identifiers.
For this application, we’re less interested in the following kinds of tokens, although it is possible to imagine other applications in which they might be important:
- IPv4 and IPv6 addresses,
- MAC addresses,
- dates and times, and
- hex hash digests.
If we’re going to convert sequences of lines to sequences of sequences of tokens, we’ll eventually be splitting strings. Before we split, we’ll collapse all runs of whitespace into single spaces so that we get more useful results when we do split. This isn’t strictly necessary — we could elect to split on runs of whitespace instead of single whitespace characters, or we could filter out empty strings from word sequences before training on them. But this makes for cleaner input and it makes the subsequent transformations a little simpler.
Here’s Scala code to collapse runs of whitespace into a single space:
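A minimal version might look like this (the function name is my own choice):

```scala
// Collapse every run of whitespace characters into a single space and
// trim the ends, so a subsequent split on " " yields no empty tokens.
def collapseWhitespace(line: String): String =
  line.replaceAll("""\s+""", " ").trim
```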
The next thing we’ll want to do is eliminate all punctuation from the ends of each word. An appropriate definition of “punctuation” will depend on the sorts of tokens we wind up deciding are interesting, but I considered punctuation characters to be anything except:
- alphanumeric characters, and
- dashes.
Whether or not we want to retain intratoken punctuation depends on the application; there are good arguments to be made for retaining colons and periods (MAC addresses, programming-language identifiers in stack traces, hostnames, etc.), slashes (paths), at-signs (email addresses), and other marks as well. I’ll be retaining these marks but stripping all others. After these transformations, we can split on whitespace and get a relatively sensible set of tokens.
Here’s Scala code to strip punctuation from lines:
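A sketch along those lines (the function name is mine; it treats anything other than alphanumerics and dashes as strippable at token ends):

```scala
// Strip punctuation from the ends of a token.  Here "punctuation" is
// anything other than alphanumeric characters and dashes; interior
// marks like periods, colons, slashes, and at-signs are left alone
// (paths, hostnames, MAC addresses, email addresses).
def stripEdgePunctuation(token: String): String =
  token
    .replaceAll("""^[^A-Za-z0-9-]+""", "")
    .replaceAll("""[^A-Za-z0-9-]+$""", "")
```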
In order to filter out strings of numbers, we’ll reject all tokens that don’t contain at least one letter. (We could be stricter and reject all tokens that don’t contain at least one letter that isn’t a hex digit, but I decided to be permissive in order to avoid rejecting interesting words whose letters all happen to be hex digits.)
Here’s what our preprocessing pipeline looks like, assuming an RDD of log messages:
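The per-message transformation might look like the following sketch (all names here are mine); the very same function can be mapped over an RDD[String] of log messages:

```scala
// Tokenize one log message: normalize whitespace, split, strip
// punctuation from token ends, and drop tokens with no letters
// (numbers, timestamps, pure digit strings, and the like).
def tokenize(message: String): Seq[String] =
  message
    .replaceAll("""\s+""", " ").trim
    .split(" ")
    .map(t => t.replaceAll("""^[^A-Za-z0-9-]+""", "")
               .replaceAll("""[^A-Za-z0-9-]+$""", ""))
    .filter(_.exists(_.isLetter))
    .toSeq
```

On an RDD this would be `messages.map(tokenize)`, yielding one token sequence per message.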
Now we have a sequence of words for each log message and are ready to train a word2vec model.
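With Spark’s MLlib, that training step might look like the following sketch, assuming the preprocessed messages are in an RDD[Seq[String]] that I’m calling tokenized (a name of my own choosing):

```scala
import org.apache.spark.mllib.feature.Word2Vec

// Train a word2vec model on the preprocessed messages; `tokenized` is
// assumed to be an RDD[Seq[String]], one token sequence per message.
val model = new Word2Vec()
  .setMinCount(5)        // the default: discard very rare tokens
  .fit(tokenized)

// Query the model for terms that appear in similar contexts.
model.findSynonyms("nova", 10).foreach { case (word, score) =>
  println(s"$word\t$score")
}
```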
Note that there are a few things we could be doing in our preprocessing pipeline but aren’t, like using a whitelist (for dictionary words or service names), or rejecting stopwords. This approach is pretty basic, but it produces some interesting results in any case.
Results and conclusions
I evaluated the model by using it to find synonyms for (more or less) arbitrary words that appeared in log messages. Recall that word2vec basically models words by the contexts in which they might appear; informally, synonyms are thus words with similar contexts.
- The top synonyms for nova (the OpenStack compute service) included images — all of these are related to running OpenStack compute jobs.
- The top synonyms for … included cinder.scheduler.host_manager and several UUIDs for actual volumes.
- The top synonyms for … included IPMI and the name of an internal project.
These results aren’t earth-shattering — indeed, they won’t even tell you where to get a decent burrito — but they’re interesting, they’re sensible, and they point to the effectiveness of word2vec even given a limited, unidiomatic corpus full of odd word-like tokens. Of course, one can imagine ways to make our preprocessing more robust. Similarly, there are certainly other ways to generate a training corpus for these words: perhaps using the set of all messages for a particular service and severity as a training sentence, using the documentation for the services involved, using format strings present in the source or binaries for the services themselves, or some combination of these.
Semantic modeling of terms in log messages is obviously useful for log analytics: it can be used as part of a pipeline to classify related log messages by topic, in feature engineering for anomaly detection, and to suggest alternate search terms for interactive queries. However, it is a pleasant surprise that we can train a competent word2vec model from the uncharacteristic utterances that make up log messages themselves.
Consider the following hypothetical conference session abstract:
Much like major oral surgery, writing talk abstracts is universally acknowledged as difficult and painful. This has never been more true than it is now, in our age of ubiquitous containerization technology. Today’s aggressively overprovisioned, multi-track conferences provide high-throughput information transfer in minimal venue space, but do so at a cost: namely, they impose stingy abstract word limits. The increasing prevalence of novel “lightning talk” tracks presents new challenges for aspiring presenters. Indeed, the time it takes to read a lightning talk abstract may be a substantial fraction of the time it takes to deliver the talk! The confluence of these factors, inter alia, presents an increasingly hostile environment for conference talk submissions in late 2015. Your talk proposals must adapt to this changing landscape or face rejection. Is there a solution?
Hopefully, you recognize some key elements of subpar abstracts that you’ve seen, reviewed, or — maybe, alas — even submitted in this example.
To identify what’s fundamentally wrong with it, we should first consider what the primary rhetorical aims for an abstract are. In particular, an abstract needs to
- provide context so that a general audience can understand that the problem the talk addresses is interesting,
- summarize the content of a talk so that audiences and reviewers know what to expect, and
- motivate conference attendees to put the talk on their schedule (and, more immediately, motivate the program committee to accept the talk).
The abstract above does none of these things, for both stylistic and structural reasons.
The example abstract’s prose is generally clunky, but the main stylistic problem is its overuse of jargon and enthymemes. If you don’t already spend time in the same neighborhoods of the practice as the author, you probably don’t understand all of these terms to mean the same things that the author does or agree with his or her sense of what is “universally acknowledged.” It is easy to fall into using jargon when you’re deep in a particular problem domain: after all, most of the people you interact with use these words and you all seem to understand each other. However, jargon terms are essentially content-free: they convey nothing new to specialists and are completely opaque to novices. By propping up your writing on these empty terms instead of explaining yourself, you are excluding the cohort of your audience who doesn’t already understand your problem and shamelessly pandering to the cohort that does.1
The main structural problem with the example abstract is that it doesn’t actually make an argument for why the talk is interesting or worth attending; instead, it focuses on emphasizing the problems faced by abstract writers and ends with a cliffhanger. (The cliffhanger strategy not only adds no additional content, it is also especially risky.) A surprising number of abstracts, even accepted ones, suffer because they focus on only one or two of an abstract’s responsibilities, but it is possible to set your abstract up for success by starting from a structure that is designed to cover all of the abstract’s responsibilities.
In 1993, Kent Beck appeared on a panel on how to get a paper accepted at OOPSLA. OOPSLA (now called SPLASH) was an academic conference on research and development related to object-oriented programming languages, systems and environments to support object-oriented programming, and applications developed using these technologies. This is a particularly broad mandate, and because OOPSLA attracted so many papers on a wide range of topics, it had an extremely low acceptance rate. (This is probably why they held a panel on getting papers accepted, but it also makes OOPSLA a good analogy for contemporary practice-focused technical conferences that often cross several areas of specialization, e.g., data processing, distributed computing, and machine learning.)
Beck’s advice is worth reading even if you aren’t writing an academic conference paper. In particular, he suggests that you start by identifying a single “startling sentence” that summarizes your work and can grab the attention of the program committee. From there, Beck advises that you adopt the following four-sentence model to structure your abstract:
- The first sentence is the problem you’re trying to solve,
- The second sentence provides context for the problem or explains its impact,
- The third sentence is the “startling sentence” that is the key insight or contribution of your work, and
- The fourth sentence shows how the key contribution of your work affects the problem.
I’ve used this template in almost every abstract I’ve written for many years, although I sometimes devote more than a single sentence to each step. It has successfully helped me refine abstracts for both industry conference talks and academic papers, and it more or less ensures that each abstract accomplishes what it needs to. (If you’re writing a talk abstract, as opposed to a paper abstract, it’s sometimes also a good idea to add a sentence or two covering what the audience should expect to take away from your talk and why you’re qualified to give it.) If I am sure to consider my audience — first, an overworked program committee member, and second, a jetlagged and overstimulated conference attendee — I am far more likely to explain things clearly and eschew jargon. As a bonus, starting from a fairly rigid structure frees me from wasting time worrying about how best to arrange my prose.
If we avoid jargon and start from Beck’s structure, we can transform the mediocre example abstract from the beginning of this post into something far more effective:
Contemporary multiple-track industry conferences attract speakers and attendees who specialize in distinct but related parts of the practice. Since many authors adopt ineffective patterns from other technical abstracts they’ve read, they may unwittingly submit talk proposals that are at best rhetorically impotent and at worst nonsensical to people who don’t share their specialization. By starting from a simple template, prospective speakers can dramatically improve their chances of being understood, accepted, and attended, while also streamlining the abstract-writing process. Excellent abstracts benefit the entire community, because more people will be motivated to learn about interesting work that is outside of their immediate area of expertise. In this talk, delivered by someone who has delivered many talks without any serious train wrecks and has also helped other people get talks accepted, you’ll learn a straightforward technique for designing abstracts that communicate effectively to a general audience, sell your talk to the program committee, and motivate your peers to attend your talk.
Delivering a technical talk has a lot in common with running a half-marathon or biking a 40k time trial. You’re excited and maybe a little nervous, you’re prepared to go relatively hard for a relatively long time, and you’re acutely aware of the clock. In both situations, you might be tempted to take off right from the gun, diving into your hardest effort (or most technical material), but this is a bad strategy.
By going out too hard in the half-marathon, you’ll be running on adrenaline instead of on your aerobic metabolism, will burn matches by working hard before warming up fully, and ultimately won’t be able to maintain your best possible pace because you’ll be spent by the second half of the race. Similarly, in the talk, your impulse might be to get right to the most elegant and intricate parts of your work immediately after introducing yourself, but if you get there without warming up the audience first, you’ll lose most of them along the way. In both cases, your perception of what you’re doing is warped by energy and nerves; the right pace will feel sluggish and awkward; and starting too fast will put you in a hole that will be nearly impossible to recover from.
Delivering a technical talk successfully has a lot in common with choosing an appropriate pacing strategy for an endurance event: by starting out slower than you think you need to, you’ll be able to go faster at the end. Most runners1 will be able to maintain a higher average pace by doing negative splits. In a race, this means you start out slower than your desired average pace and gradually ramp up over the course of the race so that by the end, you’re going faster than your desired average pace. By starting out easy, your cardiovascular system will warm up, your connective tissue will get used to the stress of pounding on the pavement, and your muscles will start buffering lactic acid; this will reduce muscle fatigue and save your anaerobic energy for the final sprint.
You can apply the general strategy of negative splits to a talk as well. Instead of warming up cold muscles and your aerobic energy systems before making them work, you’re preparing a group of smart people to learn why they should care about your topic before making them think about it too much. Start off slow: provide background, context, and examples. Unless you’re a very experienced speaker, this will feel agonizingly slow at first.
It’s understandable that it might feel remedial and boring to you to explain why your work is relevant. After all, you’re deep in your topic and have probably long since forgotten what it was like to learn about it for the first time. Examples and visual explanations might seem like a waste of time before you get to your clever implementation, elegant proof, or sophisticated model. You have some serious detail to cover, after all! Your audience, however, isn’t prepared for that detail yet. If you skip the warm-up and go straight to that detail, you’ll lose audience engagement, and it’s nearly impossible to recover from that; it’ll certainly prevent you from covering as much as you might have otherwise wanted to.
Remember that your audience is made up of smart people who chose to attend your talk instead of sitting out in the hall. They’d probably rather be learning something from you than halfheartedly reading email. But they also almost certainly don’t know as much about your topic as you do. Ease them in to it, warm them up, and give them plenty of context first. You’ll be able to cover more ground that way.
Pacing in cycling time trials can be a little more complicated depending on the terrain and wind but in general being able to finish stronger than you started is still desirable.↩
I was in Berlin last week for Flink Forward, the inaugural Apache Flink conference. I’m still learning about Flink, and Flink Forward was a great place to learn more. In this post, I’ll share some of what I consider its coolest features and highlight some of the talks I especially enjoyed. Videos of the talks should all be online soon, so you’ll be able to check them out as well.
Apache Flink is a data processing framework for the JVM that is most popular for streaming workloads with high throughput and low latency, although it is also general enough to support batch processing. Flink has a pleasant collection-style API, offers stateful elementwise transformations on streams (think of a fold function), can be configured to support fault-tolerance with exactly-once delivery, and does all of this while achieving extremely high performance. Flink is especially attractive for use in contemporary multitenant environments because it manages its own memory and thus Flink jobs can run well in containers on overprovisioned systems (where CPU cycles may be relatively abundant but memory may be strictly constrained).
Keynotes and lightning talks
Kostas Tzoumas and Stephan Ewen (both of data Artisans) shared a keynote in which they presented the advancements in Flink 0.10 (to be released soon) and shared the roadmap for the next release, which will be Flink 1.0. The most interesting parts of this keynote for me were the philosophical arguments for the generality and importance of stream processing in contemporary event-driven data applications. Many users of batch-processing systems simulate streaming workflows by explicitly encoding windows in the structure of their input data (e.g., by using one physical file or directory to correspond to a day, month, or year worth of records) or by using various workarounds inspired by technical limitations (e.g., the “lambda architecture” or bespoke but narrowly-applicable stream processors). However, mature stream processing frameworks not only enable a wide range of applications that process live events, but they also are general enough to handle batch workloads as a special case (i.e., by processing a stream with only one window).1
Of course, the workarounds that data engineers have had to adopt to handle streaming data in batch systems are only necessary given an absence of mature stream processing frameworks. The streaming space has improved a great deal recently, and this talk gave a clear argument that Flink was mature enough for demanding and complex applications. Flink offers a flexible treatment of time: events can be processed immediately (one at a time), in windows based on when the events arrived at the processor, or in windows based on when the events were actually generated (even if they arrived out of order). Flink supports failure recovery with exactly-once delivery but also offers extremely high throughput and low latency: a basic Flink stream processing application offers two orders of magnitude more throughput than an equivalent Storm application. Flink also provides a batch-oriented API with a collection-style interface and an optimizing query planner.
After the keynote, there were several lightning talks. Lightning talks at many events are self-contained (and often speculative, provocative, or describing promising work in progress). However, these lightning talks were abbreviated versions of talks on the regular program. In essence, they were ads for talks to see later (think of how academic CS conference talks are really ads for papers to read later). This was a cool idea and definitely helped me navigate a two-track program that was full of interesting abstracts.
Michael Häusler of ResearchGate gave a talk about the process of evaluating new data processing frameworks, focusing in particular on determining whether a framework makes simple tasks simple. (Another step, following Alan Kay’s famous formulation, is determining whether or not a framework makes complex tasks possible.) The “simple task” that Häusler set out to solve was finding the top 5 coauthors for every author in a database of publications; he implemented this task in Hive (running on Tez), Hadoop MapReduce, and Flink. Careful readers will note that this is not really a fair fight: SQL and HiveQL do not admit straightforward implementations of top-k queries and MapReduce applications are not known for elegant and terse codebases; indeed, Häusler acknowledged as much. However, it was still impressive to see how little code was necessary to solve this problem with Flink, especially when contrasted with the boilerplate of MapReduce or all of the machinery to implement a user-defined aggregate function to support top-k in Hive. The Flink solution was also twice as fast as the custom MapReduce implementation, which was in turn faster than Hive on Tez.
Declarative Machine Learning with the Samsara DSL
Sebastian Schelter introduced Samsara, a DSL for machine learning and linear algebra. Samsara supports in-memory vectors (both dense and sparse), in-memory matrices, and distributed row matrices, and provides an R-like syntax embedded in Scala for operations. The distributed row matrices are a unique feature of Samsara; they support only a subset of matrix operations (i.e., ones that admit efficient distributed implementations) and go through a process of algebraic optimization (including generating logical and physical plans) to minimize communication during execution. Samsara can target Flink, Spark, and H2O.
Streaming and parallel decision trees in Flink
Training decision trees in batch frameworks requires a view of the entire learning set (and sufficient training data to generate a useful tree). In streaming applications, each event is seen only once, the classifier must be available immediately (even if there is little data to train on), and the classifier should take feedback into account in real time. In this talk, Anwar Rizal of Amadeus presented a technique for training decision trees on streaming data by building and aggregating approximate histograms for incoming features and labels.
Juggling with bits and bytes — how Apache Flink operates on binary data
Applications using the Java heap often exhibit appalling memory efficiency; the heap footprint of Java library data structures can be 75% overhead or more. Since data processing applications frequently create, manipulate, and serialize many objects — some of which may be quite short-lived — there are potentially significant performance pitfalls to using the JVM directly for memory allocation. In this talk, Fabian Hueske of data Artisans presented Flink’s approach: combining a custom memory-management and serialization stack with algorithms that operate directly on compressed data. Flink jobs are thus more memory-efficient than programs that use the Java heap directly, exhibit predictable space usage, and handle running out of memory gracefully by spilling intermediate results to disk. In addition, Flink’s use of database-style algorithms to sort, filter, and join compressed data reduces computation and communication costs.
Stateful Stream Processing
Data processing frameworks like Flink and Spark support collection-style APIs where distributed collections or streams can be processed with operations like filter and so on. In addition to these, it is useful to support transformations that include state, analogously to the fold function on local collections. Of course, fold by itself is fairly straightforward, but a reliable fold-style operation that can recover in the face of worker failures is more interesting. In this talk, Márton Balassi and Gábor Hermann presented an overview of several different approaches to supporting reliable stream processing with state: the approaches used by Flink (both versions 0.9.1 and 0.10), Spark Streaming, Samza, and Trident. As one might imagine, Spark Streaming and Samza get a lot of mileage out of delegating to underlying models (immutable RDDs in Spark’s case and a reliable unified log in Samza’s). Flink’s approach of using distributed snapshots exhibits good performance and enables exactly-once semantics, but it also seems simpler to use than alternatives. This has become a recurring theme in my investigation of Flink: technical decisions that are advertised as improving performance (latency, throughput, etc.) also, by happy coincidence, admit a more elegant programming model.
Fault-tolerance and job recovery in Apache Flink
This talk was an apt chaser for the Stateful Stream Processing talk. Till Rohrmann presented Flink’s approaches to checkpointing and recovery, showing how Flink can be configured to support at-most-once delivery (the weakest guarantee), at-least-once delivery, or exactly-once delivery (the strongest guarantee). The basic approach Flink uses for checkpointing operator state is the Chandy-Lamport snapshot algorithm, which enables consistent distributed snapshots in a way that is transparent to the application programmer. This approach also enables configurable tradeoffs between throughput and snapshot interval, but it’s far faster (and nicer to use) than Storm’s approach in any case. Recovering operator state is only part of the fault-tolerance picture, though; Till’s talk also introduced Flink’s approach for supporting a highly-available Job Manager.
Other talks worth checking out
Here are a few talks that I’d like to briefly call out as worth watching:
- “Automatic detection of web trackers”, presented by Vasia Kalavri, was a cool application of graph processing in Flink.
- “Applying Kappa architecture in the telecom industry”, presented by Ignacio Mulas Viela, showed how to put a realistic streaming topology into production.
- In “A tale of squirrels and storms”, Matthias Sax introduced the Storm compatibility layer for Flink, enabling users to run Storm topologies on Flink with minimal code changes.
- Aljoscha Krettek’s talk covered the different approaches Flink supports for defining windows over streams.
- My colleague Suneel Marthi presented on a Flink port of the BigPetStore big data application blueprints.
The data Artisans team and the Flink community clearly put a lot of hard work towards making this a really successful conference. The venue (pictured above) was unique and cool, the overall vibe was friendly and technical, and I didn’t see a single talk that I regretted attending. (This is high praise indeed for a technical conference; I may have been lucky, but I suspect it’s more likely that the committee picked a good program.) I especially appreciated the depth of technical detail in the talks by Flink contributors on the second afternoon, covering both design tradeoffs and implementation decisions. I’m hoping to be back for a future iteration.