In my last post, I showed some applications of source-to-image workflows for data scientists. In this post, I’ll show another: automatically generating a model serving microservice from a git
repository containing a Jupyter notebook that trains a model. The prototype s2i builder I’ll be describing is available here as source or here as an image (check the blog-201810
tag).
Basic constraints
Obviously, practitioners can create notebooks that depend on any combination of packages or data, and that require any sort of oddball execution pattern you can imagine. For the purposes of this prototype, we’re going to be (somewhat) opinionated and impose a few requirements on the notebook:
- The notebook must work properly if all the cells execute in order.
- One of the notebook cells will declare the library dependencies for the notebook as a list of name, version lists called `requirements`, e.g., `requirements = [['numpy', '1.10']]`.
- The notebook must declare a function called `predictor`, which will return the result of scoring the model on a provided sample.
- The notebook may declare a function called `validator`, which takes a sample and will return `True` if the sample provided is of the correct type and `False` otherwise. The generated service will use this to check whether a sample has the right shape before scoring it. (If no `validator` is provided, the generated service will do no error-checking on arguments.)
A running example
Consider a simple example notebook. This notebook has `requirements` specified:

```python
requirements = [["numpy", "1.15"], ["scikit-learn", "0.19.2"], ["scipy", "1.0.1"]]
```
It also trains a model (in this case, simply optimizing 7 cluster centers for random data):
```python
import numpy as np
from sklearn.cluster import KMeans

DIMENSIONS = 2
randos = np.random.random((40000, DIMENSIONS))
kmodel = KMeans(n_clusters=7).fit(randos)
```
Finally, the notebook also specifies `predictor` and `validator` methods. (Note that the `validator` method is particularly optimistic – you'd want to do something more robust in production.)
```python
def predictor(x):
    return kmodel.predict([x])[0]

def validator(x):
    return len(x) == DIMENSIONS
```
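To see the contract in action, here's a quick sanity check you could run in a final notebook cell (this cell is my illustration, not part of the prototype's requirements):

```python
# Hypothetical sanity check: validate and score a fresh sample
sample = np.random.random(DIMENSIONS)
assert validator(sample)
print(predictor(sample))  # prints a cluster index between 0 and 6
```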
What the builder does
Our goal with a source-to-image builder is to turn this (indeed, any notebook satisfying the constraints mentioned above) into a microservice automatically. This service will run a basic application skeleton that exposes the model trained by the notebook on a REST endpoint. Here’s a high-level overview of how my prototype builder accomplishes this:
- It preprocesses the input notebook twice: once to generate a script that produces a requirements file from the `requirements` variable in the notebook, and once to generate a script that produces a serialized model from the contents of the notebook,
- It runs the first script, generating a `requirements.txt` file, which it then uses to install the dependencies of the notebook and the model service in a new virtual environment (which the model service will ultimately run under), and
- It runs the second script, which executes every cell of the notebook in order and then captures and serializes the `predictor` and `validator` functions to a file (sketched after this list).
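The capture step might look roughly like the following sketch. I'm assuming `nbformat` for reading cells and `cloudpickle` for serializing functions along with the globals they reference; the prototype's generated script may differ in its details:

```python
# Rough sketch of the capture step: run every code cell in order in a
# shared namespace, then serialize the resulting functions. nbformat and
# cloudpickle are assumptions, not necessarily what the prototype uses.
import cloudpickle
import nbformat

def capture_model(notebook_path, output_path="model.pkl"):
    nb = nbformat.read(notebook_path, as_version=4)
    namespace = {}
    for cell in nb.cells:
        if cell.cell_type == "code":
            exec(cell.source, namespace)  # execute cells just as Jupyter would
    with open(output_path, "wb") as f:
        cloudpickle.dump(
            {
                "predictor": namespace["predictor"],
                # validator is optional: accept anything if it's missing
                "validator": namespace.get("validator", lambda _: True),
            },
            f,
        )
```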
The model service itself is a very simple Flask application that runs in the virtual Python environment created from the notebook's requirements and reads the serialized model generated after executing the notebook. In the case of our running example, it would take a JSON array `POST`ed to `/predict` and return the number of the closest cluster center.
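A minimal skeleton in that spirit might look like this (a sketch assuming the serialization format from the capture sketch above; the prototype's actual application skeleton may differ):

```python
# Minimal model service sketch: load the serialized functions and expose
# them on /predict. Assumes the model.pkl layout from the capture sketch.
import cloudpickle
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:
    model = cloudpickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    sample = request.get_json(force=True)
    if not model["validator"](sample):
        abort(400, "sample has the wrong shape")
    result = model["predictor"](sample)
    # numpy scalars aren't JSON-serializable, so coerce them when possible
    return jsonify(result=result.item() if hasattr(result, "item") else result)

if __name__ == "__main__":
    app.run()  # development server only; see the caveats below
```

A client could then score a sample with, e.g., `requests.post("http://localhost:5000/predict", json=[0.5, 0.5]).json()`.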
Future work and improvements
The goal of the prototype is to show that it is possible to automatically convert notebooks that train predictive models into services that expose those models to clients. There are several ways in which the prototype could be improved:
Deploying a more robust service: currently, the model is wrapped in a simple Flask application running in the standalone development server. Wrapping a model in a Flask application is essentially a running joke in the machine learning community because it's obviously imperfect, yet ubiquitous all the same. While Flask itself offers an attractive set of tradeoffs for developing microservices, the Flask development server is not appropriate for production deployments; other options, such as a production WSGI server, would be better.
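For instance, the same Flask app could run under a production WSGI server. Here's a sketch using waitress (an illustrative choice; gunicorn or another production server would serve equally well, and the module name `notebook_service` is hypothetical):

```python
# Run the generated Flask app under a production WSGI server instead of
# the development server. waitress is an illustrative choice, and
# notebook_service is a hypothetical module containing the app above.
from waitress import serve

from notebook_service import app

serve(app, host="0.0.0.0", port=8080)
```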
Serving a single prediction at a time, with an HTTP round trip and JSON serialization for each, may not meet the latency or throughput requirements of the most demanding intelligent applications. Providing multiple service backends can address this problem: a more sophisticated builder could use the same source notebook to generate several services, e.g., a batch scoring endpoint, a service that consumes samples from one messaging bus and writes predictions to another, or even a service that delivers a signed, serialized model for direct execution within another application component.
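Of these, a batch endpoint is the easiest to picture. Extending the service sketch above, it might look like this (the route name and payload shape are my assumptions):

```python
# Hypothetical batch endpoint, continuing the service sketch above
# (reuses app, model, request, abort, and jsonify from that sketch):
# scoring many samples per request amortizes the HTTP round trip.
@app.route("/predict-batch", methods=["POST"])
def predict_batch():
    samples = request.get_json(force=True)
    if not all(model["validator"](s) for s in samples):
        abort(400, "at least one sample has the wrong shape")
    results = [model["predictor"](s) for s in samples]
    return jsonify(results=[r.item() if hasattr(r, "item") else r for r in results])
```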
The current prototype builder image is built up from the Fedora 27 source-to-image base image; on this base, it then installs Python and a number of packages needed to execute Jupyter notebooks. The generated service image also installs its extra requirements in a virtual environment, but it retains some baggage from the builder image.¹ A multi-stage build would make it possible to jettison dependencies that are only necessary for executing the notebook and building the image (in particular, Jupyter itself) while retaining only those necessary to actually execute the model.
Finally, a multi-stage build would enable cleverer dependency handling. The requirements to run any notebook are a subset of the requirements to run a particular notebook from start to finish, but the requirements to evaluate a model scoring function or sample validation function likely do not include all of the packages necessary to run the whole notebook (or even all of the packages necessary to run any notebook at all). By identifying only the dependencies necessary for model serving – perhaps even automatically – the serving image can be smaller and simpler.
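As a rough illustration of what identifying serving dependencies automatically might involve, one heuristic is to resolve the global names a captured function references and map them back to top-level packages. `referenced_modules` is my hypothetical helper, and this approach would miss dynamic imports and transitive dependencies:

```python
# Hypothetical heuristic for finding the packages a captured function
# actually touches: resolve the global names its bytecode references
# and record the top-level module each one came from.
import types

def referenced_modules(func):
    top_level = set()
    for name in func.__code__.co_names:
        obj = func.__globals__.get(name)
        if obj is None:
            continue  # builtins and locals won't appear in __globals__
        if isinstance(obj, types.ModuleType):
            module = obj.__name__
        else:
            module = getattr(type(obj), "__module__", "") or ""
        root = module.split(".")[0]
        if root and root != "builtins":
            top_level.add(root)
    return top_level

# For the running example, referenced_modules(predictor) would report
# {"sklearn"} (via kmodel), with no mention of Jupyter or any other
# builder-only package.
```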
Footnotes
1. The virtual environment is necessary so that the builder image can run without special privileges – that is, it need only write to the application directory to update the virtual environment. If we needed to update system packages, we'd need to run the builder image as `root`.↩︎