Using Python Notebooks, Keras and TensorFlow in SKIL v1.0.1

With the 1.0.1 release, the SKIL platform lets you train and host Python-based notebooks and models. That is, SKIL supports machine learning in the Python ecosystem and on the JVM, bridging the two to solve infrastructure problems for data scientists.


You can now import TensorFlow models directly into SKIL's AI model server. Model import works both for models created within a SKIL notebook and for TensorFlow models created elsewhere.

Building a Keras-TensorFlow Model in SKIL

In this post, we’ll use a notebook based on François Chollet's well-known Python script for training a neural network on the MNIST dataset.

The first step is to create a new workspace to hold our experiment:


The second step is to create a new experiment inside the workspace to organize and manage the Python notebook experiment.


Clicking "New Experiment", as shown above, brings up a new notebook that supports both Python and Scala. We've already converted the MNIST Keras Python script into a JSON notebook in the GitHub repository for this example. You can git clone the repo at:

From the workspace screen, create a new experiment and import the JSON notebook included in the repo above at:

Once you have cloned the SKIL_Examples repo, the file will be on your local disk. You can import it into the experiment using the "create new experiment" dialog window by selecting the option "Choose an existing notebook", shown below.


This lets us use an existing notebook as the starting point for our experiment. In this example's notebook, downloading and preparing the MNIST dataset is handled in the code itself; the dataset ships as a single file, though some distributions provide it as individual 28x28 pixel PNG images. To make it easy to test the model in the model server, we include just a small sample of MNIST digit images in the repo. These sample images are located at:

With the new experiment created and the Keras MNIST notebook loaded, press the “Run all paragraphs” button in the Zeppelin notebook, as shown below.


Each paragraph in the notebook will send lines of output to the console area attached to that paragraph. The main training paragraph of the Python notebook will have output similar to what you see below.

60000 train samples
10000 test samples
Layer (type)                 Output Shape              Param #  
dense_4 (Dense)              (None, 512)               401920   
dropout_3 (Dropout)          (None, 512)               0        
dense_5 (Dense)              (None, 512)               262656   
dropout_4 (Dropout)          (None, 512)               0        
dense_6 (Dense)              (None, 10)                5130     
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
Train on 60000 samples, validate on 10000 samples
Epoch 1/5

  128/60000 [..............................] - ETA: 28s - loss: 2.4162 - acc: 0.1406
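The parameter counts in that summary can be verified by hand. As a quick sanity check (a minimal sketch, using the standard formula for a fully connected layer), the totals fall out of the layer shapes:

```python
# Sanity-check the parameter counts from the Keras summary above.
# A Dense layer with n_in inputs and n_out units has n_in * n_out weights
# plus n_out biases; Dropout layers add no parameters.

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# MNIST images are flattened to 784 (28 * 28) inputs.
layers = [
    dense_params(784, 512),   # dense_4: 401,920
    0,                        # dropout_3
    dense_params(512, 512),   # dense_5: 262,656
    0,                        # dropout_4
    dense_params(512, 10),    # dense_6: 5,130
]
total = sum(layers)
print(total)  # 669706 -- matches "Total params" in the summary
```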

Some notable lines in this notebook are:

model_id = skilContext.addModelToExperiment(z, model, 'mnist_5epoch')

skilContext.addEvaluationToModel(z, model_id, model, x_test, y_test, name="kears_5_epoch")

These lines show how the SKILContext system works inside Python notebooks, just like it does in Scala-based notebooks. This lets you work with the rest of SKIL consistently across multiple deep-learning training systems when deploying and integrating models.

The SKILContext integration means you don't have to worry about where you saved the MNIST model file or keep up with different runs of the model script. SKIL handles and organizes that.

Deploying a TensorFlow Model to SKIL

This section is about taking a newly created MNIST model and deploying it to the model server, so that external applications can request handwritten-digit classifications via a REST endpoint.

The SKIL model server allows SKIL to store deep learning models and integrate them with user applications. It stores all model revisions for a given experiment, and lets you choose which model you’d like to “deploy” or mark as “active”. Deploying a model means it will be the one serving predictions to production applications that query the REST endpoint.

This gives developers and administrators the ability to separate application logic and deployment from model management and rollback mechanics. You can treat a model the same way you'd treat a relational database table, controlling updates, deletes, and rollbacks without also needing to update each individual application that uses the model/table.

Let’s look at the model generated from the notebook that you ran in this tutorial, which is now indexed in the model server.


As with Scala Zeppelin notebooks, the SKILContext object sent the evaluation results to be archived in the model server. To deploy this model, click the “deploy wizard” button, which brings up the pop-up panel below.


Within SKIL, a “deployment” is a “logical group of models, transforms, and KNN endpoints”. It helps us logically group deployed components to track what goes together and to manage the system better. As explained in the dialog, this wizard will make your model available via a REST API.


Switching gears, we'll put together a Java REST API client that a remote application can use to request classifications for MNIST digits.

Querying the Model Server for MNIST Classifications via REST

For this example, we created an example Java client here.

This code was downloaded when you git cloned the SKIL Examples:

To run the client application, use Maven to build and package the JAR that you'll then run from the command line to send an image to the model server for classification. First, cd into the directory:


and then running Maven with the command:

mvn package

This will generate a JAR file called skil-example-mnist-tf-1.0.0.jar in the ./target subdirectory of the project.

Understanding the Client Code

Two sections of the code handle sending images over a REST API with SKIL: the first performs basic authentication against the SKIL model server; the second uses the resulting token to make further REST requests for image classification.

The included Authorization class encapsulates some basic REST logic to send a username and password to the SKIL model server, to get an authorization token to use in the classification REST request. It makes a basic REST request like this:

authToken = Unirest.post(MessageFormat.format("http://{0}:{1}/login", host, port))
            .header("accept", "application/json")
            .header("Content-Type", "application/json")
            .body(new JSONObject() // using a JSONObject because the field functions couldn't be translated to acceptable JSON
                    .put("userId", userId)
                    .put("password", password)
                    .toString())
            .asJson()
            .getBody().getObject().getString("token");

This retrieves the authorization token necessary for the next REST request, which involves sending this token and the base64-encoded image bytes via a REST POST request.

The next section loads an image from a local URI, converts it into an INDArray, reshapes the data, and finally base64-encodes the bytes for REST transport.

ImageWritable img = imgTransformProcess.transformFileUriToInput( imageFile.toURI() );

INDArray finalRecord = imgTransformProcess.executeArray( img ).reshape(1, 28 * 28);


String imgBase64 = Nd4jBase64.base64String(finalRecord);

The last snippet of code pulls this information (auth token and base64 image data) into a new REST call and sends it to the SKIL model server.
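The same two-call flow can be sketched in Python for readers who prefer to script the REST calls directly. This is a minimal sketch using only the standard library: the /login payload fields (userId, password) come from the Java snippet above, but the token field name, the classification request schema, and the Authorization header format are assumptions, not the exact wire format the SKIL model server expects (the Java client sends an ND4J-serialized array):

```python
import base64
import json
import urllib.request

HOST, PORT = "localhost", 9008

def build_login_payload(user_id, password):
    # Mirrors the JSONObject built in the Java Authorization class.
    return json.dumps({"userId": user_id, "password": password}).encode("utf-8")

def login(user_id, password):
    # POST /login and pull the auth token out of the JSON response.
    req = urllib.request.Request(
        f"http://{HOST}:{PORT}/login",
        data=build_login_payload(user_id, password),
        headers={"accept": "application/json",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["token"]   # response field name assumed

def classify(endpoint, auth_token, image_bytes):
    # Base64-encode the image data and POST it with the auth token.
    # The body schema and header format here are illustrative guesses.
    body = json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})
    req = urllib.request.Request(
        endpoint,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {auth_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```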

Running the Client Code

Let's run the client code to make some classifications. We'll assume here that SKIL is running locally (or has its ports mapped locally), that you've deployed the model from earlier in this example, and that the model endpoint address is:


To send an image for classification to the SKIL model server via REST, use the included client example that is executed from the command line like this:

java -jar ./target/skil-example-mnist-tf-1.0.0.jar --input [image file location] --endpoint [skil endpoint URI]

To send a blank image to the model server to test out a non-MNIST image:

java -jar ./target/skil-example-mnist-tf-1.0.0.jar --input blank --endpoint http://localhost:9008/endpoints/mnist/model/mnistmodel/default/

This should return the following (along with some other debug log lines):

classification return: {"maxOutcomes":["5"],"rankedOutcomes":[["5","8","3","9","2","7","4","1","6","0"]],"probabilities":[[0.12396843731403351,0.1205655038356781,0.10955488681793213,0.10501493513584137,0.09488383680582047,0.09272368997335434,0.09050352871417999,0.08859185874462128,0.08754267543554306,0.08665058016777039]]}

Here the "5" digit received the highest classification probability, 0.1239..., but only marginally higher than the "8" label at 0.1205.... Effectively, the model returned low, closely clustered probabilities for every digit, meaning it didn't find this image similar enough to any label to make a confident prediction.
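The JSON the model server returns is straightforward to post-process. A short sketch that pulls out the top prediction and its confidence from a response like the blank-image one above (the probabilities array is ranked to match rankedOutcomes):

```python
import json

# The exact response returned for the blank image above.
response = json.loads(
    '{"maxOutcomes":["5"],'
    '"rankedOutcomes":[["5","8","3","9","2","7","4","1","6","0"]],'
    '"probabilities":[[0.12396843731403351,0.1205655038356781,'
    '0.10955488681793213,0.10501493513584137,0.09488383680582047,'
    '0.09272368997335434,0.09050352871417999,0.08859185874462128,'
    '0.08754267543554306,0.08665058016777039]]}'
)

top_label = response["maxOutcomes"][0]
top_prob = response["probabilities"][0][0]     # first entry pairs with rankedOutcomes[0][0]
margin = top_prob - response["probabilities"][0][1]  # gap to the runner-up ("8")
print(top_label, round(top_prob, 4))  # 5 0.124
```

A small margin like this one is exactly the "no confident prediction" situation described above.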

Now let's send a real image from MNIST to the model server. A few MNIST digit images (28x28 pixel PNGs) are included in the example, and we can send those for classification as well:

java -jar ./target/skil-example-mnist-tf-1.0.0.jar --input ./target/classes/mnist_28x28/3/270.png --endpoint http://localhost:9008/endpoints/mnist/model/mnistmodel/default/

The request with the MNIST "3" digit should return something like:

classification return: {"maxOutcomes":["3"],"rankedOutcomes":[["3","9","8","7","6","5","4","2","1","0"]],"probabilities":[[1,0,0,0,0,0,0,0,0,0]]}

In this case, the model is quite confident this is an image of a handwritten "3", returning a 1.0 probability and 0.0 for all other digits.


In this article we've shown how to import external notebooks, use Python and Keras in a SKIL notebook, and then serve the resulting Keras model from the SKIL model server, querying it with remote client REST classification requests.

The SKIL platform helps data science and DevOps teams collaborate on experiments and get deep learning models into production faster.