
# Getting Started with SKIL Endpoints

SKIL lets developers interact with it through endpoints as well as the GUI. We'll explore endpoints in this post.

### What is an endpoint?

An endpoint is a communications channel between a client application and the SKIL server. Anything you've deployed on the SKIL server (models, transforms, KNN) is linked to client applications through endpoints. In other words, endpoints are the connection points between your client and the SKIL server. With endpoints, you can also carry out other operations such as login, listing deployments, and adding and deleting entities.

SKIL endpoints are REST endpoints: URLs to which you pass HTTP requests (GET, POST, PUT, DELETE) along with the expected data.

### Getting started

If you're new to SKIL, take a look at the quickstart guide to get SKIL CE running.

The source code can be found here.

host:port refers to the address where SKIL is deployed (typically localhost:9008 or 127.0.0.1:9008).

The request headers should have the following key-value pairs:

```
authorization: Bearer <auth_token>   (see the Login section for obtaining <auth_token>)
Content-Type: application/json
accept: application/json
```


### Specific Endpoints covered

In this blog, we'll explore the following endpoints:

1. Login
2. Deployment
3. Model
4. Transform
5. KNN

#### 1. Login

Not all requests require authorization, but to get a prediction returned, you need to pass along an authorization token in the request headers as authorization: Bearer <auth_token>.

The authorization token is a JWT token that stores your credentials as a user in an encrypted form and lets the server know if you're an authorized entity to carry out the requested operation.

To obtain an authorization token, send a POST request to http://host:port/login with your user ID and password. The request must include this header: Content-Type: application/json. The request body should look like this:

```json
{
  "userId": <user_id>,
  "password": <password>
}
```


If this process is successful, you'll receive a JWT token in the following format:

```json
{
  "token": <authorization_token>
}
```


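As a minimal sketch of the login flow in Python (standard library only; the helper names are mine, and the userId/password field names are assumptions based on the description above):

```python
import json
import urllib.request

SKIL_BASE = "http://localhost:9008"  # host:port where SKIL is deployed

def login_request(user_id: str, password: str) -> urllib.request.Request:
    """Build the POST /login request; the body field names are assumed."""
    body = json.dumps({"userId": user_id, "password": password}).encode("utf-8")
    return urllib.request.Request(
        f"{SKIL_BASE}/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def auth_headers(token: str) -> dict:
    """The header set that authorized SKIL requests expect."""
    return {
        "authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "accept": "application/json",
    }

# With a running SKIL server, sending the request looks like:
# with urllib.request.urlopen(login_request("admin", "<password>")) as resp:
#     token = json.loads(resp.read())["token"]
```

The auth_headers helper just packages the key-value pairs listed in the Getting started section, so it can be reused for authorized requests.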

#### 2. Deployment

A SKIL deployment is “a logical group of models, transforms, and KNN endpoints”. DL4J models are usually associated with data-pipeline transforms that convert data into a format a neural network can ingest. So a SKIL deployment simply deploys the model along with its transforms and KNNs and exposes them through endpoints to client apps.

The following shows how to use endpoints related to a SKIL deployment:

###### A. Listing the deployments

Send a GET request to http://host:port/deployments to obtain a list of all the deployments. In SKIL CE, you are limited to two deployments at a time.

The response will look like this:

```json
[
  {
    "id": "0",
    "name": "OurFirstDeployment",
    "deploymentSlug": "ourfirstdeployment",
    "status": "Partially Deployed",
    "body": {
      "models": [
        {
          "id": "0",
          "name": "lstm_model",
          "status": "started",
          "scale": 1,
          "uri": [
            "ourfirstdeployment/model/lstmmodel/default"
          ]
        },
        ...
      ],
      "transforms": [],
      "knn": []
    }
  },
  ...
]
```


Here, the response contains some general details alongside arrays holding the information related to models, transforms, and KNNs.
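A sketch of this call in Python; the deployment_summary helper is mine, not part of SKIL, and just flattens the response shown above:

```python
import json
import urllib.request

SKIL_BASE = "http://localhost:9008"

def list_deployments(token: str) -> list:
    """GET /deployments with the Bearer token and parse the JSON body."""
    req = urllib.request.Request(
        f"{SKIL_BASE}/deployments",
        headers={"authorization": f"Bearer {token}", "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def deployment_summary(deployments: list) -> list:
    """Reduce a /deployments response to (id, name, status) tuples."""
    return [(d["id"], d["name"], d["status"]) for d in deployments]

# deployment_summary(list_deployments(token))
```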

###### B. Get deployment by ID

This is similar to obtaining all information about the deployments. We can get the deployment by id as: GET -> http://host:port/deployment/{id}

Replace {id} with the deployment id and you'll have a response looking like this:

```json
{
  "id": "5",
  "name": "New deployment",
  "deploymentSlug": "new-deployment",
  "status": "Not Deployed",
  "body": {
    "models": [],
    "transforms": [],
    "knn": []
  }
}
```


You'll get a null response if the ID is not found.

###### C. Add a deployment

POST -> http://host:port/deployment

The POST body should contain the deployment name:

```json
{
  "name": <deployment_name>
}
```


Upon successful addition, the added deployment will be sent in the response:

```json
{
  "id": "6",
  "name": "new deployment",
  "deploymentSlug": "new-deployment",
  "status": "Not Deployed",
  "body": {
    "models": [],
    "transforms": [],
    "knn": []
  }
}
```
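A sketch of the add-deployment call in Python (the helper name is illustrative):

```python
import json
import urllib.request

SKIL_BASE = "http://localhost:9008"

def create_deployment_request(token: str, name: str) -> urllib.request.Request:
    """Build POST /deployment with the deployment name as the body."""
    return urllib.request.Request(
        f"{SKIL_BASE}/deployment",
        data=json.dumps({"name": name}).encode("utf-8"),
        headers={
            "authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "accept": "application/json",
        },
        method="POST",
    )

# with urllib.request.urlopen(create_deployment_request(token, "new deployment")) as resp:
#     created = json.loads(resp.read())
```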

###### D. Get all deployment models

GET -> http://host:port/deployment/{deploymentID}/models

Replace {deploymentID} with the deployment ID, and you'll get something like this:

```json
[
  {
    "id": 0,
    "created": 1511897328477,
    "updated": 1512605963043,
    "modelType": "model",
    "deploymentId": 0,
    "name": "lstm_model",
    "scale": 1,
    "fileLocation": "file:///tmp/skilmodels/model/ourfirstdeployment_lstmmodel",
    "state": "started",
    "jvmArgs": null,
    "subType": null,
    "labelsFileLocation": null,
    "extraArgs": null,
    "launchPolicy": {
      "@class": "io.skymind.deployment.launchpolicy.DefaultLaunchPolicy",
      "maxFailuresQty": 3,
      "maxFailuresMs": 300000
    },
    "modelState": "STARTED"
  },
  ...
]
```


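A sketch of this request in Python (helper names are mine):

```python
import json
import urllib.request

SKIL_BASE = "http://localhost:9008"

def models_url(deployment_id) -> str:
    """URL listing every model inside one deployment."""
    return f"{SKIL_BASE}/deployment/{deployment_id}/models"

def get_deployment_models(token: str, deployment_id) -> list:
    """GET the deployment's model list with the Bearer token."""
    req = urllib.request.Request(
        models_url(deployment_id),
        headers={"authorization": f"Bearer {token}", "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```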

#### 3. Model

Models are contained within a deployment, so you'll have to specify the deployment with each model-related request. Let's see what endpoints are available for models.

###### A. Add a model

Currently, when adding a model to a deployment, it's assumed that the model file is already present on the server. You have to specify the file location of the model (normally, a zip file) in the request parameters along with some other important details. You can have up to two models in a SKIL CE deployment. Below is the request format for adding a model to a deployment:

POST -> http://host:port/deployment/{deploymentID}/model

Request Body

```
{
  "name": (Model's display name),
  "scale": (Number of servers for serving the model),
  "uri": (List of endpoint names to expose. Format -> :deploymentSlug/:modelType/:modelNameLowerCased/v1),
  "modelType": (Can be "transform", "knn", or "model". Here, it's "model"),
  "fileLocation": (Model file location on the server),
  "jvmArgs": (Such as -Xmx1g),
  "subType": (Such as CSV/IMAGE for datavec, or a similarity-function name for KNN),
  "labelsFileLocation": (Path of file with label names to import),
  "extraArgs": (JSON string with extra parameters, such as '{"inverted": true}'),
  "etlJson": (Transform JSON),
  "inputNames": (Names of input placeholders for TensorFlow or ONNX models),
  "outputNames": (Names of output placeholders for TensorFlow or ONNX models)
}
```


In the above request body format, a deployment slug is a URL-safe version of the deployment name; for example, spaces are replaced with hyphens and the name is lowercased.

The scale refers to the number of nodes you want handling your model's requests. You can increase it to distribute model requests across as many nodes as you need. This feature is only available in SKIL EE.

Sample Request

```json
{
  "name": "new_model",
  "modelType": "model",
  "fileLocation": "file:///var/skil/storage/models/d8...eb28.zip",
  "scale": "1",
  "uri": [
    "ourfirstdeployment/model/new_model/default"
  ]
}
```


Response

```json
{
  "id": 3,
  "created": 1513355009071,
  "updated": null,
  "modelType": "model",
  "deploymentId": 0,
  "name": "new_model",
  "scale": 1,
  "fileLocation": "file:///tmp/skilmodels/model/ourfirstdeployment_newmodel",
  "state": "stopped",
  "jvmArgs": null,
  "subType": null,
  "labelsFileLocation": null,
  "extraArgs": null,
  "launchPolicy": {
    "@class": "io.skymind.deployment.launchpolicy.DefaultLaunchPolicy",
    "maxFailuresQty": 3,
    "maxFailuresMs": 300000
  },
  "modelState": "STOPPED"
}
```
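Putting the sample request above together in Python (the helper name and defaults are illustrative; the uri construction mirrors the sample request):

```python
import json
import urllib.request

SKIL_BASE = "http://localhost:9008"

def add_model_request(token, deployment_id, deployment_slug, name, file_location, scale=1):
    """Build the POST that registers a model file already present on the server."""
    body = {
        "name": name,
        "modelType": "model",
        "fileLocation": file_location,
        "scale": str(scale),
        "uri": [f"{deployment_slug}/model/{name}/default"],
    }
    return urllib.request.Request(
        f"{SKIL_BASE}/deployment/{deployment_id}/model",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "accept": "application/json",
        },
        method="POST",
    )

# req = add_model_request(token, 0, "ourfirstdeployment", "new_model",
#                         "file:///var/skil/storage/models/<model>.zip")
```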

###### B. Reimport a model

If you want to update an existing model with a new model, you can reimport it and swap the new one in. The request format is the same as when you add a model -- the only difference is in the POST URL format:

POST -> http://host:port/deployment/{deploymentID}/model/{modelID}

Sample Request

```json
{
  "name": "new_model4",
  "modelType": "model",
  "fileLocation": "file:///var/skil/storage/models/d8bf...eb28.zip"
}
```


Response

```json
{
  "id": 0,
  "created": 1511897328477,
  "updated": 1513355643929,
  "modelType": "model",
  "deploymentId": 0,
  "name": "lstm_model",
  "scale": 1,
  "fileLocation": "file:///tmp/skilmodels/model/ourfirstdeployment_newmodel4",
  "state": "started",
  "jvmArgs": null,
  "subType": null,
  "labelsFileLocation": null,
  "extraArgs": null,
  "launchPolicy": {
    "@class": "io.skymind.deployment.launchpolicy.DefaultLaunchPolicy",
    "maxFailuresQty": 3,
    "maxFailuresMs": 300000
  },
  "modelState": "STARTED"
}
```


In this way, the previous model will be updated with the new one.

###### C. Delete a model

To delete a model, you just need to send a DELETE request to the following URL format:

DELETE -> http://host:port/deployment/{deploymentID}/model/{modelID}

The response will look like this:

```json
{
  "status": "Removed Model"
}
```
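A sketch of the delete call (helper name is mine):

```python
import urllib.request

SKIL_BASE = "http://localhost:9008"

def delete_model_request(token, deployment_id, model_id) -> urllib.request.Request:
    """Build the DELETE for one model inside a deployment."""
    return urllib.request.Request(
        f"{SKIL_BASE}/deployment/{deployment_id}/model/{model_id}",
        headers={"authorization": f"Bearer {token}", "accept": "application/json"},
        method="DELETE",
    )

# urllib.request.urlopen(delete_model_request(token, 0, 3))
```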



###### D. Setting model state

A model state can be set to the following:

1. start (to start serving the model)
2. stop (to stop serving the model)

To do this you need to:

POST -> http://host:port/deployment/{deploymentID}/model/{modelID}/state

Request Body

```json
{
  "state": "stop"
}
```


When that succeeds, the response will be:

```json
{
  "id": 0,
  ...,
  "modelState": "STOPPED"
}
```


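A sketch of the state-change call; the helper name is mine, and it accepts only the two states listed above:

```python
import json
import urllib.request

SKIL_BASE = "http://localhost:9008"

def set_model_state_request(token, deployment_id, model_id, state):
    """Build the POST that starts or stops serving a model."""
    if state not in ("start", "stop"):
        raise ValueError("state must be 'start' or 'stop'")
    return urllib.request.Request(
        f"{SKIL_BASE}/deployment/{deployment_id}/model/{model_id}/state",
        data=json.dumps({"state": state}).encode("utf-8"),
        headers={
            "authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "accept": "application/json",
        },
        method="POST",
    )
```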

#### 4. Transform

Using transform endpoints is almost the same as using model endpoints: you change the modelType from model to transform.

Also, you need to specify whether the transform is an image transform or a CSV transform by setting the subType field to IMAGE or CSV, respectively.

For adding a transform, the endpoint is:
POST -> http://host:port/deployment/{deploymentID}/model

A sample request will look like this:

```json
{
  "name": "new_transform",
  "modelType": "transform",
  "subType": "IMAGE",
  "fileLocation": "file:///var/skil/storage/.../new_transform.json",
  "scale": 1,
  "uri": [
    "ourfirstdeployment/model/new_transform/default"
  ]
}
```


And the response will give you the transform that was created, alongside other computed details such as the created date, updated date, state, etc.

Similarly, for reimporting the transform, you need to:

POST -> http://host:port/deployment/{deploymentID}/model/{modelID}

Just make sure that the ID belongs to a transform; i.e. you shouldn't reimport a transform over an entity of a different type, such as a KNN.

Similarly, to delete a transform:

DELETE -> http://host:port/deployment/{deploymentID}/model/{modelID}

and to set the state:

POST -> http://host:port/deployment/{deploymentID}/model/{modelID}/state
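Since transforms reuse the model endpoints, a small payload builder is enough to sketch the difference (the helper is mine; only the modelType and subType values change from the model case):

```python
def transform_payload(name, sub_type, file_location, deployment_slug, scale=1):
    """Request body for adding a transform; sub_type must be 'IMAGE' or 'CSV'."""
    if sub_type not in ("IMAGE", "CSV"):
        raise ValueError("sub_type must be 'IMAGE' or 'CSV'")
    return {
        "name": name,
        "modelType": "transform",
        "subType": sub_type,
        "fileLocation": file_location,
        "scale": scale,
        "uri": [f"{deployment_slug}/model/{name}/default"],
    }
```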

#### 5. KNN

The same is true for KNN. The request endpoints and request formats are similar to the model and transform types as explained above.

To add a KNN, the modelType will be knn and the subType can be a distance function such as "euclidean", "manhattan", "cosinedistance", "jaccard".

A sample request will look like this:

POST -> http://host:port/deployment/{deploymentID}/model

```json
{
  "name": "new_knn",
  "modelType": "knn",
  "subType": "euclidean",
  "fileLocation": "file:///var/skil/storage/.../new_knn.bin",
  "scale": 1,
  "uri": [
    "ourfirstdeployment/model/new_knn/default"
  ]
}
```


Similarly, for reimporting a KNN, you need to:

POST -> http://host:port/deployment/{deploymentID}/model/{modelID}

Just make sure that the ID belongs to a KNN; i.e. you shouldn't reimport a KNN over an entity of a different type.

Similarly, to delete a KNN:

DELETE -> http://host:port/deployment/{deploymentID}/model/{modelID}

and to set the state:

POST -> http://host:port/deployment/{deploymentID}/model/{modelID}/state
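As with models and transforms, only the payload values change; a sketch (the helper name is mine, and the distance functions listed are the ones named above):

```python
KNN_DISTANCE_FUNCTIONS = ("euclidean", "manhattan", "cosinedistance", "jaccard")

def knn_payload(name, distance, file_location, deployment_slug, scale=1):
    """Request body for adding a KNN; the distance function goes in subType."""
    if distance not in KNN_DISTANCE_FUNCTIONS:
        raise ValueError(f"unknown distance function: {distance}")
    return {
        "name": name,
        "modelType": "knn",
        "subType": distance,
        "fileLocation": file_location,
        "scale": scale,
        "uri": [f"{deployment_slug}/model/{name}/default"],
    }
```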

#### Data Normalization

To use a normalizer with the predict service, you need to save the normalizer with the model via org.deeplearning4j.util.ModelSerializer#addNormalizerToModel(File, Normalizer).

Then in the predict request, alongside the input array, pass needsPreProcessing: true.

See the quickstart guide for a reference on prediction.
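As a heavily hedged sketch: only the needsPreProcessing flag is confirmed by the text above; the other field name in this payload builder is a hypothetical placeholder, so check the quickstart guide for the real request shape:

```python
def predict_payload(input_array, needs_preprocessing=True):
    """Build a predict request body.

    'inputs' is a hypothetical field name for the input array;
    'needsPreProcessing' is the flag described in the text.
    """
    return {
        "inputs": input_array,  # hypothetical field name, not confirmed by the docs
        "needsPreProcessing": needs_preprocessing,
    }
```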

### What's next?

Read here about the SKIL deployment client, an intuitive client interface between the SKIL server and your client app.