How to deploy a TensorFlow model in minutes

Marcin Laskowski

Introduction

As we all know, AI is the major driving force behind many of the applications we see today. However, to put AI into those applications, we need to go through an Exploratory Data Analysis (EDA) phase, train a model, and, once the model is working, deploy and integrate it with the app. The last step (deployment and integration) is not an easy task if you want to make it production-ready.

What if there were an easy and fast way to go from a trained model to an actual use case with a click of a button? What if that process were production-ready?

In this article, we will show you how to deploy a TensorFlow model in three quick steps.

Step 1: Develop a TensorFlow model

For the purpose of this tutorial, we will deploy an implementation of Style Transfer (also called Neural Style Transfer) based on the work of Golnaz Ghiasi, Honglak Lee, and colleagues.

Neural style transfer is an optimization technique that takes two images as input (a content image and a style reference image) and combines them so that the output image looks like it is “painted” in the style of the reference image.

The approach presented in the paper combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks, so the algorithm is much faster and can work in real time.
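For background, the original optimization-based formulation of neural style transfer (Gatys et al.) searches for an output image x that minimizes a weighted sum of a content loss against the content image c and a style loss against the style image s; fast style transfer networks like the one used here train a feed-forward network to approximate that optimum in a single pass:

```latex
\mathcal{L}(x) = \alpha \, \mathcal{L}_{\text{content}}(x, c) + \beta \, \mathcal{L}_{\text{style}}(x, s)
```

Here α and β control how strongly the output follows the content image versus the style reference.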

The input of the model is a content image and a style reference; the output is the combination of both images.

We have already prepared the Style Transfer model in our GitHub repository. You don’t have to do anything with the repository for now.

Step 2: Deploy a TensorFlow model

In the traditional Machine Learning deployment workflow, you need to go through several steps: create a web service, build a Docker image, and serve it on a Kubernetes cluster.
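To make the comparison concrete, here is a minimal, hypothetical sketch of just the first of those traditional steps: a web service wrapping the model. It is purely illustrative and uses only the Python standard library; a real service would typically use Flask or FastAPI behind a WSGI server, then be packaged into a Docker image and deployed on Kubernetes.

```python
# Minimal sketch of the "create a web service" step, stdlib only.
import json
from wsgiref.simple_server import make_server


def application(environ, start_response):
    # Parse the JSON request body, e.g. {"url": "https://..."}.
    length = int(environ.get("CONTENT_LENGTH") or 0)
    raw = environ["wsgi.input"].read(length) if length else b"{}"
    payload = json.loads(raw)

    # A real service would call the model here; this placeholder just
    # echoes the requested URL back.
    result = {"prediction": "stylized:" + payload.get("url", "")}

    body = json.dumps(result).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]


# To serve locally:
# make_server("", 8080, application).serve_forever()
```

Even this toy version shows why the manual route is tedious: the service, its container image, and the cluster it runs on all have to be built and maintained by hand.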

Syndicai takes care of all those steps. You just need to connect the Git repository containing your model, and a REST API will be created automatically with one click. Moreover, Syndicai takes care of resource scalability. The resulting API offers great flexibility, because you can connect it to any device.

AI model deployment the traditional way (top) vs. the Syndicai way (bottom)

You can also try deploying a Keras model.

Prepare a repository

Apart from putting your model in the GitHub repository, you have to upload two additional files there: requirements.txt and syndicai.py.

requirements.txt – a file listing all the libraries and frameworks needed to recreate the model’s environment:

numpy==1.19.4
matplotlib==3.2.2
Pillow==7.0.0
tensorflow==2.4.0
tensorflow-hub==0.10.0
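One way to produce such a pinned list is to capture the versions from a working environment with pip freeze, then trim the output down to only the packages the model actually imports:

```shell
# Write the exact versions installed in the current environment to
# requirements.txt, then prune it to the model's real dependencies.
python -m pip freeze > requirements.txt
```

Pinning exact versions like this makes the rebuilt environment reproducible.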

syndicai.py – the main file with the PythonPredictor class responsible for model prediction:

import io
import base64

from PIL import Image
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

from helpers import load_image


class PythonPredictor:

    def __init__(self, config):
        # Define style image
        self.style_image_url = 'https://upload.wikimedia.org/wikipedia/commons/c/c5/Edvard_Munch%2C_1893%2C_The_Scream%2C_oil%2C_tempera_and_pastel_on_cardboard%2C_91_x_73_cm%2C_National_Gallery_of_Norway.jpg'
        # Import TF-Hub module
        hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
        self.hub_module = hub.load(hub_handle)

    def predict(self, payload):
        # Define content image
        content_image_url = payload["url"]

        # Load images
        content_img_size = (500, 500)
        style_img_size = (300, 300)

        style_image = load_image(self.style_image_url, style_img_size)
        content_image = load_image(content_image_url, content_img_size)
        style_image = tf.nn.avg_pool(
            style_image, ksize=[3, 3], strides=[1, 1], padding='SAME')

        # Stylize content image with given style image.
        outputs = self.hub_module(tf.constant(content_image),
                                  tf.constant(style_image))
        stylized_image = outputs[0]

        # get PIL image and convert to base64
        img = Image.fromarray(np.uint8(stylized_image.numpy()[0] * 255))
        im_file = io.BytesIO()
        img.save(im_file, format="PNG")
        im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")

        return im_bytes
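The file above imports load_image from a helpers module in the repository that is not shown here. The real helper downloads the image from a URL and prepares it for the model; a simplified, hypothetical sketch that takes a local file path or file object instead (to stay self-contained) could look like this:

```python
import numpy as np
from PIL import Image


def load_image(source, image_size=(256, 256)):
    """Load an image, resize it, scale pixels to [0, 1], and add a
    batch dimension so the result has shape (1, height, width, 3).

    `source` is a file path or file-like object; the helper in the
    actual repository fetches the image from a URL instead.
    """
    img = Image.open(source).convert("RGB")
    img = img.resize(image_size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr[np.newaxis, ...]
```

The (1, height, width, 3) float layout matches what the TF-Hub stylization module expects for its image inputs.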

These two files are necessary for Syndicai to be able to recreate the environment and to know which function to use for prediction.

Connect the repository to Syndicai

Once the GitHub repository with requirements.txt and syndicai.py is ready, we can connect it to the Syndicai platform. In order to do that, go to https://app.syndicai.co/, log in, and click New Model on the Overview page. You will be redirected to a quick form. Follow the steps, and as soon as you finish, the infrastructure will start building. You will need to wait a couple of minutes for the model to become Active.

The deployed Style Transfer model should have the build status “success” and the badge “Active” next to its name.

For more information about model preparation or the deployment process, go to the Syndicai Docs.

Step 3: Integrate a TensorFlow model

You’ve done it!

Your model is deployed, and your REST API is ready. To perform a quick test, just copy and paste the sample input below into the model’s Run section:

{
    "url": "https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg"
}
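Beyond the web UI, the deployed endpoint can also be called programmatically. The sketch below uses only the standard library; the endpoint URL and authentication details are placeholders (copy the real values from your model’s page on the platform), and it assumes the response body is the base64 string returned by predict() — the actual response envelope may differ:

```python
import base64
import json
import urllib.request


def decode_prediction(b64_png):
    """Decode the base64 string produced by predict() into PNG bytes."""
    return base64.b64decode(b64_png)


def stylize(api_url, image_url, token=None):
    """POST {"url": ...} to the deployed model and return PNG bytes.

    `api_url` and `token` are placeholders -- take the real values
    from your model's page on the Syndicai platform.
    """
    data = json.dumps({"url": image_url}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = "Bearer " + token  # assumed scheme
    req = urllib.request.Request(api_url, data=data, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return decode_prediction(resp.read().decode("utf-8"))
```

You could then write the returned bytes to a file such as stylized.png and open it in any image viewer.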

Remember that the model needs to be Active for this to work!

If everything works fine, you can now connect the API to any device or service. As an example, you can go to the Showcase page and explore a sample implementation of the model.

Style Transfer implementation: https://showcase.syndicai.co/style-transfer

Summary

You have seen how to deploy a TensorFlow model in minutes. Syndicai allows you to deploy and integrate AI models at scale in a simple and fast way. You don’t need to set up the infrastructure or take care of scalability; the Syndicai Platform will do it for you.

If you found this useful, or you would like more tutorials like this one, please drop us a line by email or catch us on Slack.