A quick guide on how to deploy a depth estimator at scale without creating a webservice, building a Docker image, or setting up infrastructure.

Introduction

Welcome to another tutorial in the Develop, Deploy, and Integrate series. The main goal of this series is to uncover the fastest and easiest way to deploy a deep learning model at scale: no special configuration, no long infrastructure setup, and no difficult monitoring once the model is live. Just a simple path from model to actual use case... in minutes!

All tutorials in the series are divided into three quick steps so that they can be easily applied to other machine learning models without any changes.

In this particular tutorial, we will deploy a depth estimation model, an essential part of every 3D vision system.

Inspired? Let's go!

💡 Explore: You can also check out the tutorials on how to deploy a YOLOv5 model or how to deploy a DeOldify model.

Step 1: Develop a model

In the first step, we need to build and train a model so that we are able to perform inference. After that, we upload the code to a repository and, if necessary, the model weights to external storage.

A depth estimator takes an RGB image as input and outputs a depth image. The depth image contains information about the distance of the objects in the scene from the viewpoint, which is usually the camera taking the picture. This technology is mainly used in self-driving cars, shadow mapping in computer graphics, and robot-assisted surgery.

The two images below provide a clear illustration of depth estimation in practice.

Depth estimator input image (left) and output (right)

Building and training such a model is a very long process, especially when you want to obtain plausible results. Therefore, we will use the pre-trained MiDaS v2.1 model introduced by René Ranftl et al. in their paper, which describes a training scheme that enables mixing multiple datasets, even when their annotations are incompatible. MiDaS v2.1, for instance, was trained on 10 different datasets with multi-objective optimization.

In order to run the model, you don't need to copy all the files from the main repo, just the ones needed for inference. We have already done this for you, so you can copy them either from our repo or from the original one.

The last thing you need to do is place all of the code in a Git repository.

Step 2: Deploy depth estimator

In the second step, we will actually deploy the model.

The whole process of AI model deployment is highly influenced by the type of task and, consequently, by the evaluation metrics.

For instance, a cooking recommendation engine needs to deliver recipes best suited to your needs while you are on the website (in seconds!). In this case, the most important metric is latency, which needs to be very low. Therefore, for this task you would choose a deployment that works in real time.

However, when you build a super-resolution engine that improves the quality of a video, the main metric you care about is some form of image quality assessment. Batch deployment is a better choice here, because response time does not play a big role in this case.

By now you should have at least a grasp of how broad a topic AI model deployment is. In our case, we will go for real-time model deployment and use the Syndicai Platform to deploy the model easily, without having to take care of scalability, versioning, monitoring, and security ourselves.

Prepare a model

Our model is already trained and uploaded to the Git repository. Now we need to define how the model will accept input and return output once it runs as a webservice.

Thankfully, Syndicai does not require you to create a webservice or a Docker image. You only need to create two files, syndicai.py & requirements.txt, and place them in the main directory of your repository.

The first file, syndicai.py, contains the PythonPredictor Python class responsible for model prediction. In this case, both input and output are base64-encoded images, and the content of the file looks as follows.

import cv2
import numpy as np
import urllib.request
import matplotlib.pyplot as plt

import tensorflow as tf
import tensorflow_hub as hub

from utils import *


class PythonPredictor:

    def __init__(self, config):
        # limit GPU memory allocation at runtime to avoid running out of GPU memory
        gpus = tf.config.experimental.list_physical_devices('GPU')
        for gpu in gpus:
            # tf.config.experimental.set_memory_growth(gpu, True)
            tf.config.experimental.set_virtual_device_configuration(gpu,
                [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4000)])

        # load the MiDaS v2.1 model from TensorFlow Hub
        self.module = hub.load("https://tfhub.dev/intel/midas/v2_1_small/1", tags=['serve'])

    def predict(self, payload):
        # convert base64 to OpenCV format and scale to [0, 1]
        img = b64_to_image(payload["image_b64"]) / 255.0

        # resize to 256x256 and reorder to channels-first (1, 3, 256, 256)
        img_resized = tf.image.resize(img, [256, 256], method='bicubic', preserve_aspect_ratio=False)
        img_resized = tf.transpose(img_resized, [2, 0, 1])
        img_input = img_resized.numpy()
        reshape_img = img_input.reshape(1, 3, 256, 256)
        tensor = tf.convert_to_tensor(reshape_img, dtype=tf.float32)

        # run the model
        output = self.module.signatures['serving_default'](tensor)
        prediction = output['default'].numpy()
        prediction = prediction.reshape(256, 256)

        # resize back to the input resolution, normalize to [0, 255], and colorize
        prediction = cv2.resize(prediction, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC)
        depth_min = prediction.min()
        depth_max = prediction.max()
        img_out = (255 * (prediction - depth_min) / (depth_max - depth_min)).astype("uint8")
        heatmap_img = cv2.applyColorMap(img_out, cv2.COLORMAP_HOT)

        return image_to_base64(heatmap_img)
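
Note that syndicai.py imports the b64_to_image and image_to_base64 helpers from a local utils module, which you should copy from the repo along with the other inference files. If you prefer to write it yourself, a minimal sketch of those two helpers could look like the following (our approximation, not necessarily the exact code from the repo):

import base64

import cv2
import numpy as np


def b64_to_image(b64_string):
    # decode a base64 string into an RGB image (H x W x 3 uint8 array)
    img_bytes = base64.b64decode(b64_string)
    buffer = np.frombuffer(img_bytes, dtype=np.uint8)
    img_bgr = cv2.imdecode(buffer, cv2.IMREAD_COLOR)  # OpenCV decodes to BGR
    return cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)   # the model expects RGB


def image_to_base64(img):
    # encode an image (numpy array) as a base64 JPEG string
    success, encoded = cv2.imencode(".jpg", img)
    if not success:
        raise ValueError("Image encoding failed")
    return base64.b64encode(encoded).decode("utf-8")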

The second file, requirements.txt, is needed to recreate the model's environment. It consists of a list of libraries and their versions.

numpy==1.19.4
matplotlib==3.2.2
Pillow==7.0.0
opencv-python  # needed by the cv2 import in syndicai.py
tensorflow==2.4.0
tensorflow-hub==0.10.0
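
Before connecting the repo, you can also sanity-check the predictor locally. The short script below is our own quick test, not part of the official workflow; it assumes an input.jpg file next to the code and that the dependencies above are installed:

import base64

from syndicai import PythonPredictor

# build the payload the same way the webservice would receive it
with open("input.jpg", "rb") as f:
    payload = {"image_b64": base64.b64encode(f.read()).decode("utf-8")}

# instantiate the predictor and run a single prediction
predictor = PythonPredictor(config=None)
depth_b64 = predictor.predict(payload)

# save the returned base64 heatmap to disk
with open("depth.jpg", "wb") as f:
    f.write(base64.b64decode(depth_b64))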

Connect repo

After you push the additional files to your repo, you are ready to go with the deployment, which is just a matter of connecting the repo to the Syndicai platform.

You can use your own repository for this, or the one with Syndicai's sample models, which already contains a prepared depth estimation model.

In order to connect your repo, just go to https://syndicai.co/, log in, click New Model on the Overview page, and follow the steps in the form.

Connect your repo to Syndicai Platform

As soon as you finish, the platform will start building the infrastructure. You will have to wait a few minutes for the model to become Active.

Deploy a depth estimator via Syndicai in just a few clicks

If you see the blue Active badge next to the name of your model, it means that your model is deployed to production in a truly scalable way!

Step 3: Integrate depth estimator

Now you deserve applause!

After you have deployed your model, the next step is to access it using the REST API. You can do it either via the Platform or via the terminal.

In order to test it out quickly on the Platform, go to the model Overview page and paste the sample input below into the Run a model section.

{
    "image_b64": "/9j/4AAQSkZJRgABAQEASABIAAD/4gIcSUNDX1BST0ZJT..."
}

Remember that your model has to be Active in order to work! As the output, you should also get a base64-encoded image.

If you get a correct response, you are ready to go with the model integration. Go to the model Integrate page and use the code snippet to call the REST API from your website, mobile app, or any other platform.
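
If you prefer the terminal route, you can call the endpoint from a short script. The sketch below assumes a generic JSON-over-HTTPS call; the endpoint URL and authorization header are placeholders, so copy the real values from the snippet on your model's Integrate page:

import base64

import requests

# placeholder values - copy the real endpoint and key from your model's Integrate page
MODEL_URL = "https://<your-model-endpoint>"
API_KEY = "<your-api-key>"

# encode the input image as base64, exactly like the sample payload above
with open("input.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    MODEL_URL,
    json={"image_b64": image_b64},
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()

# the model returns a base64-encoded depth heatmap; decode it and save to disk
# (depending on the deployment, the body may be wrapped in JSON - adjust accordingly)
with open("depth.jpg", "wb") as f:
    f.write(base64.b64decode(response.text.strip('"')))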

For tutorials like this one, we have created a sample web app written in React that consumes deployed models and serves as a perfect example of how to showcase them. You can try it on the Syndicai Showcase page to get a feel for what we are talking about.

Run a depth estimator in a simple app

In addition, you can fork the repository with the showcase page, because the whole codebase is open source.

Conclusion

To sum up, you've done a great job!

In the above tutorial, you had a chance to learn how to deploy a depth estimation model. As you could see, the whole process does not require you to create a webservice, build a Docker image, or set up infrastructure. Syndicai does it for you so that you can iterate faster and focus on what's important - model building and training!

* * *

If you found this material helpful, have some comments, or want to share ideas for the next one - don't hesitate to drop us a line via Slack or email. We would love to hear your feedback!
