Let's explore a simple and fast way to deploy a Keras model at scale in minutes, without any Docker or Kubernetes setup. We also have a tutorial on how to deploy a PyTorch model.
Did you know that almost 90% of the Machine Learning courses available on the internet end with model training and never cover AI model deployment? A model that is trained but not deployed brings no value to the business. However, deployment is not an easy process and requires deep technical knowledge in this area.
Have you ever wondered whether there could be an easy and fast way to go from a trained model to an actual use case with the click of a button? And what if that process were production-ready?
You don't have to wonder. Syndicai takes care of all of the above with no configuration. Isn't that amazing?
In this article, we will show you how to deploy a Keras model in three simple steps.
For the purpose of this tutorial, we will pick a Computer Vision task and build a Face Mask Detector. It has been one of the most popular models over the last couple of months due to COVID-19.
The main goal of the algorithm is to detect faces and classify whether a person is properly wearing a mask or not. The whole model is computationally efficient and fast thanks to the MobileNetV2 architecture.
It is based on an inverted residual structure where the residual connections are between the bottleneck layers. As a whole, the MobileNetV2 architecture contains an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers.
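To give you a feel for what such a network looks like in Keras, here is a rough sketch of a classifier head placed on top of a MobileNetV2 backbone. This is an illustration only, not the exact training code from the repository; the layer sizes and the two-class softmax head are assumptions.

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D, Dense, Dropout, Flatten, Input
from tensorflow.keras.models import Model

# MobileNetV2 backbone pre-trained on ImageNet, without the top classifier
base = MobileNetV2(weights="imagenet", include_top=False,
                   input_tensor=Input(shape=(224, 224, 3)))

# small classification head: "mask" vs "no mask" (assumed layer sizes)
head = AveragePooling2D(pool_size=(7, 7))(base.output)
head = Flatten()(head)
head = Dense(128, activation="relu")(head)
head = Dropout(0.5)(head)
head = Dense(2, activation="softmax")(head)

model = Model(inputs=base.input, outputs=head)

# freeze the backbone so only the head is trained
for layer in base.layers:
    layer.trainable = False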
We have already prepared the Face Mask Detector model on our GitHub repository. You don’t have to do anything with the repository for now.
In the traditional workflow of Machine Learning model deployment, you need to go through several steps.
Syndicai takes care of all of those steps. You just need to connect the Git repository containing your model, and a REST API will be created automatically with one click. Moreover, Syndicai takes care of scaling the resources. The resulting API offers great flexibility because you can connect it to any device.
You can also try deploying a PyTorch model.
When deploying a model, you first need to add two additional files to your model repository: requirements.txt and syndicai.py.
requirements.txt - a file with all the libraries and frameworks needed to recreate the model's environment
tensorflow==1.15.2
keras==2.3.1
imutils==0.5.3
numpy==1.18.2
opencv-python==4.2.0.*
matplotlib==3.2.1
argparse==1.1
scipy==1.4.1
scikit-learn==0.23.1
pillow==7.2.0
syndicai.py - the main file with the PythonPredictor Python class responsible for model prediction.
import os, cv2
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from utils import url_to_image, b64_to_image, image_to_base64

args = {
    "image": "sample_data/out.jpg",
    "face": "face_detector",
    "model": "model/mask_detector.model",
    "confidence": 0.5,
}


class PythonPredictor:

    def __init__(self, config):
        # load our serialized face detector model from disk
        print("[INFO] loading face detector model...")
        prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
        weightsPath = os.path.sep.join([args["face"],
            "res10_300x300_ssd_iter_140000.caffemodel"])
        self.net = cv2.dnn.readNet(prototxtPath, weightsPath)

        # load the face mask detector model from disk
        print("[INFO] loading face mask detector model...")
        self.model = load_model(args["model"])

    def predict(self, payload):
        # get the image from a URL, or fall back to a base64-encoded image
        try:
            image = url_to_image(payload["image_url"])
        except:
            image = b64_to_image(payload["image_b64"])
        orig = image.copy()
        (h, w) = image.shape[:2]

        # construct a blob from the image
        blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
            (104.0, 177.0, 123.0))

        # pass the blob through the network and obtain the face detections
        print("[INFO] computing face detections...")
        self.net.setInput(blob)
        detections = self.net.forward()

        # loop over the detections
        for i in range(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated with
            # the detection
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by ensuring the confidence is
            # greater than the minimum confidence
            if confidence > args["confidence"]:
                # compute the (x, y)-coordinates of the bounding box for
                # the object
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                (startX, startY, endX, endY) = box.astype("int")

                # ensure the bounding boxes fall within the dimensions of
                # the frame
                (startX, startY) = (max(0, startX), max(0, startY))
                (endX, endY) = (min(w - 1, endX), min(h - 1, endY))

                # extract the face ROI, convert it from BGR to RGB channel
                # ordering, resize it to 224x224, and preprocess it
                face = image[startY:endY, startX:endX]
                face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
                face = cv2.resize(face, (224, 224))
                face = img_to_array(face)
                face = preprocess_input(face)
                face = np.expand_dims(face, axis=0)

                # pass the face through the model to determine if the face
                # has a mask or not
                (mask, withoutMask) = self.model.predict(face)[0]

                # determine the class label and color we'll use to draw
                # the bounding box and text
                label = "Mask" if mask > withoutMask else "No Mask"
                color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

                # include the probability in the label
                label = "{}: {:.2f}%".format(
                    label, max(mask, withoutMask) * 100)

                # display the label and bounding box rectangle on the output
                # frame
                cv2.putText(image, label, (startX, startY - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
                cv2.rectangle(
                    image, (startX, startY), (endX, endY), color, 2)

        return image_to_base64(image)
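If you want to sanity check the predictor before connecting the repository, a minimal local test could look like the sketch below. The script name and sample image URL are only examples, and it assumes you run it from the repository root so that utils.py, the face_detector folder, and model/mask_detector.model are available.

# local_test.py - hypothetical helper script for a quick local sanity check
from syndicai import PythonPredictor

predictor = PythonPredictor(config={})

payload = {
    "image_url": "https://bsmedia.business-standard.com/_media/bs/img/article/2020-07/12/full/1594519569-3012.JPG"
}

# predict() returns the annotated image encoded as a base64 string
result_b64 = predictor.predict(payload)
print(result_b64[:80], "...")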
These two files are necessary for Syndicai to recreate the environment and to know which function to use for prediction.
When the GitHub repository with all necessary files is ready, we can proceed to connect it to the Syndicai platform.
In order to do that, go to the Syndicai Platform, log in, click New Model on the Overview page, and follow the steps in the form. As soon as you finish, the infrastructure will start building. You will need to wait a couple of minutes for the model to become Active.
For more information about the model preparation or deployment process, go to the Syndicai Docs.
You've done it!
Your model is deployed, and your REST API is ready. To perform a quick test, copy and paste the sample input below on the model overview page in the Syndicai Platform.
{
    "image_url": "https://bsmedia.business-standard.com/_media/bs/img/article/2020-07/12/full/1594519569-3012.JPG"
}
Remember that your model needs to be Active in order to work!
If everything works fine, you can now connect the API to any device or service. As an example, you can go to the Showcase page to explore sample implementations.
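For instance, a minimal Python client could call the endpoint as sketched below. The URL is a placeholder (copy the real one, along with any required authentication, from your model page), and the assumption that the response body contains the base64-encoded image follows from what the predictor returns.

import requests

# placeholder URL - replace with the endpoint shown on your model page
MODEL_URL = "https://<your-model-endpoint>/predict"

payload = {
    "image_url": "https://bsmedia.business-standard.com/_media/bs/img/article/2020-07/12/full/1594519569-3012.JPG"
}

response = requests.post(MODEL_URL, json=payload)
response.raise_for_status()

# assumed: the response body is the annotated image as a base64-encoded string
image_b64 = response.text
with open("prediction_b64.txt", "w") as f:
    f.write(image_b64)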
You have just seen how to deploy a Keras model in minutes. Syndicai allows you to deliver production-ready models without worrying about scalability, monitoring, or security. It is simply the shortest path from a trained model to an actual use case.