Learn how to deploy a face blurring algorithm at scale with the Syndicai platform, without any configuration or infrastructure setup.

Introduction

Our face is the most fundamental and highly visible element of our identity. People recognize us when they see our face or a photo of it. According to the GDPR, the European Union's data protection regulation, face images are categorized as sensitive data and need to be protected.

However, protecting visual data is not trivial, and most of us are not really aware of how important it is. When we talk about private data, we usually think about GPS traces and cookies, while images barely come to mind. For instance, when playing with Facebook's facial filters, hardly anyone cares that those videos are stored somewhere; we only think about how good we look at that moment. ;)

As the amount of processed data increases, we have to think about our privacy. From a technological point of view, there are tools and algorithms that help keep data private when it is used by AI systems or consumed by marketing platforms. One of them is the face blurring algorithm that we will explore in the following tutorial.

After going through the development, deployment, and integration phases in the following article, you will have a basic understanding of how to easily deploy a face blurring model into production.

Let's start!

💡 Explore: If you are interested in other AI models, you can also explore the tutorials on how to deploy a YOLOv5 model or how to deploy a DeOldify model.

Step 1: Develop a face blurring model

The main goal of this step is to build and train a model, in our case a face blurring algorithm, and then upload the code to GitHub.

The idea of the algorithm is to anonymize a face by blurring it, thereby making it impossible to identify the person. Such an algorithm could be applied to privacy and identity protection in public/private areas, protecting children online, photojournalism and news reporting, and many more. The model takes an image or video with people as input, detects the faces, and draws a blurred rectangle over each face so that the person is hard to recognize.

Face Blurring input image (left) and output (right) | 2019 Oscars Winners

In this tutorial we will use the OpenCV implementation written by Adrian Rosebrock, which uses Gaussian blur. The whole pipeline is pretty straightforward: first we perform face detection, then crop the region containing the face, apply the blur, and finally store the blurred face back in the original image.
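To give a feeling for the blurring step itself, below is a minimal sketch of the "simple" Gaussian-blur idea used in that implementation. The blur_face helper is illustrative only (the tutorial's own function is called anonymize_face_simple and lives in the pyimagesearch package); the kernel size simply scales with the size of the detected face region.

import cv2

def blur_face(face, factor=3.0):
    # the kernel size scales with the face region: the larger the face,
    # the larger the Gaussian kernel, so the blur stays effective
    (h, w) = face.shape[:2]
    kW = max(1, int(w / factor))
    kH = max(1, int(h / factor))

    # cv2.GaussianBlur requires odd kernel dimensions
    if kW % 2 == 0:
        kW -= 1
    if kH % 2 == 0:
        kH -= 1

    return cv2.GaussianBlur(face, (kW, kH), 0)

Face detection itself is handled by a pre-trained OpenCV DNN face detector, as you will see in the full script in Step 2.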

You can either follow the steps in the original tutorial or use the ready-made code to run the model.

Since we don't have to train anything, our model is ready to go. We just need to upload the code to a Git repository before moving to the next step.

Step 2: Deploy a face blurring model

The model is ready, so in this step we will prepare it for deployment and connect the repository to the platform.

AI model deployment is highly dependent on the use case. In this tutorial we will deploy the face blurring model using the Syndicai platform, which allows us to easily deliver the model to production in a secure and scalable way.

💡 Explore: Check out the article about AI model deployment if you want to learn about different types of AI model delivery to production.

Prepare a model

Our model is already trained and uploaded to the Git repository. Now we need to define how the model will interact with input/output data when deployed as a webservice.

However, we will not create the webservice ourselves, because Syndicai will do it for us. The only thing we need to do is create two additional files, syndicai.py and requirements.txt, and place them in the main directory of the repository.
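If you follow the pyimagesearch implementation, the repository could end up looking roughly like the layout below. The exact folder names are an assumption based on the script used in this tutorial; the only thing Syndicai strictly needs is syndicai.py and requirements.txt in the main directory.

face_detector/
    deploy.prototxt
    res10_300x300_ssd_iter_140000.caffemodel
pyimagesearch/
    face_blurring.py
syndicai.py
requirements.txt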

The first file, syndicai.py, is a Python script that contains the PythonPredictor class. It is responsible for taking the input, passing it through the model, and sending back the response.

In this case, both the input and the output are base64-encoded images, and the content of the file looks as follows.

import os
import io
import base64
import cv2
import numpy as np

from PIL import Image
from imageio import imread
from pyimagesearch.face_blurring import anonymize_face_pixelate
from pyimagesearch.face_blurring import anonymize_face_simple


args = {
    "face": "./face_detector",
    "method": "simple",
    "blocks": 20,
    "confidence": 0.5
}


class PythonPredictor:

    def __init__(self, config):
        # load our serialized face detector model from disk
        print("[INFO] loading face detector model...")
        prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
        weightsPath = os.path.sep.join([args["face"],
            "res10_300x300_ssd_iter_140000.caffemodel"])
        self.net = cv2.dnn.readNet(prototxtPath, weightsPath)

    def predict(self, payload):
        # decode the base64 input image, clone it, and grab the image
        # spatial dimensions
        img = imread(io.BytesIO(base64.b64decode(payload["base64"])))  # numpy array (height, width, 3)
        image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        orig = image.copy()
        (h, w) = image.shape[:2]

        # construct a blob from the image
        blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
            (104.0, 177.0, 123.0))

        # pass the blob through the network and obtain the face detections
        print("[INFO] computing face detections...")
        self.net.setInput(blob)
        detections = self.net.forward()

        # loop over the detections
        for i in range(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated with
            # the detection
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by ensuring the confidence is
            # greater than the minimum confidence
            if confidence > args["confidence"]:
                # compute the (x, y)-coordinates of the bounding box for
                # the object
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                (startX, startY, endX, endY) = box.astype("int")

                # extract the face ROI
                face = image[startY:endY, startX:endX]

                # check to see if we are applying the "simple" face
                # blurring method
                if args["method"] == "simple":
                    face = anonymize_face_simple(face, factor=3.0)

                # otherwise, we must be applying the "pixelated" face
                # anonymization method
                else:
                    face = anonymize_face_pixelate(face,
                        blocks=args["blocks"])

                # store the blurred face in the output image
                image[startY:endY, startX:endX] = face

        # convert back to RGB, encode the result as PNG, and return it
        # as a base64 string
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        img = Image.fromarray(image)

        im_file = io.BytesIO()
        img.save(im_file, format="PNG")
        im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")

        return im_bytes

The second file, requirements.txt, is required to recreate the model environment. It consists of a list of libraries and their versions. Keep in mind that every library imported in syndicai.py and not available in the base environment (for example Pillow and imageio here) has to be listed as well.

opencv-python==4.2.0.34
numpy==1.18.4
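Before connecting the repository, you may want to sanity-check the predictor locally. Below is a minimal sketch, assuming you run it from the repository root and that a test photo named test.jpg (an arbitrary example name) is placed next to it:

import base64

from syndicai import PythonPredictor

# encode a local test image the same way the webservice input is encoded
with open("test.jpg", "rb") as f:
    payload = {"base64": base64.b64encode(f.read()).decode("utf-8")}

# run the predictor exactly as the webservice would
predictor = PythonPredictor(config=None)
result = predictor.predict(payload)

# decode the returned base64 string and save the blurred image
with open("blurred.png", "wb") as f:
    f.write(base64.b64decode(result))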

Connect a git repo

When all the necessary files are placed in the repo, we are ready for the actual deployment.

You can use your own repository for this, or the Syndicai sample models repository, which already contains a prepared face blurring model.

In order to connect your repo, just go to https://app.syndicai.co/, log in, click New Model on the Overview page, and follow the steps in the form.

Connect your git repository in order to deploy a model on Syndicai

As soon as you finish, the infrastructure will start building. You will have to wait a few minutes for the model to become Active.

Face Blurring algorithm deployed on Syndicai

If you see the blue Active badge next to the name of your model, it means that your model is deployed to production in a scalable way!

As you can see, using Syndicai does not require you to create a webservice, build a Docker image, or set up any configuration files for the infrastructure.

It's amazing, isn't it?

📚 Learn: For more information about the model preparation or deployment process go to Syndicai Docs.

Step 3: Integrate a model

After the model has been deployed, you can access it using the REST API. This can be done via the platform or the terminal.

In order to test it quickly on the platform, go to the model's Overview page and paste the sample input below in the Run a model section.

{
    "base64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEB..."
}

Remember that your model has to be Active in order to work! As the output, you should get a base64-encoded image with blurred faces.

If you get a correct response, you are ready to move on to the model integration. Go to the model's Integrate page and use the code snippet to call the REST API from your website, mobile app, or any other platform.
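As a rough illustration, calling the deployed model from a Python script could look like the sketch below. The URL and the authorization header are placeholders only; copy the real endpoint and credentials from your model's Integrate page, and adjust the response handling to match the snippet shown there.

import base64
import requests

MODEL_URL = "https://..."   # placeholder: use the endpoint from the Integrate page
API_KEY = "..."             # placeholder: use your credentials, if required

# build the payload: a base64-encoded photo with faces
with open("photo.jpg", "rb") as f:
    payload = {"base64": base64.b64encode(f.read()).decode("utf-8")}

response = requests.post(MODEL_URL, json=payload,
                         headers={"Authorization": API_KEY})
response.raise_for_status()

# the response contains a base64-encoded image with blurred faces;
# the exact wrapping may differ, so inspect the Integrate page snippet
with open("blurred.png", "wb") as f:
    f.write(base64.b64decode(response.json()))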

For the purpose of this tutorial, we have already created a template with a sample React app. It lets you easily interact with your deployed model. You can try it on the Syndicai Showcase page and get a feel for the experience.

Face Blurring sample demo | showcase.syndicai.co

In addition, you can fork the repository with the showcase page, since the whole codebase is open source.

Conclusion

In summary, in the above tutorial you had a chance to deploy a face blurring algorithm on the Syndicai platform without any infrastructure setup or webservice configuration.

The main goal of this tutorial was to show you a faster and simpler way of delivering AI models to production in a scalable way.

* * *

If you found this material helpful, have comments, or want to share ideas for the next one, don't hesitate to drop us a line via Slack or email. We would love to hear your feedback!
