Deploy a DeOldify model at scale in a few simple clicks, without setting up a web service, Docker, Kubernetes, Google Cloud, or AWS.
Did you know that as early as the mid-19th century, photographers and artists, in an attempt to create more realistic images, started hand-coloring monochrome photographs? Did you know that the first person to try that technique was Johann Baptist Isenring, who produced colored daguerreotypes using a mixture of gum arabic and pigments? Did you know that the best-selling producer of hand-colored photographs of all time was Wallace Nutting, a New England minister?
These are just a few interesting facts confirming that the history of hand-coloring is very rich, and that the technique itself was always more of an art than a service. However, things changed with the advent of artificial intelligence.
Deep learning algorithms far outperform the old methods (photo below): they can not only colorize images but also restore them, in minutes!
That is why, inspired by such amazing results, we decided to show you how to become an artist yourself. In this tutorial, we want to show you how to deploy one of the top photo-colorizing algorithms, so that you can later create a web application or demo showcase from it.
Let's start!
💡 Explore: If you are interested in AI model deployment, please also check out the tutorial on how to deploy a YOLOv5 model.
In order to deploy a model, we need to go through three quick steps.
In the first step, we need a trained model so that inference is possible. For simplicity, we will use a pretrained version of the DeOldify model, introduced in 2018. It is a state-of-the-art algorithm for colorizing black-and-white images that combines Self-Attention GANs with the Two Time-Scale Update Rule.
You can see the model input image and output below, and the results are astonishing!
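If you want to sanity-check the pretrained colorizer locally before deploying, a minimal sketch could look like the one below. It assumes you have cloned the DeOldify repository and placed the ColorizeArtistic_gen.pth weights in its models/ folder; the sample image URL is the same one we will use later in this tutorial.
# Minimal local test of the pretrained DeOldify colorizer
# (assumes the DeOldify repo is on the path and ColorizeArtistic_gen.pth
#  sits in its ./models directory)
from deoldify import device
from deoldify.device_id import DeviceId
device.set(device=DeviceId.GPU0)  # must be set before importing the visualizer

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=True)
colorizer.plot_transformed_image_from_url(
    url="https://i0.wp.com/www.brainpickings.org/wp-content/uploads/2013/05/einstein11.jpg",
    render_factor=19)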
Once the model is trained, it needs to be uploaded to a GitHub repository. We have already prepared one for you under the following URL, so you can just fork it and play with the model.
The weights of the trained model are ready and the code is already on GitHub, so we can jump to the next step.
In the next step, we will deploy the model.
When deploying a model, you have a couple of options for how to do it properly. In many situations, that decision is highly influenced by the particular use case. For instance, in our case we just want to perform simple inference in the cloud. The most popular way to do it is by creating a web service, building a Docker image, and then setting up the infrastructure in the cloud. However, such an approach has a lot of disadvantages, like difficult resource management, no versioning, or a lack of basic security. Therefore, we will use a solution that significantly simplifies and speeds up the whole process: the Syndicai platform.
💡 Explore: If you are interested in the traditional approach to AI model deployment, please go to the deploy ML with Flask article.
Apart from putting your model in the GitHub repository, you have to upload two additional files (syndicai.py and requirements.txt) to the main directory of your model.
The first file, syndicai.py, is the main file that contains the PythonPredictor Python class responsible for model prediction. In this case, the model takes the URL of a black-and-white image as input and outputs the colorized image as a base64-encoded string.
import io
import os
import base64
import urllib.request
import warnings

# NOTE: This must be the first call in order to work properly!
from deoldify import device
from deoldify.device_id import DeviceId
device.set(device=DeviceId.GPU0)

import torch
if not torch.cuda.is_available():
    print('GPU not available.')

import fastai
from deoldify.visualize import *

warnings.filterwarnings("ignore", category=UserWarning, message=".*?Your .*? set is empty.*?")

# Pretrained DeOldify weights and the local path they are stored under
url = "https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth"
model_path = "./models/ColorizeArtistic_gen.pth"


class PythonPredictor:
    def __init__(self, config):
        # Download the pretrained weights on the first run
        if not os.path.exists(model_path):
            urllib.request.urlretrieve(url, model_path)
        self.colorizer = get_image_colorizer(artistic=True)

    def predict(self, payload):
        # Rendering quality; reasonable values range from 7 to 40
        render_factor = 19

        # Colorize the image downloaded from the given URL
        img = self.colorizer.plot_transformed_image_from_url(
            url=payload["url"], render_factor=render_factor)

        # Return the result as a base64-encoded PNG
        im_file = io.BytesIO()
        img.save(im_file, format="PNG")
        im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")
        return im_bytes
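On the client side, the returned base64 string can be decoded back into a regular image. Below is a minimal sketch using Pillow; response_b64 is just an illustrative variable holding the string returned by predict.
import base64
import io
from PIL import Image

# response_b64 holds the base64 string returned by predict()
image = Image.open(io.BytesIO(base64.b64decode(response_b64)))
image.save("colorized.png")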
The second file, requirements.txt, is needed by the Syndicai platform to recreate the model's environment. It consists of a list of libraries and their versions.
fastai==1.0.51
wandb
tensorboardX==1.6
ffmpeg
ffmpeg-python==0.1.17
youtube-dl>=2019.4.17
jupyterlab
opencv-python>=3.3.0.10
pillow==6.2.2
We are halfway through the journey of our model deployment. Now you will deploy the model by connecting the repo to the Syndicai platform.
In order to connect your repo, just go to https://syndicai.co/, log in, click New Model on the Overview page, and follow the steps in the form. As soon as you finish, you will notice that the infrastructure starts building, so you will have to wait a few minutes for the model to become Active.
Seriously, that's all?
Yes, indeed. Syndicai creates a web service, packages the model, and hosts it in a serverless way, so you don't have to take care of scalability and model management.
📚 Learn: For more information about the model preparation or deployment process, go to the Syndicai Docs.
Congrats!
Your model is deployed, so now you can access it through the REST API, either via the Platform or via the terminal.
In order to test it out quickly on the Platform, go to the model's Overview page and paste the sample input below into the Run a model section.
{
"url": "https://i0.wp.com/www.brainpickings.org/wp-content/uploads/2013/05/einstein11.jpg"
}
Remember that your model has to be Active in order to work!
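If you prefer the terminal or a script over the Platform UI, the same payload can be sent to the model's REST endpoint. The sketch below uses only Python's standard library; the endpoint URL is a placeholder, so copy the real one (and any required authentication headers) from the platform.
import json
import urllib.request

# Placeholder endpoint - replace it with the URL provided by the platform
endpoint = "https://<your-model-endpoint>"

payload = json.dumps({
    "url": "https://i0.wp.com/www.brainpickings.org/wp-content/uploads/2013/05/einstein11.jpg"
}).encode("utf-8")

request = urllib.request.Request(
    endpoint, data=payload, headers={"Content-Type": "application/json"})

with urllib.request.urlopen(request) as response:
    colorized_b64 = response.read().decode("utf-8")  # base64-encoded PNG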
After that, if everything works fine, you can go to the model's Integrate page and create a showcase demo. For example, we have created Colorizer, a React web app that helps you interact with your model in a very simple way. You just paste the URL of an image and hit the Run Model button! The whole code is open-sourced, so you can fork the repo and play with it.
Colorizer is a demo showcase for DeOldify model
In conclusion, this tutorial let you explore a fast and easy way to deploy the DeOldify model. As you could see, creating a web service, building a Docker image, and setting up the infrastructure are not necessary to deliver a model to production.