Deploy YOLOv5 in a few simple clicks

Michał Zmysłowski

Part 2 of a two-part series that shows you how to deploy an already trained YOLOv5 model.


This is the second part of the series in which we show you how to take your AI startup from zero to one. In the previous blog post, we learned how to train a YOLOv5 model for the object detection task. We now have a GitHub repository with an AI model and trained weights ready.

In this part, we will learn how to deploy the YOLOv5 model to production. By deployment, we mean that you will be able to connect your model to any service or device. Concretely, you will end up with a REST API connected to a model running in the cloud.

Deployment Process

The Machine Learning Operations (MLOps) process usually consists of creating a web service with Flask, recreating the environment in Docker, setting up the infrastructure, and deploying the model to a cloud provider like Google Cloud or AWS.
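For context, the Flask step of that traditional approach might look like the sketch below. This is an illustration only, not Syndicai code; the route name and the response shape are assumptions.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # In a real service you would load the model once at startup
    # and run inference on the decoded input here.
    data = request.get_json()
    return jsonify({"received_keys": list(data.keys())})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Even this minimal service still leaves Dockerizing, scaling, and monitoring entirely to you, which is exactly the burden the rest of this tutorial avoids.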

Comparison of AI model deployment between the traditional approach and the Syndicai approach

Each of these steps requires highly technical knowledge. Even if you succeed in deploying an AI model after a couple of hours, you will most likely find that it works poorly in production. Your solution won't scale with increased traffic, and your cloud costs won't be optimized.

In this tutorial, I will show you how to streamline this process and deploy a PyTorch model with one tool called Syndicai in a few simple clicks.

Model preparation

In this section, we will prepare a trained YOLOv5 model to connect it to the Syndicai platform later. In case you missed it, we showed how to train a YOLOv5 model in the previous blog post.


The first thing we need is the model's weights, which we saved at the end of training. You need to upload them to a service that lets you easily and safely download them later from Python code (e.g. Google Drive or Dropbox). For the purpose of this tutorial, I will use the weights published by the authors of the YOLOv5 model:
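As a minimal sketch of the download step, fetching hosted weights from Python is a one-liner with the standard library. The URL and filename below are placeholders, not real hosting locations.

```python
import urllib.request

# Placeholder link - replace with the direct-download URL of your
# hosted weights (Google Drive, Dropbox, a GitHub release, etc.).
weights_url = "https://example.com/yolov5s.pt"
local_path = "yolov5s.pt"

# The actual download call (commented out so the sketch stays offline):
# urllib.request.urlretrieve(weights_url, local_path)
```

Note that for Google Drive you need a direct-download link, not the usual sharing link.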

GitHub repository

Now, we need a GitHub repository with the AI model. Normally, we would start a new project, but since the model is already on GitHub, we can fork the repo using the “Fork” button in the top-right corner. This copies all the code to our account.

We already have the repository, but we have to spice it up a little so that the Syndicai platform can recognize how it works:


The first file – requirements.txt – is already in the forked repository. It contains all the packages that must be installed for the model to work.

# base ----------------------------------------

# logging -------------------------------------
# wandb

# plotting ------------------------------------

# export --------------------------------------
# coremltools==4.0
# onnx>=1.8.0
# scikit-learn==0.19.2  # for coreml quantization

# extras --------------------------------------
thop  # FLOPS computation
pycocotools>=2.0  # COCO mAP

Next, you will need a file containing a class PythonPredictor with a constructor and one function, “predict”. As the name suggests, this instructs the Syndicai platform on how to make a prediction.

The constructor – __init__ – takes one argument, config. The argument is not important to us right now, but it is required nevertheless. The constructor is also the best place to download and initialize your weights:

def __init__(self, config):
    urllib.request.urlretrieve("", "")
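Putting both pieces together, the file's skeleton might look like this. This is a sketch: the weights URL, local filename, and attribute names are placeholders, and the real predict body is shown later in the tutorial.

```python
import urllib.request


class PythonPredictor:
    """Class the platform instantiates once when the model is deployed."""

    def __init__(self, config):
        # config is supplied by the platform; we don't need it here.
        # Placeholder URL/filename - point these at your hosted weights.
        self.weights_url = "https://example.com/best.pt"
        self.weights_path = "best.pt"
        # urllib.request.urlretrieve(self.weights_url, self.weights_path)

    def predict(self, payload):
        # Decode input, run detection, encode output.
        raise NotImplementedError
```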

Next, let’s take a closer look at the predict function. It has one required argument, payload, which is a dict. The REST API takes a JSON body, which is converted into a dict and passed to the “predict” function.

Example input:

{"base64": "<base64-encoded image>"}

Since we are operating on images, we can encode them in base64 format to send them to the model; we could also pass a link to an image instead.
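For illustration, building such a payload on the client side could look like this. The image bytes below are a stand-in for a real file read with `open(path, "rb")`.

```python
import base64
import json

# Stand-in for real image bytes read from disk.
image_bytes = b"\x89PNG\r\n\x1a\nfake image data"

# Encode to base64 text and wrap it in the JSON body the API expects.
encoded = base64.b64encode(image_bytes).decode("utf-8")
payload = json.dumps({"base64": encoded})
```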

Regarding the output of the predict function: it should be a base64-encoded string of the resulting image, as you can see at the end of the code below.

We removed unnecessary parts of the script from the original repo and kept only the code responsible for detecting objects in an image. Below we show only the part responsible for unpacking the input and packing the output. You can find the full code in the repo:

def predict(self, payload):
    """ Model Run function """

    # Decode the base64 input and save it as a temporary image file
    im = Image.open(io.BytesIO(base64.b64decode(payload["base64"])))
    im.save('image.png', 'PNG')

    # -- Model specific code --

    # Convert the annotated output array back to an image
    img = Image.fromarray(im0)

    # Encode the resulting image as a base64 string
    im_file = io.BytesIO()
    img.save(im_file, format="PNG")
    im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")

    return im_bytes
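On the receiving side, the returned string can be decoded back into an image file. Here is a minimal round-trip sketch that uses stand-in bytes instead of a real PNG.

```python
import base64

# Stand-in for the base64 string returned by predict().
response_b64 = base64.b64encode(b"fake png bytes").decode("utf-8")

# Reverse the encoding and write the image to disk.
with open("result.png", "wb") as f:
    f.write(base64.b64decode(response_b64))
```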


The GitHub repository is now ready to be connected to the Syndicai platform. All that is left is to register on the Syndicai website, log in, and click “Add model”. You will then need to paste the link to your GitHub repository and name the project. As soon as you finish these steps, the platform will start deploying the model.

Connect a git repo to deploy a model

To monitor the status of the deployment process, go to the “Builds” section. You will need to wait a couple of minutes until you see the “Active” status next to your model’s name.

To test out the integration, go to the “Overview” section and add your input in the form below “Run a model”:

Run a REST API request
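The same request can be sent programmatically; here is a sketch using Python's standard library. The endpoint URL is a placeholder - copy the real one from your model's page.

```python
import json
import urllib.request

# Placeholder endpoint URL - replace with your deployed model's URL.
endpoint = "https://example.com/api/model/run"

body = json.dumps({"base64": "<base64-encoded image>"}).encode("utf-8")
req = urllib.request.Request(
    endpoint,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment with a real URL
#     result = resp.read()
```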


In this tutorial, we’ve learned how to deploy a YOLOv5 model quickly using the Syndicai platform. We were able to streamline a process that would normally require specialized DevOps skills. Moreover, the resulting solution is scalable, secure, and cost-efficient. This two-part series has taken us from zero to one.