How to use the ADLINK Model Manager and models


The ADLINK Model Manager enables users to upload models that can be installed on discovered inferencing engines. When a model is uploaded, it is stored in the selected model manager on a local Data River instance.

Uploading a model

A model can be uploaded from the Vision tab of the ADLINK Edge Profile Builder. The Vision tab is separated into two sections: discovered inference engines and discovered model managers.

To get you started, we have provided three models that you can download and use to experiment with your Vizi-AI:

Face Detection
This model detects faces and is best used when operating a USB web cam with your Vizi-AI. Instructions for configuring your Vizi-AI to use a web cam can be found here: https://goto50.ai/tutorial/How-to-show-a-web-cam-output-instead-of-a-video-on-Vizi-AI
Download: https://vizi-ai-ppa.s3.eu-west-2.amazonaws.com/models/openvino/face-detection.tar.gz

Frozen Inference Graph
This is the model that Vizi-AI provides you out of the box. It is an object detection model.
Download: https://vizi-ai-ppa.s3.eu-west-2.amazonaws.com/models/openvino/frozen_inference_graph.tar.gz

Pedestrian Detection
This model detects pedestrians; you can use it either with your webcam or with the generic video we provide.
Download: https://vizi-ai-ppa.s3.eu-west-2.amazonaws.com/models/openvino/pedestrian-detection-adas-0002.tar.gz
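If you prefer to fetch these archives with a script rather than a browser, a minimal Python sketch follows. The URLs are the ones listed above; the "models" destination folder is an assumption.

```python
import urllib.request
from pathlib import Path

# Model archives listed above in this guide.
MODEL_URLS = [
    "https://vizi-ai-ppa.s3.eu-west-2.amazonaws.com/models/openvino/face-detection.tar.gz",
    "https://vizi-ai-ppa.s3.eu-west-2.amazonaws.com/models/openvino/frozen_inference_graph.tar.gz",
    "https://vizi-ai-ppa.s3.eu-west-2.amazonaws.com/models/openvino/pedestrian-detection-adas-0002.tar.gz",
]

target = Path("models")  # destination folder (an assumption)
target.mkdir(exist_ok=True)

for url in MODEL_URLS:
    dest = target / url.rsplit("/", 1)[-1]
    print(f"Downloading {url} -> {dest}")
    urllib.request.urlretrieve(url, str(dest))  # saves the .tar.gz as-is
```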

Getting Started

Here is a short video that details how to work with the model manager; for a more in-depth, step-by-step guide, I would suggest reading this article in full below.

To get started, ensure you are in the Vision tab and click the green ‘+’ icon below.

This will launch the Upload Model dialogue (the steps below assume you have followed the instructions on packaging your model correctly):

To get started, select the default “template-model-manager”, then choose the model to upload from your file system. We have provided three default models, listed at the top of this guide.

Navigate within your file system to find the model that you wish to add to your model manager. Select it and click open.

The next attribute to set is the Precision mode. This should be set to the precision mode that was used when the model was optimized in your tool of choice, or your supplier's tool of choice.

Typically, when working with third-party models, this information is not readily apparent. To find it, please see the next section.

To find the details needed to add your model and use it successfully, browse to where your model is stored using a file explorer and open it.

If you have structured your model correctly, you will see a .tar file.

Double-click to open the .tar file.

Inside this tar file is a series of files that the model and the OpenVINO engine utilize. To find the precision type we require, open the .xml file.

Located within this file is the precision type. In this case the value is FP32 (FP stands for floating point), so the Precision mode to select is FLOAT32.
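Rather than clicking through the archive, you can also list its contents and pull out the precision programmatically. The sketch below assumes an IR-style .xml in which elements carry a precision="FP32"-style attribute, as in the file shown above; the archive name is an example.

```python
import tarfile
import xml.etree.ElementTree as ET

ARCHIVE = "frozen_inference_graph.tar.gz"  # example path to your model archive

with tarfile.open(ARCHIVE, "r:gz") as tar:
    # List every file the package contains (.xml, .bin, labels file, ...).
    names = tar.getnames()
    print("\n".join(names))

    # Parse the network description and collect any precision attributes.
    xml_name = next(n for n in names if n.endswith(".xml"))
    root = ET.parse(tar.extractfile(xml_name)).getroot()
    precisions = {el.get("precision") for el in root.iter() if el.get("precision")}
    print("Precision types found:", precisions)
```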

In the Upload Model dialogue, you will also be required to fill in the “Labels file” and “Name (Model ID)” fields; in this example they are “coco_labels.json” and “frozen_inference_graph” respectively.

It is very important to spell the labels file correctly and to enter the correct name for the model files (minus any file extension).
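If in doubt, the Name (Model ID) is simply the model file name with its extension stripped, which you can check with a one-liner (file names here are the examples from above):

```python
from pathlib import Path

# The model file name minus its extension is the Model ID.
print(Path("frozen_inference_graph.bin").stem)  # -> frozen_inference_graph
# The labels file is entered with its extension intact.
print(Path("coco_labels.json").name)            # -> coco_labels.json
```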

As you can see from the example here, compared with the image above, I have entered the information correctly.

Click “Upload” to finish and upload the model to the model manager.

When uploading larger files, it is not uncommon for the screen to appear to do nothing whilst the file is uploaded to the model manager; this issue will be resolved in a future release.

Once the model has successfully uploaded, you will see it appear on the right-hand side of your screen, and a small status bar will appear temporarily at the bottom of the screen.

As we are already running the frozen_inference_graph model on the OpenVINO engine, we should add a second model to show how to apply a new model to the engine's inference stream.

Again, click the green ‘+’ in the top right-hand corner of the screen and you will be presented with the Upload Model dialogue. Select the default model manager, “template-model-manager”, and choose to add your file.

For this example we will utilize the pedestrian-detection model. Select the model as shown below and click open.

As shown above, all of this information can be found within your model's .tar.gz file. Enter the details shown below and click “Upload”.

You will now see your pedestrian detection model added to the model manager.

To apply your pedestrian detection model, use your mouse to drag the model from the Model Manager to the engine inference stream you wish to apply it to.

You will then be prompted to provide some information that the OpenVINO engine will use to run the model:

  1. Stream – which stream you want to apply this model to. In this instance there should only be one stream available, but in future there could be several.
  2. Hardware – the options within this category are listed below, but we recommend using AUTO where possible.
    1. AUTO
    2. CPU
    3. GPU
    4. MYRIAD
  3. Threshold – the confidence threshold that an inference result must match or exceed to be produced (see the sketch after this list). The default is 0.5, on a scale of 0 to 1.
  4. Reload – this is crucial. The default setting is “false”, which simply sends the model to the OpenVINO engine without telling it to load the model, just storing it for later. Changing it to “true” will provide the desired outcome of changing the active model within the engine.
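To make the Threshold setting concrete, here is a minimal sketch of the filtering it implies: only detections whose confidence meets or exceeds the threshold are reported. The detection format shown is an illustrative assumption, not the engine's actual output schema.

```python
# Illustrative only: this detection structure is an assumption,
# not the OpenVINO engine's real output format.
detections = [
    {"label": "person", "confidence": 0.91},
    {"label": "person", "confidence": 0.48},
    {"label": "car",    "confidence": 0.62},
]

THRESHOLD = 0.5  # the default in the Apply Model dialogue, on a 0-1 scale

kept = [d for d in detections if d["confidence"] >= THRESHOLD]
print(kept)  # the 0.48 "person" detection is suppressed
```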

Once you are happy that you have configured this correctly, click “Apply” to apply the model to the engine.

Please note: the frozen_inference_graph model is 60 MB and can take a few minutes to upload.

When you have applied your model, the inferencing will stop almost immediately. If you are viewing the stock video within Profile Builder (which plays out of the box with Vizi-AI), the video will continue, but the inference overlay will stop whilst the engine waits for the new model to be downloaded.

As you can see, the images are continuing but the inference results have stopped.

Occasionally, when you have a large model (such as the frozen inference graph we added first), it can take several minutes to download from the model manager and apply.

To view the status of what is happening we can utilize Portainer.

If you haven’t already set up your instance of Portainer, there are instructions here to do this (How to use Portainer to debug a Vizi-AI application).

If you have configured it, simply navigate to the “Device” tab and click on your Vizi-AI, which should appear in this list.

Enter the username and password you set up when configuring it previously and click Login.

Click on the local deployment which is on your Vizi-AI.

Select the containers area.

The app that we wish to view the logs for is the openvino-engine app. Click the blue logs icon, highlighted below.

You will now be presented with the logs for this app.

These logs can be long and contain lots of superfluous information. The next section details what you are seeing and what you should be looking for in the logs to understand what is going on within the OpenVINO app.

Once you have seen “Inference Engine Pipeline has been reinitialized”, reopen Profile Builder and navigate to the Vision tab. You will see pedestrians being detected instead of the object detection you previously saw when using VMLINK, before updating your model.
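If you would rather watch for that message from a script than in the Portainer UI, here is a minimal sketch using the Docker SDK for Python. The container name “openvino-engine” is an assumption based on the app name above; confirm the real name in Portainer's containers list.

```python
import docker  # pip install docker

client = docker.from_env()
# The container name is an assumption; check Portainer's containers list.
container = client.containers.get("openvino-engine")

# Stream the logs and stop once the engine reports that it has reloaded.
for chunk in container.logs(stream=True, follow=True):
    line = chunk.decode("utf-8", errors="replace").rstrip()
    print(line)
    if "Inference Engine Pipeline has been reinitialized" in line:
        print("Model reload complete - switch back to the Vision tab.")
        break
```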
