Intel OpenVINO: Model Optimizer

Reading Time: 4 minutes

Contributed by Surya Prabhakaran


In my previous article, I discussed the basics and workflow of the OpenVINO toolkit. In this article, we will be exploring the Model Optimizer: what it is, how to configure it, and how to convert ONNX, Caffe and TensorFlow models into an Intermediate Representation.

What is the Model Optimizer?

Model Optimizer is one of the two main components of the OpenVINO toolkit. Its main purpose is to convert a trained model into an Intermediate Representation (IR). The Intermediate Representation (IR) of a model consists of a .xml file, which describes the network topology, and a .bin file, which holds the weights and biases. You need both files to run inference.

Intermediate Representations (IRs) are the OpenVINO toolkit’s standard structure and naming scheme for neural network architectures. A “Conv2D” layer in TensorFlow, a “Convolution” layer in Caffe or a “Conv” layer in ONNX are all converted into a “Convolution” layer in an IR. You can find more in-depth information on each of the Intermediate Representation layers in the OpenVINO documentation.
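For context, here is a minimal sketch (not one of this article’s steps) of how the two IR files are consumed at inference time with the Inference Engine Python API. The file names and the zero-filled input are placeholders, and the exact calls vary slightly between OpenVINO releases:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # the two IR files
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.inputs))                               # name of the first input layer
dummy = np.zeros(net.inputs[input_blob].shape, dtype=np.float32)  # placeholder input data
output = exec_net.infer({input_blob: dummy})                      # dict: output blob name -> ndarray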

Frameworks supported by OpenVINO:- TensorFlow, Caffe, MXNet, ONNX and Kaldi.

Configuring the Model Optimizer

Before you can use the Model Optimizer, you need to configure it. Configuring the Model Optimizer is pretty straightforward and can be done in the Command Prompt/Terminal.

To configure the Model Optimizer, follow these steps (type the commands in the Command Prompt/Terminal):-

  1. Go to the OpenVINO installation directory:-

For Linux:- cd /opt/intel/openvino

For Windows:- cd "C:\Program Files (x86)\IntelSWTools\openvino"

I have used the default installation directory in the commands above; if your installation directory is different, navigate to the appropriate directory.

2. Go to the install_prerequisites directory:-

cd deployment_tools/model_optimizer/install_prerequisites

3. Run the install_prerequisites script:-

For Windows:- install_prerequisites.bat

For Linux:- install_prerequisites.sh

If you want to configure the Model Optimizer for a particular framework only, run the corresponding script instead:-

TensorFlow:- install_prerequisites_tf.bat (Windows) / install_prerequisites_tf.sh (Linux)

Caffe:- install_prerequisites_caffe.bat (Windows) / install_prerequisites_caffe.sh (Linux)

MXNet:- install_prerequisites_mxnet.bat (Windows) / install_prerequisites_mxnet.sh (Linux)

ONNX:- install_prerequisites_onnx.bat (Windows) / install_prerequisites_onnx.sh (Linux)

Kaldi:- install_prerequisites_kaldi.bat (Windows) / install_prerequisites_kaldi.sh (Linux)

Converting to Intermediate Representation

After successfully configuring the Model Optimizer, we are ready to use it. In this article, I will show you how to convert ONNX, Caffe and TensorFlow models to an Intermediate Representation. The conversion of ONNX and Caffe models is pretty straightforward, but the conversion of a TensorFlow model is a little trickier.

Converting ONNX model

OpenVINO does not directly support PyTorch; rather, a PyTorch model is first converted to the ONNX format, and the ONNX model is then converted to an Intermediate Representation by the Model Optimizer.
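As an aside, if you are starting from your own PyTorch model rather than a ready-made ONNX file, the export step looks roughly like this (a minimal sketch; the model, input shape and file name are placeholders, not files used later in this article):

import torch
import torchvision

# Placeholder model; substitute your own trained PyTorch model here.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)            # one 224x224 RGB image
torch.onnx.export(model, dummy_input, "model.onnx")  # the resulting .onnx file can be fed to mo.py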

I will be downloading and converting “Inception_V1”. You can find other models from this link.

After downloading “Inception_V1”, unzip the file and extract it to your desired location. Inside the “inception_v1” directory, you will find a “model.onnx” file. We need to feed that file to the Model Optimizer.

Follow the steps:-

  1. Open Command Prompt/Terminal and change your current working directory to the location where you have your “model.onnx” file
  2. Run the following command:-
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx

The above command is for Linux, and I have used the default installation directory; if your installation directory is different, use the appropriate path to “mo.py”:-

python <installation_directory>/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx

After successfully running the command, you will receive the location of the “.xml” and “.bin” files.

Converting Caffe Model

The process of converting a Caffe model is pretty easy and analogous to that of an ONNX model. The difference is that, for Caffe models, the Model Optimizer takes some additional arguments specific to Caffe. You can find more details in the documentation.
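For illustration only, a Caffe conversion that also fixes the input shape and applies mean subtraction might look like the command below; the file names, input shape and mean values are placeholders that depend on your model:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model model.caffemodel --input_proto model.prototxt --input_shape [1,3,227,227] --mean_values [104,117,123]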

I will be downloading and converting the SqueezeNet V1.1 model.

Follow the steps:-

  1. Open Command Prompt/Terminal and change your current working directory to the location where you have your “squeezenet_v1.1.caffemodel” file
  2. Run the following command:-
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt

If the file names of the “.caffemodel” and “.prototxt” files are the same, then the “--input_proto” argument is not required.

After successfully running the command, you will receive the location of the “.xml” and “.bin” files.

Converting a TensorFlow Model

The TensorFlow models in the open model zoo come in frozen and unfrozen formats, and some TensorFlow models may already be frozen for you. You can either freeze your model yourself or use the separate instructions in the documentation to convert a non-frozen model.

You can use the following code to freeze an unfrozen model.

# Freeze a TensorFlow 1.x graph. This assumes an active session `sess` in which the
# model has already been loaded, and that you know the name of its output node.
import tensorflow as tf
from tensorflow.python.framework import graph_io

frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
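If you are not sure what the output node is called, a quick way to check (again a small sketch assuming the active TF 1.x session `sess` from above) is to print the node names in the graph; the output node is usually near the end:

# Print the names of the last few nodes in the graph definition.
for node in sess.graph_def.node[-5:]:
    print(node.name)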

I will be downloading and converting the Faster R-CNN Inception V2 COCO model. You can find other models from this link.

After downloading “Faster R-CNN Inception V2 COCO”, unzip the file and extract it to your desired location. Inside the “faster_rcnn_inception_v2_coco_2018_01_28” directory, you will find a “frozen_inference_graph.pb” file along with a “pipeline.config” file; we need to feed both to the Model Optimizer.

Follow the steps:-

  1. Open Command Prompt/Terminal and change your current working directory to the location where you have your “frozen_inference_graph.pb” file
  2. Run the following command:-
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json

The above command is for Linux, and I have used the default installation directory; if your installation directory is different, use the appropriate path to “mo.py”.

After successfully running the command, you will receive the location of the “.xml” and “.bin” files.

Thank you so much for reading this article. I hope you now have a proper understanding of the Model Optimizer.

Surya Prabhakaran

