
Why and how to use the Edge SDK

Reading Time: 13 minutes

This article describes why and how to use the ADLINK Edge SDK to build and extend your machine vision system, using Vizi-AI as the starting point of your journey. It explains the key advantages of the Edge SDK and takes you through its installation and writing your first App. Let’s start with the steps that a machine vision system typically comprises:

  1. Capture – The system automatically captures images by means of one or more cameras.
  2. Process – The system analyses the captured images, often using Artificial Intelligence, to do verification, identification, counting, classification, etc.
  3. Act – The system takes an action based on the outcome of the previous step.

Your up-and-running Vizi-AI includes the Frame Streamer App. This App streams images in real time to the inference engine by publishing them into the Data River, which delivers step (1). The Intel OpenVINO inferencing App applies an inferencing model to the real-time video and publishes the inferencing result back into the Data River. This completes step (2) of the solution.

Finally, Vizi-AI also enables you to observe the inference results overlaid on the video in real time using the VMLink App, which consumes both the images and the inference results from the Data River. This enables you to evaluate with your own eyes how your inference model is performing. In other words, it allows you to visually inspect and evaluate how steps (1) and (2) are performing.

The logical next step is to act automatically on the output of step (2). This article demonstrates how you can use the freely available Edge Software Development Kit (Edge SDK) to develop your own Apps that implement step (3) and complete or extend your machine vision solution.

Why use the Edge SDK?

The Data River is a real-time data sharing platform that provides highly deterministic, fault-tolerant, scalable, low-latency data distribution with automatic discovery and built-in security.

The Edge SDK enables you to programmatically access all the information that your other Apps are publishing in the Data River and, through that, interact with every other App that uses it to share data.

But why should you use the Edge SDK to build your Apps? Let me give you 10 reasons why.

Why #1: High-performing

With latencies as low as 30 µs and a throughput of tens of thousands of messages per second, the Data River is suitable for almost every use case.

Why #2: Open and proven technology

The Data River is based on the open and proven Data Distribution Service (DDS) standard. DDS has been proven in real-time mission-critical systems across a wide range of verticals, such as Mil/Aero (GVA, NATO GVA, FACE, SOSA), Healthcare (MD PnP), Robotics (ROS 2), Transportation/Automotive (AUTOSAR Adaptive), Industrial Automation (IIRA), and more. As a result, your systems will be future-proof and you are not locked in to a single vendor.

Why #3: Easy to integrate

The Data River is data-centric, as opposed to connection-oriented like most other data sharing technologies. Connection-oriented technologies focus on the connections between communicating endpoints: applications must explicitly connect to other endpoints and ask them to perform certain predefined functions. This requires knowledge of where others are deployed and what functionality they have implemented. As a consequence, modifying one application often affects other applications as well, making systems more costly and more difficult to evolve and scale.

The Data River focuses purely on the data and its meaning. Apps only need to agree on what the data looks like and indicate what data they produce and want to consume, i.e. their ‘declaration of intent’ (you will see a concrete declaration of intent in the ThingClass example later in this article). The Data River automatically discovers endpoints and, by means of their declarations of intent, knows which communication paths to create to match them with each other. It then takes care of delivering any data to all interested parties, without applications having to indicate which connections to create. This all happens dynamically at run-time, without the need to pre-configure any endpoints.

The data-centric approach hides all topology details from the application, enabling true plug-and-play. This reduces the complexity of application code but, more importantly, it enables Apps to be deployed anywhere in the network without having to update any code or configuration. The same goes for moving them around in the system in the future. This makes it very easy to develop, integrate, evolve and extend systems.

Why #4: Quality-of-Service

The Edge SDK provides so-called Quality-of-Service profiles that enable users to properly characterise the various kinds of data, i.e. telemetry, state, event, video, vibration, etc. The kind of data not only determines how it needs to be treated when shared, but also how it impacts the application business logic that produces and/or consumes that (kind of) data.

Take for instance ‘telemetry’ data. This is typically used for sensor data where the most recent value is more relevant than older values. For this kind of data, update rates are usually relatively high and ‘losing’ an intermediate value is not really an issue, as a more recent one is already, or soon will be, available anyway. This enables the Data River to choose not to resend an update that gets lost on the network, and/or to automatically down-sample when resources (CPU, memory, network) are scarce. You will see another profile in action later in this article: the DetectionBox TagGroup uses the ‘event’ profile.

All in all, this ensures highly deterministic behaviour for your solution, also when it is under a high load.

Why #5: Fault-tolerant

All communication within the Data River is peer-to-peer, eliminating the brokers that most other technologies need and, with them, a single point of failure.

Why #6: Pre-built capabilities

Due to the data-centric approach of the Data River, any App automatically integrates with all capabilities that already exist today. Many leading technologies across different categories have already been integrated as part of the ADLINK Edge platform.

By using the Edge SDK, you get access to this growing number of pre-built capabilities, which can save you a lot of time and effort when building your solution. This makes your solutions cheaper and allows you to get to market more quickly.

Why #7: Polyglot

The Edge SDK provides language bindings for C++, Python, Java, Node.js and .NET Core, enabling you to use your favourite language for the development of your App. Apps developed in different languages can co-exist and integrate with each other seamlessly.

Why #8: Secure

The Edge SDK provides built-in security for all Apps, without affecting any application code. You only need to consider security during deployment, enabling you to deploy with the right level of security based on where you deploy.

Why #9: Scalable

A lot of data sharing technologies rely on TCP/IP as the underlying protocol. Even though you may think this is fine, it does not always scale well. Once you start to evolve your solution, your data may need to flow from one publisher to multiple subscribers. As TCP/IP is a point-to-point protocol, this means the same data needs to be sent multiple times.

The Data River uses UDP/IP with its own reliability protocol implemented on top of that. This means it can leverage the power of multicast and so deliver data to multiple subscribers while only sending it once.

All in all, this ensures your solution will scale very well. This may not matter initially, but it matters a lot when you start to evolve and scale your solution.

Why #10: Portable

The Edge SDK with its Data River runs on various hardware platforms (ARM, x86) and operating systems (Windows, Linux). New platform support is added all the time. This means you’ll be able to use even more platforms in the future.

How to use the Edge SDK?

As you can see, there are plenty of reasons why you should use the Edge SDK. Let’s see how you can use it. First, you will learn how to set up the Edge SDK. After that you’ll see how to write a new App for your Vizi-AI.

Download and install the Edge SDK

Start by downloading the latest version of the Edge SDK here for the platform of your choice. Simply register, login and download. Please ensure you provide a valid email address as the license key will be emailed to you.

After download, check your email as the license key will be in your inbox. You can store that license key anywhere on your disk.

Note: the instructions going forward apply to Linux.

Now run the downloaded installer to start the Edge SDK installation process. On Linux you may need to set execute permissions before you can start the installation. Open a Terminal window and type:

shell> cd Downloads
shell> chmod 755 ./P822-EdgeSDK-1.4.0-x86_64.linux-gcc7-glibc2.27-installer.run

After that, you can start the installer.

shell> ./P822-EdgeSDK-1.4.0-x86_64.linux-gcc7-glibc2.27-installer.run

The following figure shows the initial setup screen on Linux.

ADLINK Edge SDK setup

While going through the setup, accept the license agreement and choose your installation directory. Select “Yes” when asked whether you would like to install a license file, and in the following step navigate to the license file that you stored earlier. Now click “Forward” to start the installation. Once the installation process has finished, the Edge SDK is ready to use.

Now open a Terminal and type:

shell> cd ADLINK/EdgeSDK/1.4.0
shell> source ./config_env_variables.com
<<< ADLINK EdgeSDK Release 1.4.0 For Date 2020-04-30 >>>
<<< Vortex OpenSplice RTS Release 6.10.3 For x86_64.linux Date 2019-10-09 >>>

This completes the setup of your shell for using the Edge SDK.

Inspecting the Data River

At this point I am assuming you’ve got your Vizi-AI up and running. You can now use the so-called thingbrowser to see what is available on the Data River.

shell> cd tools
shell> ./thingbrowser

Using the thingbrowser, you can discover who is connected to the Data River (so-called Things) throughout your complete network. It also allows you to discover what data (so-called TagGroups) is being published (output) and subscribed to (input) by these Things. The image below shows the frame-streamer (running on the Vizi-AI) publishing images (VideoFrameData) and information about the connected camera (DeviceInfo) onto the Data River. Finally, it also shows the data structure of the TagGroups as well as their QosProfiles. This is all possible thanks to the data-centric approach described earlier in the ‘Easy to integrate’ section.

Edge SDK thingbrowser output

You can write your own App and get access to the data published for any TagGroup on the Data River. The Data River automatically matches your interest to the available publishers and, once matched, ensures you start receiving data automatically. The other way around obviously works as well.

You can find all TagGroups used by Vizi-AI in a public GitHub repository. This eases the development and integration of your own App(s) with the existing system.

Show me an example

Let’s say you want to act on the inference results produced by the model that is running on your Vizi-AI. The image below shows the output of the thingbrowser for the Intel OpenVINO inference engine. It shows the DetectionBox TagGroup and its attributes as published by the inference engine.

Intel OpenVINO inference engine DetectionBox output

You can write your application code in your favourite language. The code snippets below are in Java, as that is the language I have selected for this example; you can achieve the same result using C++, Python, .NET or Node.js.

Register TagGroup

The associated JSON file for the DetectionBox TagGroup looks as follows (or find it here):

[
    {
        "name":"DetectionBox",
        "context":"com.vision.data",
        "qosProfile":"event",
        "version":"v1.0",
        "description":"Inference engine results for object detection model outputing bounding boxes",
        "tags":[
            {
                "name":"engine_id",
                "description":"Inference engine identifier",
                "kind":"STRING",
                "unit":"UUID"
            },
            {
                "name":"stream_id",
                "description":"ID of the stream fed into the inference engine",
                "kind":"STRING",
                "unit":"UUID"
            },
            {
                "name":"frame_id",
                "description":"ID of the input video frame fed to the inference engine",
                "kind":"UINT32",
                "unit":"NUM"
            },
            {
                "name":"data",
                "description":"List of Detection Box Data (the results)",
                "kind":"NVP_SEQ",
                "unit":"n/a",
                "typedefinition": "DetectionBoxData"
            }
        ]
    },
    {
        "typedefinition": "DetectionBoxData",
        "tags": [
            {
                "name":"obj_id",
                "description":"Detected object id",
                "kind":"INT32",
                "unit":"UUID"
            },
            {
                "name":"obj_label",
                "description":"Detected object proper name",
                "kind":"STRING",
                "unit":"UUID"
            },
            {
                "name":"class_id",
                "description":"Detected object's classification type as raw id",
                "kind":"INT32",
                "unit":"UUID"
            },
            {
                "name":"class_label",
                "description":"Detected object's classification as proper name",
                "kind":"STRING",
                "unit":"UUID"
            },
            {
                "name":"x1",
                "description":"Top Left X Coordinate (% from 0,0)",
                "kind":"FLOAT32",
                "unit":"Percentage"
            },
            {
                "name":"y1",
                "description":"Top Left Y Coordinate (% from 0,0)",
                "kind":"FLOAT32",
                "unit":"Percentage"
            },
            {
                "name":"x2",
                "description":"Bottom Right X Coordinate (% from 0,0)",
                "kind":"FLOAT32",
                "unit":"Percentage"
            },
            {
                "name":"y2",
                "description":"Bottom Right Y Coordinate (% from 0,0)",
                "kind":"FLOAT32",
                "unit":"Percentage"
            },
            {
                "name":"probability",
                "description":"Network confidence",
                "kind":"FLOAT32",
                "unit":"Percentage"
            },
            {
                "name":"meta",
                "description":"Buffer for extra inference metadata",
                "kind":"STRING",
                "unit":"N/A"
            }
        ]
    }
]

After that, register the TagGroup in Java:

// Load the TagGroup definition from its JSON file and register it with the Data River
final JSonTagGroupRegistry tgr = new JSonTagGroupRegistry();
tgr.registerTagGroupsFromUri("file://./definitions/TagGroup/com.adlinktech.vision/DetectionBoxTagGroup.json");
DataRiver.getInstance().addTagGroupRegistry(tgr);

Register ThingClass

Now that we have our TagGroup, the next step is to create a ThingClass to use in our App. The ThingClass provides our declaration of intent as discussed earlier. Our example refers to the DetectionBox TagGroup as input.

{
  "name": "InferenceMetrics",
  "context": "com.adlinktech.vision",
  "version": "v1.0",
  "description": "ADLINK Edge Vision Inference Metrics",
  "inputs": [{
    "name": "inferenceResult",
    "tagGroupId": "DetectionBox:com.vision.data:v1.0"
  }]
}

Once completed, register the ThingClass in Java. Note that the tagGroupId in the input section is composed of the TagGroup’s name, context and version:

// Load the ThingClass definition from its JSON file and register it with the Data River
final JSonThingClassRegistry tcr = new JSonThingClassRegistry();
tcr.registerThingClassesFromUri("file://./definitions/ThingClass/com.adlinktech.vision/InferenceMetricsThingClass.json");
DataRiver.getInstance().addThingClassRegistry(tcr);

Create your Thing

Now that we have our TagGroup and a ThingClass, we need to create an actual Thing that acts as an instantiation of the ThingClass.

{
    "id": "_AUTO_",
    "classId": "InferenceMetrics:com.adlinktech.vision:v1.0",
    "contextId": "inferenceMetrics",
    "description": "Edge Vision Inference Metrics Monitor"
}

As a next step, create your Thing in Java:

// Read the Thing properties from JSON and create the Thing on the Data River
final JSonThingProperties tp = new JSonThingProperties();
tp.readPropertiesFromUri("file://./config/InferenceMetricsProperties.json");
this.inferenceMetricsThing = DataRiver.getInstance().createThing(tp);

Reading data

Now that you’ve created your Thing, you’re ready to start reading and processing. The code snippet below shows how you can read and process the DetectionBoxes.

// Select the 'inferenceResult' input of our Thing and read the available samples
final Selector selector = inferenceMetricsThing.select("inferenceResult");
final IotNvpDataSampleSeq msgs = selector.readIotNvp();

for (final IotNvpDataSample msg : msgs) {
   String objLabel = "None";
   float probability = 0;
   int objId = 0;

   // Walk the name-value pairs of the sample; the 'data' tag holds the
   // sequence of DetectionBoxData entries (kind NVP_SEQ, see the TagGroup above)
   for (final IotNvp nvp : msg.getData()) {
      if (nvp.getName().equals("data")) {
         for (int i = 0; i < nvp.getValue().getNvpSeq().size(); i++) {
            IotNvpSeq detectionBoxData = nvp.getValue().getNvpSeq().get(i).getValue().getNvpSeq();

            // Extract the tags of interest from a single detection box
            for (final IotNvp boxData : detectionBoxData) {
               if (boxData.getName().equals("obj_id")) {
                  objId = boxData.getValue().getInt32();
               } else if (boxData.getName().equals("probability")) {
                  probability = boxData.getValue().getFloat32();
               } else if (boxData.getName().equals("obj_label")) {
                  objLabel = boxData.getValue().getString();
               }
            }
            /* Process the data here */
         }
      }
   }
}
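In a long-running App you would typically perform this read in a loop. Below is a minimal sketch of such a loop, assuming readIotNvp() returns with whatever samples are currently available; running and processDetectionBoxes are hypothetical names standing in for your own control flag and the per-sample processing shown above.

// Hypothetical polling loop around the read shown above; 'running' and
// 'processDetectionBoxes' are illustrative names, not part of the Edge SDK API.
while (running) {
    final IotNvpDataSampleSeq samples = selector.readIotNvp();
    processDetectionBoxes(samples);  // the per-sample loop from the snippet above
    try {
        Thread.sleep(100);           // simple pacing between polls
    } catch (final InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}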

Example output

The full example code attached to this page keeps track of the objects that have been detected and the model’s confidence, and prints a report on the screen every 5 seconds. If you run the example, you will see output that looks like this:

Report inference confidence
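To give an idea of what the bookkeeping behind such a report can look like, here is a minimal sketch in plain Java. The InferenceMetricsReport class and its method names are hypothetical and only approximate what the attached example does.

import java.util.HashMap;
import java.util.Map;

// Hypothetical bookkeeping class: counts detections per label, sums the
// reported confidences and prints a summary at most every 5 seconds.
class InferenceMetricsReport {
    private final Map<String, Integer> counts = new HashMap<>();
    private final Map<String, Float> confidenceSums = new HashMap<>();
    private long lastReport = System.currentTimeMillis();

    // Call once per detection box with the extracted obj_label and probability
    void record(final String objLabel, final float probability) {
        counts.merge(objLabel, 1, Integer::sum);
        confidenceSums.merge(objLabel, probability, Float::sum);
    }

    // Print a report if at least 5 seconds have passed since the previous one
    void maybeReport() {
        final long now = System.currentTimeMillis();
        if (now - lastReport >= 5000) {
            counts.forEach((label, count) -> System.out.printf(
                "%-20s detections: %5d  avg confidence: %.2f%n",
                label, count, confidenceSums.get(label) / count));
            lastReport = now;
        }
    }
}

Calling record(objLabel, probability) from the detection-box loop and maybeReport() after each read is enough to produce a report along these lines.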

Thingbrowser output

Below you’ll see what the example looks like in the thingbrowser. You can see there is only one input, which consumes the DetectionBox TagGroup published by the Intel OpenVINO inference engine. As the App is not publishing any data, there are no outputs.

Edge SDK inference metrics example in thingbrowser

Conclusion

If you’ve made it here, you understand why it makes sense to start using the Edge SDK. You are also able to start writing your own Apps and act upon the outcome of the inference engine.
