Getting Started with NVIDIA Jetson Nano and ADLINK Edge Profile Builder

Reading Time: 6 minutes

The NVIDIA® Jetson Nano Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing.

ADLINK Edge Profile Builder makes it easy to manage and deploy apps onto your Jetson Nano. It helps keep your apps up to date and provides vision tools to view inferencing results in real time.

To set up your Jetson Nano, refer to NVIDIA’s step-by-step guide: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit

Update Docker Configuration

Log in to Linux on the Jetson Nano, open a terminal, and enter the following to edit the ‘daemon.json’ file:

sudo apt install nano
sudo nano /etc/docker/daemon.json

Before the “runtimes” section, add “default-runtime”: “nvidia”. The file should look like this:

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "1m",
        "max-file": "10"
    },
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "runtimeArgs": [],
            "path": "nvidia-container-runtime"
        }
    }
}
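
A misplaced comma in ‘daemon.json’ will stop the Docker daemon from starting, so it is worth validating the file before restarting. One quick check, assuming Python 3 is available (it is on standard JetPack images):

sudo python3 -m json.tool /etc/docker/daemon.json

If the file is valid, the formatted JSON is printed back; otherwise the error message points at the offending line.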

To ensure the changes take effect, restart Docker using the following command:

sudo service docker restart
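
To confirm the new default runtime was picked up, you can query the daemon (the exact output format varies between Docker versions):

sudo docker info | grep -i runtime

The output should include a ‘Default Runtime: nvidia’ line.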

To install the ADLINK Edge platform, follow the steps in I want to install ADLINK Edge Platform on Ubuntu 18.04 and Jetpack.

To install ADLINK Edge Profile Builder, follow the instructions for the operating system on your host machine:

Note: The discovery mechanism that enables Profile Builder to discover ADLINK Edge devices in the network is based on the Simple Service Discovery Protocol. You must ensure you have multicast enabled in your network.
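
If Profile Builder fails to discover the device, a quick sanity check on a Linux host (replace eth0 with your interface name) is to confirm that the interface has the MULTICAST flag and is subscribed to multicast groups; SSDP uses UDP port 1900 on the multicast address 239.255.255.250:

# The flags line should include MULTICAST
ip link show eth0 | grep -i multicast

# List the multicast groups the interface has joined
ip maddr show eth0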

Register the Jetson Nano Device

  1. To open Edge Profile Builder, go to http://localhost:8082.
  2. Open the Devices tab. The Jetson Nano shows as an Unregistered device.
  3. Click the ellipsis (…) and select Register device.

The Register a device dialog appears. When you register a new device, you can either register the device with Azure IoT Hub or deploy directly to the device across the Data River within your local network. For more information about Azure IoT Hub refer to Setting up an Azure IoT Hub to use with Profile Builder.

  4. For the purpose of this guide, select Local registration and click Next.
  5. Add a device alias; this is the name you want to call your Jetson Nano within the network. Then click Next to continue.
  6. The next step lets you deploy a local profile or deploy from a template. Because we need to configure the template before deploying, click Skip this step. The device is successfully registered.

Create a DeepStream Project and Download the Profile Template

  1. Within Edge Profile Builder select the Projects tab and click Create project.
  2. Enter a title and description for the project and click Create.
  3. Select the project and click Add profile.
  4. Select Download a profile and click Next.
  5. From the drop-down, select ‘ADLINK-Templates’, locate and select ‘nvidia-deepstream’ and click Next.
  6. Ensure the correct project is selected and click Download.
  7. When the download is complete, click Finish.

Add an AI Inferencing Model

Download the NVIDIA People Detection AI model zip file from https://downloads.goto50.ai/deep-stream/models.zip on your host PC and unzip the files. You can copy the ‘models’ folder onto the root of your Jetson Nano device so that all three files are in the ‘/models/peoplenet’ directory, or use the DeepStream app to add the files through Profile Builder.
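A minimal sketch of the command-line route, assuming SSH access to the device; the hostname jetson-nano and the user adlink are placeholders for your own values:

# On the host PC: fetch and unpack the model files
wget https://downloads.goto50.ai/deep-stream/models.zip
unzip models.zip

# Copy the folder to the device, then move it into place with root privileges
scp -r models adlink@jetson-nano:/tmp/
ssh -t adlink@jetson-nano 'sudo mv /tmp/models / && ls /models/peoplenet'

Alternatively, upload the files through Profile Builder as follows: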

  1. Within the relevant project, select the ‘nvidia-deepstream’ profile and then select the ‘deepstream’ application.
  2. Select the Files tab, click Create new folder, enter the title ‘models’ and click Create.
  3. Select the models folder, click Create new folder, enter the title ‘peoplenet’ and click Create. Select the peoplenet folder and click Upload file.
  4. Click Choose file, browse to the unzipped models folder, open the peoplenet folder, select the ‘config_infer_primary_peoplenet.txt’ file, click Open and then click Upload. Repeat this step for the ‘labels.txt’ and ‘resnet34_peoplenet_pruned.etlt’ files so that all three files are uploaded.
  5. For this model to work, you must update the configuration. Click the Configuration tab and click Edit as XML, then copy and paste the following over the existing configuration (or download the XML file to copy the content):
<?xml version="1.0" encoding="UTF-8"?>
<VideoAnalyticsPipeline xmlns="http://www.adlinktech.com/vortex-edge/0.9/DeepStream">
  <Application>
    <Id>aea-deep-stream-xavier</Id>
    <ContextId>nvidiaDemoStream-xavier.peoplenet</ContextId>
    <Description>An application that runs an Nvidia DeepStream pipeline using ADLINK Data River sources and sinks. This example configures a pipeline with a resnet model.</Description>
    <LogLevel>Debug</LogLevel>
  </Application>
  <Inputs>
    <V4L2Source>
      <Enabled>true</Enabled>
      <Name>v4l2src</Name>
      <Device>/dev/video0</Device>
    </V4L2Source>
  </Inputs>
  <Muxer>
    <Name>muxer</Name>
    <BatchSize>1</BatchSize>
    <Width>640</Width>
    <Height>480</Height>
    <BatchedPushTimeout>400</BatchedPushTimeout>
    <NvBufMemoryType>default</NvBufMemoryType>
  </Muxer>
  <PrimaryInference>
    <Name>primary-inference</Name>
    <Enabled>true</Enabled>
    <ConfigFilePath>/models/peoplenet/config_infer_primary_peoplenet.txt</ConfigFilePath>
    <UniqueID>1</UniqueID>
  </PrimaryInference>
  <Tracker>
    <Name>tracker</Name>
    <Enabled>false</Enabled>
    <Width>640</Width>
    <Height>384</Height>
    <BuiltInTrackerLib>KLT</BuiltInTrackerLib>
    <EnableBatchProcess>true</EnableBatchProcess>
  </Tracker>
  <Overlay>
    <Name>overlay</Name>
    <Enabled>true</Enabled>
    <DisplayText>true</DisplayText>
    <DisplayClock>false</DisplayClock>
  </Overlay>
  <Outputs>
    <DataRiverInferenceOutput>
      <Name>inference-sink</Name>
      <Enabled>true</Enabled>
      <FlowId>nvidiaDemoStream-xavier.peoplenet</FlowId>
    </DataRiverInferenceOutput>
    <DataRiverVideoFrameOutput>
      <Name>overlay-sink</Name>
      <Enabled>true</Enabled>
      <FlowId>nvidiaDemoStream-xavier.peoplenet.overlay</FlowId>
    </DataRiverVideoFrameOutput>
  </Outputs>
</VideoAnalyticsPipeline>

In the XML above, the V4L2Source input reads from the webcam at /dev/video0, the PrimaryInference ConfigFilePath points to the peoplenet configuration file uploaded in the previous step, and the FlowId values name the Data River streams on which the inference results and the overlaid video frames are published.

  6. Click Save changes and then click the Docker tab. Update the Docker settings as follows:
{
    "NetworkingConfig": {
        "EndpointsConfig": {
            "host": {}
        }
    },
    "HostConfig": {
        "Binds": [
            "/models:/models"
        ],
        "NetworkMode": "host",
        "LogConfig": {
            "Type": "json-file",
            "Config": {
                "max-file": "10",
                "max-size": "1m"
            }
        },
        "Devices": [
            {
                "PathOnHost": "/dev/video0",
                "PathInContainer": "/dev/video0",
                "CgroupPermissions": "rwm"
            }
        ]
    }
}

The “Binds” section mounts the “/models” directory of the underlying Jetson Nano operating system into the DeepStream app container at the same location. The “Devices” section mounts the webcam connected to the Jetson Nano into the container so that the DeepStream app can use it.
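For reference, here is the same configuration expressed as a plain docker run invocation. This is a sketch to illustrate what the JSON means, not the command Profile Builder actually executes, and <deepstream-image> is a placeholder:

# The nvidia runtime is applied automatically via the default-runtime set earlier
docker run \
    --network host \
    --log-driver json-file --log-opt max-size=1m --log-opt max-file=10 \
    -v /models:/models \
    --device /dev/video0:/dev/video0 \
    <deepstream-image>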

  7. Click Save Changes and then Close.

Deploy the Profile to the Jetson Nano

  1. Within the “nvidia-deepstream” profile click Deploy.
  2. Select Deploy directly to a device and click Next.
  3. Select the Jetson Nano device and click Deploy.
  4. A banner appears when the deployment is successful. Click Close.
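
If you want to confirm the deployment from the device side, you can list the running containers on the Jetson Nano (container names depend on the profile, so treat this as a general sanity check):

sudo docker ps

The deployed DeepStream app should appear in the list with a recent ‘Up’ status.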

View the Inference Results

To view the results, you must first install VMLINK. Refer to one of the following guides:

Start VMLINK. In the Streamer section, click LAUNCH; this opens the ADLINK Frame Streamer for nvidiaDemoStream-xavier.peoplenet.overlay. Click CONNECT.

This shows the live camera feed from the webcam with the inferencing results overlaid, highlighting people, faces and bags.
