Solution Architecture
The ADLINK Edge AWS Lookout for Vision Solution comprises several applications that communicate via the ADLINK Data River. Together these applications ingest video from cameras, perform inferencing on the captured images, and output the inference results for other applications to consume.
Optionally, additional ADLINK Edge applications can be added to the solution to enable bi-directional communication with OT devices such as PLCs.
The core applications are:
- Frame Streamer: A range of applications that capture frames from various sources (e.g. cameras and files). The frames are streamed to the Data River for downstream applications to consume.
- Lookout for Vision: Interfaces with the AWS Lookout for Vision Edge or Cloud inference services to perform inferencing on video frames consumed from the Data River. Inference results are published back to the Data River for downstream applications to consume.
- Training Streamer: Captures video frames and associated inference results from the Data River and transfers them to either AWS S3 or an FTP server. Various triggers can be specified to control which frames are captured. For example, the Training Streamer application can be configured to capture frames classified as anomalous and/or having a confidence value below a certain threshold.
- Greengrass Connect: Forwards received Data River samples to other locally deployed AWS IoT Greengrass components or the AWS IoT broker in the cloud using the Greengrass Core IPC service. This application can forward a wide range of sample types including inference results and data from OT devices.
- Node-RED: Provides a browser-based graphical flow editor which allows users to quickly wire together nodes to form an application. Nodes are provided to read and write samples from the Data River. An example dashboard application is included with the solution.
Additionally, the VMLink desktop application forms part of the solution and can be used to:
- View video streams from the Data River with inference results overlaid.
- Configure camera settings (where supported).
- Capture frames to a local folder with the ability to optionally synchronise the captured images to AWS S3.
Setting up the Edge Device
Installing ADLINK Edge Profile Builder
ADLINK Edge Profile Builder can be installed on either the edge device or the PC used for development. To install ADLINK Edge Profile Builder follow the relevant guide below for your OS.
- Ubuntu 18.04 & NVIDIA Jetpack: Installing ADLINK Edge Profile Builder on Ubuntu 18.04 and JetPack
- Windows: Installing or Updating Edge Profile on a Windows PC
Installing VMLink
VMLink is a useful tool for viewing video streams along with associated inference results. It can also be used to collect images for training AI models. VMLink is currently available for Ubuntu 18.04 on x86 and NVIDIA JetPack on ARM systems.
For more information on installing VMLink please see How to Install VMLink on Ubuntu 18.04 and NVIDIA Jetpack.
Using the Automated Solution Installer
ADLINK provides an install script (available from here) to assist with the deployment of the ADLINK Edge AWS Lookout for Vision Solution onto the edge device. The installer can be used to automate many of the steps below including the deployment of the following components onto the edge device:
- AWS IoT Greengrass.
- The streamer profile.
- The trained AWS Lookout for Vision model.
- The inference profile.
Deploying AWS IoT Greengrass onto the Edge Device
The steps below document the process for setting up AWS IoT Greengrass on an edge device with support for deploying the ADLINK Edge AWS Lookout for Vision Edge solution.
- Ensure the device meets the AWS Lookout for Vision Edge Device Requirements:
- For NVIDIA Jetson based devices the device must be running JetPack 4.4, 4.5, 4.5.1 or 4.6.
- For x86 based devices the device must run Ubuntu 18.04 LTS.
- Install the required CUDA and TensorRT libraries.
- For Jetson based devices the following command will install the required libraries:
sudo apt install tensorrt cuda-nvrtc-10-2
- For x86 platforms the desired version of CUDA, cuDNN and TensorRT must be downloaded and installed from the NVIDIA Developer Website.
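As an optional sanity check (a minimal sketch; it assumes the libraries were installed from .deb packages, and exact package names vary with the CUDA and TensorRT versions), the installed libraries can be listed with:
dpkg -l | grep -E 'cuda|tensorrt'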
- Install Docker and docker-compose by running the following command:
sudo apt install docker.io docker-compose
- Provision AWS IoT Greengrass on the device:
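One common approach is the AWS quick-install flow sketched below; the region and thing name are placeholders to substitute with your own values, and the command assumes AWS credentials with provisioning permissions are available in the environment.
# Download and unpack the Greengrass nucleus installer
curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip -o greengrass-nucleus-latest.zip
unzip greengrass-nucleus-latest.zip -d GreengrassInstaller
# Install Greengrass as a system service and provision the device as <thing-name>
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
  -jar ./GreengrassInstaller/lib/Greengrass.jar \
  --aws-region <region> \
  --thing-name <thing-name> \
  --component-default-user ggc_user:ggc_group \
  --provision true \
  --setup-system-service true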
- Add the ggc_user to the docker group by running the following command:
sudo usermod -aG docker ggc_user
- Authorize the Greengrass Core device to access artifacts in AWS S3 (an example policy is sketched below).
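As a hedged sketch, the following attaches a read-only artifact policy to the Greengrass token exchange role. The role name GreengrassV2TokenExchangeRole is the AWS default and, along with the <bucket> placeholder, should be replaced with the values used in your deployment.
# Policy allowing the token exchange role to read component artifacts
cat > component-artifact-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket>/*"
    }
  ]
}
EOF
# Attach the policy to the Greengrass token exchange role
aws iam put-role-policy \
  --role-name GreengrassV2TokenExchangeRole \
  --policy-name ComponentArtifactAccess \
  --policy-document file://component-artifact-policy.json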
- Configure the device for running the Lookout for Vision Edge Agent:
- Install Python 3.8 by running the following command:
sudo apt install python3.8 python3-pip python3.8-venv
- Grant the ggc_user access to the video device by running the following command:
sudo usermod -aG video ggc_user
Download the Required ADLINK Edge Profiles with Profile Builder
Before the ADLINK Edge applications that make up the solution can be deployed to the edge device via AWS IoT Greengrass, they must first be configured using ADLINK Edge Profile Builder.
To assist with this, ADLINK provides the following template profiles for download within Profile Builder:
- aws-lfv-edge-basic-streamer
- Includes a video streaming application supporting video files, webcams and RTSP feeds.
- aws-lfv-edge-genicam-streamer
- Includes a video streaming application supporting GenICam compatible cameras including Basler ace cameras.
- aws-lfv-edge-inference
- Includes the ADLINK Edge Lookout for Vision Inference Engine application as well as other applications for reporting the results.
Downloading the Profiles
From within a project in ADLINK Edge Profile Builder click on Add profile and select Download a profile. Click the Next button.
Ensure the ADLINK-Templates repository is selected and select the profiles you wish to download. Click Next.
The selected profiles will now be downloaded.
When the download is complete the profiles will be available to edit in Profile Builder.
Deploying a Streamer Profile
The following sections describe how to configure and deploy a streamer profile to facilitate the collection of images for use in training a model. Before diving into how the profile is configured, the concept of a Stream Id is introduced first.
A Stream Id identifies a unique stream of video frames from a camera and as such is a key parameter used in the configuration of several applications to direct the flow of data. It is recommended that Stream Ids are set in a way that identifies the context of the stream in the system. For example, a stream from a camera on a machine in an ADLINK factory in Taipei may be assigned a hierarchical Stream Id of taiwan.taipei.lineA.machine1.camera1.
Note: VMLink does not support multi-segment Stream Ids at this time. Multi-segment Ids can instead be represented by replacing the . characters with - characters, for example taiwan-taipei-lineA-machine1-camera1.
Configuring the Streamer Profile
A Streamer Profile serves two purposes:
- Allows images to be streamed from a camera for the purposes of collecting training images with VMLink.
- Allows images to be streamed to the inference engine application (configured as part of the Inference profile) for live inferencing at the edge.
An example of a Streamer Profile is shown below (aws-lfv-edge-genicam-streamer):
Clicking on an application within a profile opens the configuration editor for that application:
The configuration of an application and the container it will be deployed in can be configured through the Configuration, Files and Docker tabs in the editor. Documentation for the application is also available by clicking on the Documentation tab.
In general, the template streamer profiles are configured with default parameters suitable for deployments consisting of a single camera with the Stream Id pre-configured as camera1. In certain cases, device configuration and Docker settings may need to be updated.
Exporting the Profile
For deployment via AWS IoT Greengrass, profiles are exported as a docker-compose bundle. To export a profile from Profile Builder, open the profile to be exported and click the Deploy button. Select Download a docker-compose and click Next.
The profile will then be prepared and a download triggered.
Creating a Greengrass Component from the Profile
An exported profile must be uploaded to AWS S3, to a bucket in the same region as the AWS IoT Greengrass deployment. Once uploaded, an AWS IoT Greengrass Generic component can be created for the profile which references the file in AWS S3. An example recipe for a streamer template profile is given below:
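The following is a hedged sketch rather than a verbatim recipe: the component name, version, lifecycle commands, and bundle layout are assumptions based on a typical docker-compose Greengrass component, so adjust them to match your exported bundle.
# Write an illustrative recipe for the basic streamer profile
cat > aws-lfv-edge-basic-streamer-recipe.yaml <<'EOF'
RecipeFormatVersion: '2020-01-25'
ComponentName: com.adlinktech.aws-lfv-edge-basic-streamer
ComponentVersion: '1.0.0'
ComponentDescription: ADLINK Edge basic streamer profile
ComponentPublisher: ADLINK
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Run: docker-compose -f {artifacts:decompressedPath}/aws-lfv-edge-basic-streamer/docker-compose.yml up
      Shutdown: docker-compose -f {artifacts:decompressedPath}/aws-lfv-edge-basic-streamer/docker-compose.yml down
    Artifacts:
      - URI: s3://<bucket>/<path>/aws-lfv-edge-basic-streamer.zip
        Unarchive: ZIP
EOF
# Register the component with AWS IoT Greengrass
aws greengrassv2 create-component-version --inline-recipe fileb://aws-lfv-edge-basic-streamer-recipe.yaml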
Note that the path to the profile in AWS S3 will need to be modified before creating a component from an example recipe: the <bucket> and <path> segments of the artifact URI must be changed to match the location the profile was uploaded to.
For further information on how to create an AWS IoT Greengrass Component from a recipe see here.

Please ensure you have completed the step to allow Greengrass to access the component artifacts in AWS S3. If this step is omitted the Greengrass core device will be unable to deploy the component onto the device.
Deploying the Streamer Profile to a Device
The various camera streamer profiles have no additional dependencies. To deploy the profile, add the Greengrass component created in the previous section to a Greengrass deployment targeted at your device.
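For example, a deployment can also be created from the command line. This is a sketch: the target ARN is a placeholder, and the component name and version assume the recipe sketch above.
# Deploy the streamer component to the core device
aws greengrassv2 create-deployment \
  --target-arn arn:aws:iot:<region>:<account-id>:thing/<core-device-thing-name> \
  --components '{"com.adlinktech.aws-lfv-edge-basic-streamer":{"componentVersion":"1.0.0"}}'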
For further information on how to deploy a component see here.
Collecting Images for Training a Model
Viewing the Camera Stream in VMLink
To view the camera stream first open the VMLink application. The link can usually be found on the desktop and alongside other applications in the Ubuntu 18.04 launcher menu. The icon is shown below:
On the App Select screen click the LAUNCH button below the Streamer application.
As video streams are discovered by the Streamer application they will be added to the Stream Select window. Click the LAUNCH STREAM button below a discovered stream to view that stream.
Video frames will be displayed as they arrive.
Deploying the Inference Profile
The Inference Profile can be deployed once training data has been captured and an initial model trained.
Configuring the Profile
The Inference Profile includes the following applications:
- ADLINK Edge Lookout for Vision Inference Engine
- ADLINK Edge Greengrass Connect
- ADLINK Edge Training Streamer
- Node-RED with ADLINK Data River nodes
Other ADLINK Edge and Docker applications can be added to the profile as required. For example, the ADLINK Edge Modbus Connect application can be added to support connections to Modbus devices. Please note these additional applications may require separate licenses to be purchased.
Lookout for Vision Inference Engine Application
The Lookout for Vision (aws-lookout-vision) application performs inferencing on received video frames. The key fields to configure in the application are:
- SourceStreamId – should be configured to match the StreamId of the streamer application. Wildcard characters can also be used to subscribe to multiple video streams.
- EngineId – an identifier for the inference engine instance, consisting of a single context segment only. The EngineId is pre-configured as lfv but should be changed if more than one inference engine application will act on the source video stream. It is used to uniquely identify inference result data flows for a stream on the ADLINK Data River.
- Model – should be configured with the details of the Lookout for Vision Edge model to use.
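As a purely schematic sketch of how these fields relate (the layout below is an assumption for illustration only; the actual values are edited in the application's Configuration tab in Profile Builder, and the Model details depend on your trained model):
SourceStreamId: camera1   # matches the streamer's StreamId; wildcards allowed
EngineId: lfv             # single segment; results flow on camera1.lfv
Model: <details of the trained Lookout for Vision Edge model>   # hypothetical placeholder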
Training Streamer Application
The Training Streamer application captures frames automatically based on configured triggering conditions. The key fields to configure in the application are:
- S3 – location in S3 where captured images should be stored.
- StreamId – should be configured to match the StreamId of the streamer application. Wildcard characters can also be used to subscribe to multiple video streams.
- Triggers – configures the conditions under which frames will be captured.

In the template profile, the Training Streamer application is configured to acquire AWS credentials through the Greengrass Token Exchange Service. In order to access the configured S3 bucket, an appropriate IAM policy must be attached to the token exchange role. More information can be found here.
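As an illustrative sketch (the role name GreengrassV2TokenExchangeRole is the AWS default, and the bucket placeholder should match the S3 location configured above), a write policy might be attached as follows:
# Policy allowing the token exchange role to upload captured frames
cat > training-streamer-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<training-bucket>/*"
    }
  ]
}
EOF
# Attach the policy to the Greengrass token exchange role
aws iam put-role-policy \
  --role-name GreengrassV2TokenExchangeRole \
  --policy-name TrainingStreamerS3Access \
  --policy-document file://training-streamer-s3-policy.json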
Greengrass Connect Application
The Greengrass Connect application uses the AWS IoT Greengrass Core IPC service to relay messages from the Data River to other locally deployed Greengrass components or the AWS IoT broker in the cloud.
The key fields to configure in the application are:
- Topic – the name of the Greengrass topic to relay messages to.
- FlowID – the data stream to subscribe to on the Data River. This should match <StreamId>.<EngineId> (for example, camera1.lfv with the template defaults). Wildcard characters can also be used to subscribe to the inference results for multiple video streams.

In the template profile, the Greengrass Connect application is configured to send messages to the AWS IoT Cloud broker. This requires appropriate access control policies to be set as part of the component recipe or component configuration in a deployment. Such a policy is included in the sample component recipe provided within the Creating a Greengrass Component from the Profile section. More information can be found here.
Node-RED application
The Node-RED application does not need to be configured. By default, it comes complete with an example flow file that receives video frames and inference results from the default camera1 stream and displays them on a local web dashboard accessible at http://<Device IP>:1880/ui. If users subsequently change the flow, it can be exported and added back to the application’s profile as /adlinkedge/config/flows.json using the Files tab.
Exporting the Profile
Once the inference profile has been configured it can be exported from Profile Builder as a docker-compose bundle.
Creating a Greengrass Component from the Profile
As with the streamer profile, the exported profile must be uploaded to AWS S3, to a bucket in the same region as the AWS IoT Greengrass deployment. Once uploaded, a Greengrass component can be created for the profile using an example recipe file such as the one below:
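Again this is a hedged sketch rather than a verbatim recipe: the component name, version, lifecycle, and bundle layout are assumptions, while the Token Exchange Service dependency and the access control policy allowing Greengrass Connect to publish to AWS IoT Core reflect the requirements described in the sections above.
# Write an illustrative recipe for the inference profile
cat > aws-lfv-edge-inference-recipe.yaml <<'EOF'
RecipeFormatVersion: '2020-01-25'
ComponentName: com.adlinktech.aws-lfv-edge-inference
ComponentVersion: '1.0.0'
ComponentDescription: ADLINK Edge AWS Lookout for Vision inference profile
ComponentPublisher: ADLINK
ComponentDependencies:
  aws.greengrass.TokenExchangeService:
    VersionRequirement: '>=2.0.0'
    DependencyType: HARD
ComponentConfiguration:
  DefaultConfiguration:
    accessControl:
      aws.greengrass.ipc.mqttproxy:
        com.adlinktech.aws-lfv-edge-inference:mqttproxy:1:
          policyDescription: Allows Greengrass Connect to publish to AWS IoT Core.
          operations:
            - aws.greengrass#PublishToIoTCore
          resources:
            - '*'
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Run: docker-compose -f {artifacts:decompressedPath}/aws-lfv-edge-inference/docker-compose.yml up
      Shutdown: docker-compose -f {artifacts:decompressedPath}/aws-lfv-edge-inference/docker-compose.yml down
    Artifacts:
      - URI: s3://<bucket>/<path>/aws-lfv-edge-inference.zip
        Unarchive: ZIP
EOF
# Register the component with AWS IoT Greengrass
aws greengrassv2 create-component-version --inline-recipe fileb://aws-lfv-edge-inference-recipe.yaml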
Deploying the Inference Profile
As with the camera streamer profiles, the inference profile can be deployed to a Greengrass Core by creating an AWS IoT Greengrass component and adding that component to a Greengrass deployment. However, additional Greengrass components must be installed along with the inference profile component:
- User-defined components:
- Camera Streamer Profile
- Lookout for Vision Model
- AWS defined components:
- Token Exchange Service (aws.greengrass.TokenExchangeService)
- When listed as a dependency (as it is in the example inference profile recipe) the component will be automatically added to the deployment.
- Lookout for Vision Edge Agent (aws.iot.lookoutvision.EdgeAgent)
- Listed as a dependency of the model and so will be automatically added to the deployment.
Viewing the Inference Results
Viewing the Inference Results in VMLink
In addition to displaying video frames from frame streamer applications, VMLink automatically overlays any inference results associated with the stream on top of the images.
Video streams can be viewed within VMLink using the Streamer application.
Viewing the Inference Results in Node-RED
An example Node-RED Dashboard is included with the inference profile and can be viewed by navigating to the following URL in a web browser:
http://<Device IP>:1880/ui
The dashboard shows the video stream from camera1 as well as the inference results.