
Industrial Strength IoT: Using the Vizi-AI on balenaCloud

Reading Time: 8 minutes

Blog by David Tischler, balenaCloud Developer Advocate

balenaCloud, Vizi-AI, and ADLINK Edge – how easy!

At balena, we build a product called ‘balenaCloud’ that helps IoT project owners manage and maintain fleets of connected IoT devices.  Our customers, the device owners, run IoT projects of all shapes and sizes, with fleets ranging from just a handful of devices to many thousands of IoT endpoints.  Some customers have fleets that are centralized in just one location, and others have fleets that are distributed around the world.  The balena platform supports devices ranging from the $5 Raspberry Pi Zero to custom-built AI systems costing thousands of dollars.  But no matter the price of the IoT device, the purpose of the platform is to allow fleet owners to scale from 1 to N devices seamlessly.  As a Developer Advocate at balena, I have witnessed some amazing successes, while also gaining some valuable lessons when things don’t go according to plan.

Prototyping IoT on a Raspberry Pi

Thanks to its low price and global availability, the Raspberry Pi has been repurposed from its intended role as an educational tool into many different types of applications.  With excellent connectivity and its GPIO pins, it makes a great device for prototyping IoT hardware and software.  But when it is time to move from prototype to production, more specialized, purpose-built hardware is normally required.  Because there is no onboard storage on a Raspberry Pi, it requires an inserted SD card, which contains the boot process and operating system.  SD cards can be unreliable, and have high failure rates in the field, especially under heavy read/write loads and higher temperatures.  This is less than ideal for IoT devices.  And speaking of temperature, that is a second issue that needs to be addressed in IoT use-cases.  The Raspberry Pi is a consumer device, and is not engineered to survive extremely hot or cold conditions, such as the outdoor locations that are typical of IoT deployments.  A Raspberry Pi was never designed to survive on a street light during the winter in Buffalo, or the summer in Phoenix.

Enter the iPi – Vizi-AI

The iPi comes in two variants: a Rockchip PX30 quad-core Arm SoC version, offering low power consumption, and the Vizi-AI version, based on an Intel Atom E3940 that also includes an onboard Intel Movidius VPU for AI acceleration.  Both versions contain onboard eMMC storage and offer extended temperature range support as well.  We’re focusing on the Atom E3940 version here today, because AI at the edge is a rapidly evolving use-case for IoT devices, and balena can help scale AIoT projects easily.

balenaCloud

As mentioned, balena builds a management platform for IoT devices.  It is an end-to-end solution, consisting of a base operating system derived from Yocto, a Docker container runtime based on Moby, and a web-based dashboard for interacting with devices.  Workloads for the devices are packaged in containers, and those containers are pushed over-the-air to the devices, no matter where they are located, as long as they have connectivity of some sort.  This makes it easy to deploy, re-deploy, and continually update applications on devices.  Containers can be built locally from their Dockerfiles, or built on the balenaCloud servers for either x86 or Arm devices.

Let’s Get Started

Because we are using the Atom version with the built-in Movidius VPU, naturally I wanted to try out some AI demos!  And the quickest way to do that is to use the excellent Movidius App Zoo repo available on GitHub, located here:  https://github.com/movidius/ncappzoo. There you can find image recognition, classification, segmentation, and even a lane-detection LEGO Mindstorms EV3 project.  These projects make use of the OpenVINO toolkit from Intel, which is their distribution for accelerating deep learning and inference in computer vision and AI projects.  The Intel OpenVINO toolkit works on Intel processors, accelerators, VPUs, FPGAs, and integrated GPUs to boost the performance of these workloads.

However, most of these demos require a GUI, and balenaOS is by default a command-line-only OS, so the quickest way to get a desktop installed is to use a ready-made container that has everything configured, as described here:  https://www.balena.io/blog/running-a-desktop-manager-with-balena/. That blog post also describes the basic onboarding steps for balenaCloud, so I won’t cover them here, but the quick version is: create a free account, create an ‘Application’, add a device to your Application, download an OS image, flash it to a USB stick, and boot your iPi from USB.  The device will install balenaOS to the eMMC (warning – anything you have on the iPi eMMC will be overwritten!), and provision itself into your balenaCloud account.
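For those who prefer the terminal, the same onboarding steps can also be sketched with the balena CLI.  This is only a sketch: the Application name ‘vizi-demo’ and the device type ‘genericx86-64’ are my placeholders, not from the original post – run `balena devices supported` to find the type that matches your iPi.

```shell
#!/bin/sh
# Guarded sketch of balenaCloud onboarding via the balena CLI.
# 'vizi-demo' and 'genericx86-64' are placeholder values.
set -e

if command -v balena >/dev/null 2>&1; then
  balena login                                   # authenticate to balenaCloud
  balena app create vizi-demo                    # create the Application
  balena os download genericx86-64 -o balena.img # fetch an OS image
  balena os configure balena.img --app vizi-demo # bake in the app config
  # Flash balena.img to a USB stick (e.g. with balenaEtcher), then boot the iPi.
else
  echo "balena CLI not found - see https://www.balena.io/docs/reference/balena-cli/"
fi
```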

Following the rest of the blog instructions, I quickly and easily had my desktop running, so now it’s time to install the App Zoo.  That process is documented quite well on the GitHub repo, but the basic instructions are:

 – Clone the repo

 – Run `./install_deps.sh`

 – cd into the desired directory (project), run `make all`

Build your IoT Project

The first thing I checked out was the ‘Birds’ app, which does species detection on a set of images in the directory.  It comes with a few sample photos, and as you can see here, I was making quick work of locating ducks.

Next, I hooked up a USB webcam to the iPi, grabbed some bananas that were a bit past their prime, and had a look at the ‘Simple Classifier Py Camera’ app.  This one does live object detection from the camera, which worked quite well, and was extremely responsive.

So, this was quite a success:  I had fully functional AI models running, all within just a few minutes from start to finish.  Because this was a quick test, I deployed the container and *then* installed and ran my AI workloads, but a better idea is to script the install process into the Dockerfile, so that you can deploy the container to a whole fleet of iPis at once.  You can also re-deploy new, improved containers as your AI models improve, you add new features, or you optimize your code.  The scalability of balenaCloud really shines in that scenario.
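Scripting the install into the image could look something like this.  This is a minimal sketch only: the balenalib base image tag, the package list, and the ‘apps/birds’ target are my assumptions, not taken from the original post.

```dockerfile
# Sketch: bake the App Zoo into the image so every device in the fleet
# gets it automatically. Base image and packages are illustrative.
FROM balenalib/amd64-debian:buster

# Build tools needed to clone and compile the demos
RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential && \
    rm -rf /var/lib/apt/lists/*

RUN git clone --depth 1 https://github.com/movidius/ncappzoo.git /ncappzoo
WORKDIR /ncappzoo

# Install dependencies and build/run one demo at container start-up
CMD ["bash", "-c", "./install_deps.sh && cd apps/birds && make all && make run"]
```

With this in place, a single `balena push` builds the image in the cloud and rolls it out to every device in the Application.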

Vizi-AI

The App Zoo models and the Intel OpenVINO toolkit are great building blocks, but ADLINK also provides a software stack for application management, model management, and camera streaming, called ADLINK Edge.  The ADLINK Edge software is nicely integrated with Azure, or can be installed locally on an Ubuntu machine running Docker, with connected Vizi-AI devices performing inferencing and broadcasting their results over RTSP.  The full installation and setup process is very well documented here:  https://goto50.ai/?p=497

As balena is already designed to run Docker containers, it was a quick lift to take the output from ADLINK Edge’s native Docker Compose export functionality (info located here:  https://goto50.ai/?p=1236) and get it running on my balena-powered Vizi-AI.  To build containers and push them to fleets of devices, balena offers a CLI utility that processes Dockerfiles and deploys the resulting containers onto devices.  The CLI also supports Docker Compose files, so, after a few tweaks to compensate for the expected docker-compose v2 schema and to re-define the mounts, my resulting Docker Compose file looks like this:

```yaml
version: "2"

volumes:
  portainer_data:

services:
  openvino-engine:
    build:
      context: openvino-engine
      dockerfile: Dockerfile.template
    container_name: openvino-engine
    restart: always
    devices:
      - "/dev:/dev"
    network_mode: host
    privileged: true
  aws-model-streamer:
    build:
      context: aws-model-streamer
      dockerfile: Dockerfile.template
    container_name: aws-model-streamer
    restart: always
    network_mode: host
  portainer:
    build:
      context: portainer
      dockerfile: Dockerfile.template
    container_name: portainer
    restart: always
    volumes:
      - portainer_data:/data
    network_mode: host
  mongodb:
    image: docker.io/library/mongo:latest
    container_name: mongodb
    restart: always
    network_mode: host
  stream-viewer:
    build:
      context: stream-viewer
      dockerfile: Dockerfile.template
    container_name: stream-viewer
    restart: always
    network_mode: host
    privileged: true
  frame-streamer:
    build:
      context: frame-streamer
      dockerfile: Dockerfile.template
    container_name: frame-streamer
    restart: always
    network_mode: host
  model-manager:
    build:
      context: model-manager
      dockerfile: Dockerfile.template
    container_name: model-manager
    restart: always
    network_mode: host
  training-streamer:
    build:
      context: training-streamer
      dockerfile: Dockerfile.template
    container_name: training-streamer
    restart: always
    network_mode: host
```

The next step in this process will create these new containers and replace my existing workload (the desktop GUI and App Zoo), so I put away my bananas at this point.  From the command line, I simply ran `balena push ADLINK-x86` (replace ‘ADLINK-x86’ with the name of your Application), and followed along as the build progressed in the cloud.  For more detailed instructions on setting up the balena CLI tooling, check out the documentation located here:  https://www.balena.io/docs/reference/balena-cli/

Once this is finished, you will see Charlie the unicorn indicating success!

At this point, we can head over to balenaCloud to see the containers being pushed to the iPi: 

Once they have all finished downloading and begin running, your Vizi-AI install is connected, and the data river consists of both the Vizi-AI and the host machine (my Ubuntu VM in the screenshots above).  The Vizi-AI defaults to rendering the sample video; the inferencing is done by the onboard Movidius accelerator, and the results are streamed over RTSP.  You can view the output of that stream directly in the Vizi-AI application on the host machine by navigating to the ‘Vision’ tab and clicking on the Stream, and it will load.  In this screenshot, I actually opened it using VLC Media Player just to see if that would render it as well, which it sure did.

So, like the first example, we have only deployed one Vizi-AI here, but the ability to scale to any number of devices, run the same workload on all of them, bulk update them, push containers to them no matter where they are located, and tunnel back to them via SSH are some of the most powerful features of balenaCloud.  When combined with the Vizi-AI hardware, you can overcome the challenges and hardware limitations of the Raspberry Pi, and quickly move your IoT deployment from prototype to production.

Summary

These quick exercises are of course not the most sophisticated, but they show the power of the ADLINK Edge platform and Vizi-AI combined with balenaCloud, and you can begin to understand the difference in hardware capability between a Raspberry Pi (or similar board) and an enterprise-grade solution fully designed for IoT deployments.  Whether it’s environmental sensors, industrial or machine monitoring, computer vision applications, robotics with ROS 2 and Cyclone DDS, or other deep learning at the edge, the Vizi-AI and iPi can play a key role in scaling your IoT project.  You can purchase the Vizi-AI here, and be sure to take a look at balenaCloud here.

Also check out a recent YouTube balena Happy Hour live cast with Paul Wealls, ADLINK Vizi-AI Product Manager and GOTO50.ai Community Manager.

Stay in touch

Sign up for our email list to be notified of the latest industry news.