CAPTURING THE CORRECT USE OF PPE
The COVID-19 crisis has generated many headlines about PPE (Personal Protective Equipment). It has also led to the cancellation of a number of events, including AWS Summits, where we had intended to demonstrate our work around PPE. Our customer Balfour Beatty's Strategic Supplier Conference was also postponed. At that event, we were due to unveil a computer vision demo showing the detection of PPE in construction, e.g. safety jackets and helmets.

We built the solution in AWS, working with their leading computer vision partner ADLINK. ADLINK Technology specialises in edge computing hardware and software solutions, with a mission to be a catalyst for industry, powered by AI. In this blog we describe our experience creating the PPE Detection service, with a focus on deploying models to the edge. Edge computing is the practice of running software directly at the location where it is needed. The global edge computing market is forecast to reach 1.12 trillion dollars by 2023, according to this Forbes report. We believe the technology is mature and the time is right to roll out these kinds of initiatives across the construction industry and beyond. This work can promote correct PPE usage and thus reduce the risks of illness and injury.
BUILDING THE COMPUTER VISION MODEL
Our main focus in the project was to design, build and train a computer vision model. For this, we chose TensorFlow. Balfour Beatty supplied us with a selection of the PPE they use, and we began by collecting our own static images to use for training. This initial set of images was labelled with bounding boxes around three classes of objects: people, high-visibility jackets and helmets.
Next, we augmented the images by flipping, cropping, blurring and applying colour adjustments. The aim was to increase the volume and vary the quality of the training data to simulate demo conditions, eventually yielding a set of a few thousand labelled images. These were then used, via transfer learning, to fine-tune a model selected for its existing ability to recognise people.
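One detail worth noting about augmenting labelled images is that the bounding-box labels must be transformed alongside the pixels, or the boxes no longer cover the objects. The sketch below shows two of the augmentations described, in NumPy; the function names and the 0–255 pixel convention are our illustrative choices, not code from the project:

```python
import numpy as np

def hflip(img, boxes):
    """Horizontally flip an image (H x W x C) and its (xmin, ymin, xmax, ymax) boxes."""
    h, w = img.shape[:2]
    # Mirror each box's x-coordinates around the image width;
    # xmin and xmax swap roles after the flip.
    new_boxes = [(w - x2, y1, w - x1, y2) for x1, y1, x2, y2 in boxes]
    return img[:, ::-1].copy(), new_boxes

def adjust_brightness(img, factor):
    """Scale pixel intensities and clip back to the valid 0-255 range."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)
```

Flips and brightness shifts leave the labels easy to recompute; crops are slightly more work, since boxes must be shifted and clipped against the new image edges, and boxes that fall entirely outside the crop must be dropped.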
RAPID ARCHITECTURE SETUP
We based our AWS Cloud architecture on the Inawisdom Rapid Analytics & Machine-learning Platform (RAMP), which provides a secure and structured analytics working environment very quickly. For this project we mainly used a single SageMaker instance, accessing the training data from an S3 bucket and developing the model there.
The left-hand side of the architecture is an optional extension of RAMP deployed on premises. It shows the ADLINK inference engine and the NVIDIA-based Neon-i smart camera; in the real world, this equipment would be installed within an industrial environment. ADLINK provides the connections between sensors and devices, enabling vision data to flow. Specifically, the solution relies on the ADLINK Edge™ software platform and ADLINK Data River™ technology, which the next section covers in more detail. Our other component on the AWS side is the model converter, which automatically takes the output from SageMaker and converts it into a runtime inference engine that can be deployed.

We took a rapid prototyping and iterative approach to model training and optimisation. In trials conducted so far, our best-performing model has reached over 90% accuracy in identifying helmets and jackets. The video below shows pure object detection; note how the boxes expand, contract and occasionally disappear as the helmet is rotated upside down (which is expected). We can also add logic to ensure the helmet is actually worn on the head, not carried.
PPE Identification Demo
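The "worn, not carried" check mentioned above can be built from the detector's own output: once the person and helmet boxes are available, a simple positional rule goes a long way. The heuristic below is our illustrative sketch, not the project's implementation; the 25% head-region fraction is an assumed tuning parameter:

```python
def helmet_worn(person, helmet, head_fraction=0.25):
    """Heuristic check that a helmet is worn rather than carried.

    Boxes are (xmin, ymin, xmax, ymax) in pixel coordinates, with y
    increasing downwards. The helmet counts as worn if its centre falls
    horizontally within the person box and vertically within the top
    `head_fraction` of the person box (the "head region").
    """
    px1, py1, px2, py2 = person
    hx = (helmet[0] + helmet[2]) / 2   # helmet centre x
    hy = (helmet[1] + helmet[3]) / 2   # helmet centre y
    inside_x = px1 <= hx <= px2
    in_head_region = py1 <= hy <= py1 + head_fraction * (py2 - py1)
    return inside_x and in_head_region
```

A helmet detected at waist height of the same person box, for example, fails the vertical test and would be flagged as carried. In practice this kind of rule would need tuning for camera angle and for people who are bending or partially occluded.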
COMPUTER VISION AT THE EDGE
Alongside building the model, we worked with ADLINK to deploy it to their Edge device for inference on site. The main photo for this blog shows our demo setup in the lab, with the laptop controlling and coordinating the other components. The Neon-i camera can clearly be seen on the left, and next to it is the small demo inference engine, the ADLINK Vizi-AI. For interest, the layout of the Vizi-AI is shown in the following close-up picture.
The Vizi-AI Module Used for Inference
The Vizi-AI was running Intel OpenVINO, Intel's open-source computer vision toolkit. The model converter mentioned above reads in the TensorFlow model and creates the artifacts for deployment onto the Vizi-AI module. The Vizi-AI takes the feed from the camera and runs inference on it, adding the bounding boxes and the object detection confidence levels as percentage values.
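The exact shape of a detector's raw output depends on the model architecture, but turning it into the on-screen boxes and percentages described above is a small post-processing step. The sketch below assumes an SSD-style output row of [image_id, label, conf, xmin, ymin, xmax, ymax] with coordinates normalised to 0–1; the class map and threshold are illustrative, not the project's values:

```python
# Illustrative class map for the three object classes in this demo.
LABELS = {1: "person", 2: "jacket", 3: "helmet"}

def postprocess(detections, width, height, threshold=0.5):
    """Convert raw SSD-style detection rows into drawable results.

    Each input row is [image_id, label, conf, xmin, ymin, xmax, ymax]
    with coordinates normalised to 0..1. Returns a list of
    (class_name, confidence_percent, pixel_box) tuples.
    """
    results = []
    for _, label, conf, x1, y1, x2, y2 in detections:
        if conf < threshold:
            continue  # drop low-confidence detections
        box = (int(x1 * width), int(y1 * height),
               int(x2 * width), int(y2 * height))
        results.append((LABELS.get(int(label), "unknown"),
                        round(conf * 100, 1), box))
    return results
```

The resulting tuples are what a display loop would draw onto each camera frame: a rectangle per box, captioned with the class name and confidence percentage.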
HEALTH & SAFETY OF CONSTRUCTION
Health & Safety at work is a top priority for Balfour Beatty and in construction generally. The estimated cost of workplace injury and ill health in construction to the UK economy is over a billion pounds (UK Government HSE statistics).
What we have shown in this prototype demo is that AI at the Edge can detect PPE. This creates the opportunity to alert and advise construction health and safety leaders so that corrective action can be taken on site. Limited statistics exist on how many workplace injuries and health issues are related to incorrect PPE, but the potential to improve the situation is clear. Based upon the success of this first phase, the next steps are further model optimisation and trials on real construction sites. Ultimately, we anticipate a global roll-out so that these AI tools become standard across industries. If you would like more information and a more detailed demo, please contact us.
Get started quickly with OpenVINO, Intel® Movidius™ Myriad X VPU and ADLINK Edge software.
To get started quickly with OpenVINO, try the ADLINK Vizi-AI, an Edge AI machine vision DevKit. The Vizi-AI starter devkit includes an Intel Atom®-based SMARC computer module with an Intel® Movidius™ Myriad X VPU and 40-pin connector, the Intel® Distribution of OpenVINO and ADLINK Edge™ software.