When operating AI models in the real world, you often want to collect new training data whenever the model's confidence for specific items it is inferencing against falls below a set threshold.
Within the vizi-ai-starter-kit there are a number of applications that start the process of configuring your Vizi-AI to gather training images automatically. First, we want to configure the ‘training-streamer’. Click on the ‘training-streamer’ app to configure it.
The default app configuration is displayed.
Complete the ‘logLevel’ and ‘contextId’ as required, then click ‘+ Add New’ next to the ‘Ftp’ section, where the required fields are displayed for completion as follows:
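As a rough illustration only, an FTP entry typically needs the server address, credentials, and a destination folder. The field names below (‘host’, ‘port’, ‘username’, ‘password’, ‘folder’) are assumptions for the sketch, not the exact names your starter-kit version displays; use the fields shown on screen:

```json
{
  "Ftp": {
    "host": "ftp.example.com",
    "port": 21,
    "username": "vizi",
    "password": "********",
    "folder": "/training-images"
  }
}
```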
Once you have completed that section, you must complete the ‘StreamId’ for the stream the images are sourced from. Note that images captured based on the criteria given are also stored in a folder with this name.
Below is an example of some criteria a user may wish to apply, based on the default configuration within the vizi-ai-starter-kit.
This example specifies that if a ‘person’ is identified in the ‘demoStream’ and the average probability of it being a ‘person’ is below the configured threshold, the training streamer captures the image and sends it to the designated folder so the model can later be retrained from it.
These entries show that when the ‘demoStream’ is running and the ‘detectedObjectLabel’ of ‘person’ has an ‘AverageProbability’ of less than ‘0.70’ (equal to 70%), a file is saved to your FTP site for retraining.
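Using only the field names mentioned above, the criteria could be sketched roughly like this. The exact JSON structure is an assumption for illustration; the authoritative layout is the default configuration shipped with your kit:

```json
{
  "StreamId": "demoStream",
  "Imageacquisitions": [
    {
      "detectedObjectLabel": "person",
      "AverageProbability": 0.70
    }
  ]
}
```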
*Please note that the entry in ‘detectedObjectLabel’ MUST be formatted in exactly the same way as the intended object label, or it will not be recognized. For clarification, you can find these labels in ‘nodeRED’.
To add additional ‘Imageacquisitions’, click on ‘+ Add New’ and another section appears.
Below I have entered criteria for an additional threshold, where a ‘fire hydrant’ is identified with an average probability of less than ‘0.75’.
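With both criteria in place, the ‘Imageacquisitions’ list would contain two entries, sketched here under the same assumed JSON layout as before:

```json
"Imageacquisitions": [
  { "detectedObjectLabel": "person",       "AverageProbability": 0.70 },
  { "detectedObjectLabel": "fire hydrant", "AverageProbability": 0.75 }
]
```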
Once you have entered your criteria, save the amended configuration by clicking the ‘Save changes’ button at the top of the screen, then click ‘Close’ to return to the profile.
Now that the ‘Training-Streamer’ has been configured, you need to configure the ‘Model-Confidence’ app.
Click on the name of the ‘Model-confidence’ app to open it, then configure the application from the default provided as follows:
The ‘streamId’ must match the one used in the ‘Training-Streamer’ application; by default, the ‘streamId’ within the vizi-ai-starter-kit is ‘demoStream’.
Based on the criteria entered in ‘Training-Streamer’ earlier, the values shown below are required in ‘Streammetrics’:
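For orientation, the ‘Streammetrics’ entries matching the earlier criteria could be sketched as follows. The JSON layout is an assumption; only the field names (‘streamId’, ‘detectedObjectLabel’, ‘timeWindow’) and values come from this walkthrough:

```json
"Streammetrics": [
  { "streamId": "demoStream", "detectedObjectLabel": "person",       "timeWindow": 10 },
  { "streamId": "demoStream", "detectedObjectLabel": "fire hydrant", "timeWindow": 10 }
]
```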
You can see I have copied the ‘streamId’ that was used in the ‘training-streamer’ and entered ‘detectedObjectLabel’ criteria for both ‘person’ and ‘fire hydrant’. I have also entered a preferred ‘timeWindow’ of 10 seconds; this calculates a confidence value from the prediction probabilities across a sliding window of this length.
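To make the sliding-window idea concrete, here is a minimal Python sketch of averaging prediction probabilities over a time window. This is illustrative only and does not reflect the Model-Confidence app's internal implementation:

```python
from collections import deque


class SlidingConfidence:
    """Average prediction probability over a sliding time window (sketch)."""

    def __init__(self, window_seconds=10):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, probability) pairs, oldest first

    def add(self, timestamp, probability):
        """Record a new prediction and evict samples older than the window."""
        self.samples.append((timestamp, probability))
        while self.samples and timestamp - self.samples[0][0] > self.window:
            self.samples.popleft()

    def average(self):
        """Confidence value: mean probability of samples still in the window."""
        if not self.samples:
            return None
        return sum(p for _, p in self.samples) / len(self.samples)


conf = SlidingConfidence(window_seconds=10)
for t, p in [(0, 0.9), (3, 0.6), (12, 0.6)]:
    conf.add(t, p)
# The sample at t=0 has aged out by t=12, leaving 0.6 and 0.6
print(conf.average())  # 0.6
```

With an ‘AverageProbability’ threshold of 0.70, a windowed average of 0.6 like this one is what would cause the training streamer to capture the image.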
Once you are satisfied with your criteria, save the amended configuration by clicking the ‘Save Changes’ button at the top of the screen, then click ‘Close’ to return to the profile.
Now you need to deploy the profile to your Vizi-AI. To do this, click the ‘Deploy’ button on the profile screen.
You are then presented with three options to deploy your profile. Select Deploy directly to a device and click Next.
You are then given the option to select the device that you want to deploy your profile directly to. Select the device and click Deploy.
When you see the green Success notification at the bottom of the screen, your profile has been successfully sent to the device. It may take a minute or two for the profile to be deployed. You can now click Close to exit the deployment option.
Once your profile is running and your stream is being monitored, files start to appear in the folder location shown below whenever they match the criteria given in ‘training-streamer’.