Understanding the challenge of AI training
Our most pressing milestone is a comprehensive dataset of all ordnance used in the invasion of Ukraine. But what data…
Our main focus, and what we understand to be the quickest win and most needed deployable, is our Object Recognition system.
This module runs inference on an RGB video feed from our UAS's downward-facing camera, classifying each detected object with a probable match to known ordnance. An accurate location and an image of the local area are then added to our central database and pushed to the operator's mobile device.
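As a rough illustration only, the sketch below shows the shape of that pipeline. It assumes a YOLO-style detector loaded via the ultralytics package and OpenCV for the camera feed; the weight file, class names and record_detection() upload helper are hypothetical placeholders, not our production code.

```python
# Minimal sketch of the Object Recognition loop (illustrative only).
# Assumes: ultralytics YOLO weights trained on ordnance classes (hypothetical
# "ordnance.pt"), OpenCV for the downward-facing RGB feed, and a placeholder
# record_detection() that would write to the central database / operator app.
import cv2
from ultralytics import YOLO

model = YOLO("ordnance.pt")          # hypothetical trained weights
cap = cv2.VideoCapture(0)            # downward-facing camera feed

def record_detection(label, confidence, crop):
    """Placeholder: push the classification, location fix and local-area
    image to the central database and the operator's mobile device."""
    print(f"{label} ({confidence:.2f}) -> queued for upload")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]      # run inference on the frame
    for box in results.boxes:
        cls_id = int(box.cls[0])
        conf = float(box.conf[0])
        x1, y1, x2, y2 = map(int, box.xyxy[0])    # bounding box in pixels
        crop = frame[y1:y2, x1:x2]                # image of the local area
        record_detection(results.names[cls_id], conf, crop)

cap.release()
```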
To deliver on this development track, we need data.
Each type of ordnance (mine/classification) needs to be trained into the object recognition model. To train effectively we need, on average, 1,000 images per object. Each image needs to be diverse, with a different setting/background, but always from a top-down point of view, 2-10 m above the ground. Collecting these images is no mean feat; it requires access to a library of physical ordnance and the ability to place each item in diverse settings.
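To make the target concrete, here is a minimal sketch of how such a collection could be audited, assuming images are simply filed in one folder per ordnance class; the 1,000-image target and the folder layout are our working assumptions, not a fixed standard.

```python
# Count top-down training images per ordnance class and flag shortfalls.
# Assumes a layout like dataset/<class_name>/*.jpg (illustrative only).
from pathlib import Path

TARGET_PER_CLASS = 1000   # rough per-object target discussed above

def audit_dataset(root="dataset"):
    for class_dir in sorted(Path(root).iterdir()):
        if not class_dir.is_dir():
            continue
        count = sum(1 for _ in class_dir.glob("*.jpg"))
        status = "OK" if count >= TARGET_PER_CLASS else f"need {TARGET_PER_CLASS - count} more"
        print(f"{class_dir.name:30s} {count:5d} images  [{status}]")

if __name__ == "__main__":
    audit_dataset()
```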
The physical method for capturing the images is simple: with a DJI Mini 3 Pro or other consumer drone, fly over the object, point the camera downward and take an image, then move the object to a new setting and repeat, relocating as needed. This is time consuming, and the objects themselves must be decommissioned before they can be handled. The real challenge is the library.
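One practical check during collection is the capture height. Most DJI aircraft embed the height above the take-off point in each image's XMP metadata as drone-dji:RelativeAltitude; the sketch below relies on that assumption for the Mini 3 Pro (not a verified spec) to flag shots taken outside the 2-10 m window.

```python
# Flag images whose capture height falls outside the 2-10 m window.
# Assumes the drone writes drone-dji:RelativeAltitude into the JPEG's XMP
# metadata, as most DJI aircraft do (an assumption for the Mini 3 Pro).
import re
from pathlib import Path

ALT_RE = re.compile(rb'drone-dji:RelativeAltitude="?\+?(-?\d+(?:\.\d+)?)')

def relative_altitude_m(jpeg_path):
    """Return height above take-off in metres, or None if the tag is absent."""
    match = ALT_RE.search(Path(jpeg_path).read_bytes())
    return float(match.group(1).decode()) if match else None

for image in sorted(Path("dataset").rglob("*.jpg")):
    alt = relative_altitude_m(image)
    if alt is None or not 2.0 <= alt <= 10.0:
        print(f"check capture height: {image} ({alt} m)")
```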
Having spoken with many key stakeholders and organisations across the landmine removal community, our conclusion is that none have such a library or datasets of this capacity, though all desperately need them. This is our current operation: obtain one and open-source it for all.