This company specializes in providing drone-based aerial surveillance and security solutions tailored to the needs of businesses in agriculture, construction, and real estate. Leveraging high-resolution imaging and real-time data collection, they enable businesses to gain actionable insights, enhancing decision-making and operational efficiency in their respective industries.
The client has developed a proprietary AI system that detects and tracks drones and other objects of interest captured in drone footage across a range of challenging environments. Their focus was on optimizing this AI, specifically its ability to distinguish drones in flight and to identify and track drone movements precisely at varying altitudes, in diverse lighting conditions, and across all stages of flight.
For this purpose, they needed accurate video annotation services. Our team was provided with drone footage captured using both standard and infrared cameras, including recordings taken in daylight, at night, in low visibility, and during high-speed drone maneuvers.
The footage provided to our video labeling team was 55 hours long (approximately 100,000 frames). However, annotating this footage involved certain challenges:
A dedicated team of twenty data annotation experts was assigned to this project. As per the client's instructions, we used CVAT, a web-based, open-source image and video annotation tool originally developed by Intel.
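For context, CVAT's standard video export format ("CVAT for video 1.1") stores each annotated object as a track of per-frame bounding boxes. The sketch below is purely illustrative, not part of the delivered pipeline: it shows one way such an export could be read downstream, and the file name and label handling are hypothetical.

```python
# Illustrative sketch: reading bounding-box tracks from a CVAT video XML export
# ("CVAT for video 1.1" format). The file name is a hypothetical placeholder.
import xml.etree.ElementTree as ET

def load_cvat_tracks(xml_path):
    """Return {(track_id, label): [(frame, (xtl, ytl, xbr, ybr)), ...]}."""
    root = ET.parse(xml_path).getroot()
    tracks = {}
    for track in root.iter("track"):
        track_id = int(track.get("id"))
        label = track.get("label")
        boxes = []
        for box in track.iter("box"):
            # Skip boxes flagged as outside the frame (object not visible)
            if box.get("outside") == "1":
                continue
            frame = int(box.get("frame"))
            coords = tuple(float(box.get(k)) for k in ("xtl", "ytl", "xbr", "ybr"))
            boxes.append((frame, coords))
        tracks[(track_id, label)] = boxes
    return tracks

if __name__ == "__main__":
    tracks = load_cvat_tracks("annotations.xml")  # hypothetical export file
    for (track_id, label), boxes in tracks.items():
        print(f"track {track_id} ({label}): {len(boxes)} annotated frames")
```

Because each track carries a stable ID across frames, exports of this kind map directly onto the client's goal of training a model to follow individual drone movements over time.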
Here’s the human-in-the-loop video annotation approach we adopted for this project: