More Ways to Teach the Machine to See: Spotting Missed Objects
Every day, the imagery on Mapillary helps improve maps and geospatial datasets across the world. For example, the ground-level perspective as a complement to satellite imagery is a crucial element in mapping workflows. But many of these map editing processes are still manual and time-consuming. This is why Mapillary accelerates these workflows with automatically extracted map data by leveraging our world-class machine learning algorithms.
To keep a machine learning system improving, it needs to be fed fresh data and feedback on its mistakes. And to ensure the quality of the automatically extracted data, efficient human review processes are needed.
That’s why a little while ago we announced Verification Projects: any organization on Mapillary can set up a project for humans to verify the machine-generated object detections in imagery from a geographic area. The tasks in the project are available to our contributor network through the Mapillary Marketplace.
This first version of verification projects allows checking whether an object that the algorithm detected is, in fact, a correct detection. In computer vision terms: the task is to determine whether the detection is a true positive or a false positive. To date, our contributor network has verified more than three million object detections!
But that’s only one kind of feedback. We also need to learn about objects that the algorithm missed. In computer vision terms, we need to determine false negatives and true negatives. And this is exactly what this new release of verification projects is about.
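Taken together, the two task types cover all four quadrants of the confusion matrix. A minimal sketch of how a review outcome maps onto those quadrants (the function and its names are illustrative, not part of Mapillary's API):

```python
def review_outcome(algorithm_detected: bool, object_present: bool) -> str:
    """Classify a verification result into a confusion-matrix quadrant.

    algorithm_detected: did the algorithm report an object of this class?
    object_present:     did the human reviewer find one in the image?
    """
    if algorithm_detected and object_present:
        return "true positive"    # correct detection (original task type)
    if algorithm_detected and not object_present:
        return "false positive"   # wrong detection (original task type)
    if not algorithm_detected and object_present:
        return "false negative"   # missed object (new task type)
    return "true negative"        # correctly detected nothing (new task type)
```

The original task type settles the first two cases; the new "spotting missed objects" task settles the last two.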
How to do the tasks
As before, if you’re interested in helping out with verification, head over to the Mapillary Marketplace (available also in our mobile apps) to browse the available projects. When you open a verification project page, you will notice that for each object class there are now two tasks.
The newly introduced “spotting missed objects” task works as follows: you are presented with an image and need to determine if there are any objects of the given class that the algorithm missed. Here’s an example:
In this example, you can see that we’ve missed to tag a cyclist traffic light. That means you should reject the image by clicking the red thumbs-down button. Note that at the top, you can see a count of how many objects of this kind we have detected in this crop. Sometimes it can also happen that we’ve detected some but still missed others, in which case you should still reject the image.
Note that some images are rather large, and each area needs to be checked carefully for missed objects. To make this easy, we split the image up and serve it as smaller tiles. This way, a section of the image can be verified at a glance and quickly confirmed or rejected with a single tap. This is especially useful on mobile screens. The behavior is also very similar to the existing task of verifying detected objects, so it's convenient to switch between task types.
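The tiling itself is straightforward grid partitioning. Here is a rough sketch assuming a 3×2 grid (six tiles per image, matching the count mentioned below); the actual grid size and tiling scheme on Mapillary's side may differ:

```python
def tile_image(width, height, cols=3, rows=2):
    """Split an image into a grid of tile bounding boxes.

    Returns (left, top, right, bottom) pixel boxes, row by row.
    The 3x2 grid is an assumption for illustration.
    """
    tile_w, tile_h = width // cols, height // rows
    tiles = []
    for r in range(rows):
        for c in range(cols):
            left, top = c * tile_w, r * tile_h
            # The last column/row absorbs any remainder pixels
            # so the tiles always cover the full image.
            right = width if c == cols - 1 else left + tile_w
            bottom = height if r == rows - 1 else top + tile_h
            tiles.append((left, top, right, bottom))
    return tiles
```

For a 1920×1080 image this yields six 640×540 crops, each small enough to scan in one glance.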
Note that we’ve also made the project page more informative by introducing progress bars for tasks. This way, you can decide to either help get a task across the finish line, or to move the needle on a task that has fewer contributions.
Just like before, you'll get points for doing verifications and compete with other contributors for the top spots on the leaderboard. You may notice that you don't get a point with each click, as in the previous task type. That's because points are awarded per image checked, and since each image is split into smaller tiles, you need to complete all of them (six altogether) to earn the point for that image.
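The scoring rule described above can be sketched in a few lines; this is just an illustration of the "one point per fully checked image" logic, not Mapillary's actual implementation:

```python
from collections import Counter

def leaderboard_points(tile_checks, tiles_per_image=6):
    """Count points from a list of (image_id, tile_index) checks.

    One point per image in which all tiles were checked.
    tiles_per_image=6 follows the number quoted in the text.
    """
    checked = Counter()
    for image_id, tile_index in set(tile_checks):  # ignore repeated checks
        checked[image_id] += 1
    return sum(1 for n in checked.values() if n >= tiles_per_image)
```

So checking five of six tiles of an image contributes nothing yet; the sixth tile completes the image and earns the point.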
How to set up tasks
To set up tasks for spotting missed objects, everything works exactly the same way: you need to have an organization on Mapillary and use your dashboard to set up a verification project. You just pick your object classes and geographic area of interest, and the new task type for spotting missed objects is automatically added to the project.
To improve the way you manage your project, we extended the admin dashboard with additional information. Similar to the public project page, you’ll see a progress bar for each class and task. This way you can see the overall progress across all the classes in your project at a glance.
For each class, you can open up additional details on progress and other statistics, and these, of course, now also include data on the new task type for spotting missed objects.
All projects can be published on the Mapillary Marketplace exactly the same way as before, by choosing the option during the project setup. The new task type will show up for contributors on the public project page as described above.
Note that for projects that were created before this release, we’ve automatically added the new task type for each object class as well, so you don’t need to do anything extra for that.
What happens to the images with missed objects?
The images where the algorithm missed some objects need to be annotated and later fed back to the machine learning algorithms to improve their performance on the new data. We currently do this annotation in-house with professional annotators. In other words, all images that contributors flag as missing annotations are queued up for our annotation teams and used for training the algorithms in an offline process.
We hope you enjoy this update. To try it out, here is a good project to start with, and as mentioned, you can find many more on the Mapillary Marketplace. If you have any feedback or questions, send them our way by emailing firstname.lastname@example.org.
/Till, VP of Product