Visualizing AI Detections for Improved Map Editing

Our new AI detections feature lets you quickly find images where objects have been automatically detected, making it easier to use Mapillary imagery to edit maps and geospatial datasets.

All the imagery that's uploaded to Mapillary gets processed with computer vision—the field of AI that deals with visual comprehension. We use this technology to spatially relate images from different contributors, understand what's in those images, and automatically generate map data. We're convinced that computer vision is key to making maps at scale in the future.

An important part of the computer vision we apply to our imagery is semantic segmentation: the process of classifying each pixel of an image into one of a set of classes that denote different objects in the real world. That is, for each pixel in an image, we determine and record what it depicts. The output of semantic segmentation tells us what each contiguous group of pixels represents; we call each such group an AI detection. Today, we segment images into 97 semantic classes.
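To make this concrete, here is a minimal sketch of the idea in Python. The per-pixel class scores are randomly generated stand-ins for the output of a real segmentation network (Mapillary's actual models and pipeline are not shown here); the argmax and connected-components steps illustrate how pixels become labeled regions, i.e. AI detections.

```python
import numpy as np
from scipy import ndimage

# Hypothetical per-pixel class scores standing in for the output of a
# segmentation network. Shape: (height, width, num_classes).
height, width, num_classes = 4, 6, 3
rng = np.random.default_rng(0)
scores = rng.random((height, width, num_classes))

# Semantic segmentation: assign each pixel the class with the highest score.
label_map = scores.argmax(axis=-1)  # (height, width) array of class ids

# Each contiguous group of same-class pixels is one "AI detection".
for class_id in range(num_classes):
    mask = label_map == class_id
    _, num_detections = ndimage.label(mask)
    print(f"class {class_id}: {num_detections} detection(s)")
```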

The output of semantic segmentation can be rendered as color-coded areas overlaid on the image, where each contiguous colored area represents a different AI detection.
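As a rough illustration, such an overlay can be produced by alpha-blending a color-coded label map over the photo. The toy image, label map, and palette below are invented for the example; only the blending idea carries over.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy stand-ins for a real photo and its per-pixel segmentation labels.
rng = np.random.default_rng(1)
image = rng.random((64, 64, 3))
label_map = (np.arange(64)[:, None] // 16 + np.arange(64) // 32) % 4

# One color per class; alpha-blend the color-coded areas over the image.
palette = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
overlay = 0.6 * image + 0.4 * palette[label_map]

plt.imshow(overlay)
plt.axis("off")
plt.show()
```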

AI detections visualization

Visualizing AI detections on the map

Over the past couple of months, we've segmented all of our more than 168 million images. Since every image is geotagged, we can now visualize on a map the types of objects that have been detected in each image. This geographic visualization of the segmentation data is the key element of our newly released AI detections feature.

AI detections feature

The AI detections feature allows you to view the locations of all segmented imagery on the map and filter for images that contain a segment of a specified class. For example, you can filter the map to show the locations of images that contain a segment labeled as a crosswalk. This makes it easy to get an overview of the regions where these segments occur, and then use the imagery from those regions to edit maps and geospatial datasets.
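Conceptually, the filtering boils down to keeping only those geotagged images whose segmentation contains the requested class, then plotting the survivors on the map. The records and field names below are hypothetical, not the actual Mapillary data schema.

```python
# Hypothetical records for geotagged, segmented images; the field names
# are illustrative, not the real Mapillary API schema.
images = [
    {"id": "a", "lat": 55.60, "lon": 13.00, "classes": {"crosswalk", "car"}},
    {"id": "b", "lat": 55.61, "lon": 13.01, "classes": {"tree"}},
    {"id": "c", "lat": 55.59, "lon": 12.99, "classes": {"crosswalk"}},
]

def filter_by_class(records, wanted):
    """Keep only the images whose segmentation contains the wanted class."""
    return [r for r in records if wanted in r["classes"]]

for img in filter_by_class(images, "crosswalk"):
    print(img["id"], img["lat"], img["lon"])  # locations to show on the map
```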

Once you've found an image on the map that contains a segment of interest, clicking on it opens the image view, where the segmentation is overlaid and any classes you filtered for are highlighted, giving you a better visual reference.

AI detections image view

What's next

Visualizing semantic segmentation on the map is an important step along the way to automatically detecting map objects. Keep in mind that AI detections visualize the locations of the images in which objects have been identified, not the locations of the objects themselves. By triangulating the positions of objects that have been detected in several images, we can generate map objects that, unlike AI detections, represent the actual locations of those objects in space.
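To sketch the triangulation idea (a simplified geometric illustration, not Mapillary's actual pipeline): each image that detects an object defines a ray from the camera position toward the object, and the point that comes closest to all of those rays in a least-squares sense estimates the object's position.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to a set of 3D rays.

    Each ray runs from a camera position (origin) toward a detected
    object; the near-intersection estimates the object's location.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        # Measure distance perpendicular to the ray by projecting
        # out the ray direction.
        M = np.eye(3) - np.outer(d, d)
        A += M
        b += M @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two cameras that both detected the same object near (5, 5, 0).
origins = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
directions = [(1.0, 1.0, 0.0), (-1.0, 1.0, 0.0)]
print(triangulate(origins, directions))  # ~ [5. 5. 0.]
```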

Today, we can already do this for traffic signs, and we're hard at work on creating map objects for more of our 97 classes of detected objects. While AI detections make it easier to edit maps using street-level imagery, perhaps more importantly, they pave the way towards large-scale automatic generation of map data.

Meanwhile, we will keep improving the accuracy of the AI detections. We also intend to bring semantic segmentation data to our integrations with OpenStreetMap editors, ArcGIS, QGIS, and other tools.

For more detailed instructions on how to use the AI detections feature, take a look at our help centre. And as always, we'd love to hear from you! Please drop us a line if you have any questions or suggestions.

/Andrew
