Advancing Map Data Extraction with Line Features
Every image in the Mapillary database gets semantically segmented—we detect 97 different classes of objects in the images and provide a way to visualize the results. You can search for different detections and locate the images that contain them. While this is useful for narrowing down visual inspection tasks, what we really want is to display not the locations of the images but the locations of the objects themselves. We already do this for traffic signs today: by detecting the same sign in multiple images, we can triangulate its position and place it on the map as a point feature.
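To make the triangulation idea concrete, here is a minimal sketch of recovering an object’s position from several observations. It assumes each image contributes a camera position and a viewing direction toward the object (both hypothetical inputs here); the actual pipeline is considerably more involved.

```python
# Minimal sketch: least-squares triangulation of a point from several viewing rays.
# camera_positions and view_directions are hypothetical inputs; in practice they
# would come from reconstructed camera poses and per-image detections.
import numpy as np

def triangulate(camera_positions, view_directions):
    """Return the 3D point closest (in the least-squares sense) to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(camera_positions, view_directions):
        d = d / np.linalg.norm(d)          # normalize the viewing direction
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Three cameras all looking at a sign located near (10, 5, 2)
cams = np.array([[0.0, 0.0, 1.5], [5.0, -2.0, 1.5], [8.0, 8.0, 1.5]])
dirs = np.array([10.0, 5.0, 2.0]) - cams   # exact directions for the example
print(triangulate(cams, dirs))             # -> approximately [10.  5.  2.]
```

The more images that observe the same object, the more robust this least-squares estimate becomes to noisy detections, which is why repeated coverage matters.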
Today, we are making a new group of map features available in the same way. The common characteristic is that the real-life objects they represent are line-shaped, and as a result they are depicted as lines on the map. Some examples of these line features are lane markings, rail tracks, and curb cuts (see here for the full list).
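Conceptually, each line feature is a polyline on the map with an object class attached. As a rough illustration, here is what one might look like encoded as GeoJSON; the property names and class identifier below are hypothetical, not Mapillary’s actual schema.

```python
# Hypothetical example of a line feature as a GeoJSON Feature; the
# "object_class" value and the property names are illustrative only.
import json

lane_marking = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        # (longitude, latitude) pairs along the detected lane marking
        "coordinates": [[13.3888, 52.5170], [13.3892, 52.5173], [13.3897, 52.5176]],
    },
    "properties": {
        "object_class": "lane-marking",
    },
}

print(json.dumps(lane_marking, indent=2))
```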
Line features advance the ability to rapidly create map data and are useful for a number of purposes, for example:
Cities and citizen engagement organisations that want to improve mobility can identify e.g. sidewalks, bike lanes, and curb cuts.
Transportation agencies and departments that need to catalogue and maintain road assets can get an overview of e.g. guardrails and traffic islands.
Map makers can easily add data such as lane markings and parking that can be used in navigation and guidance by both humans and machines.
Viewing line features
Line features can be viewed on the Mapillary web app. Along with adding this view, we’ve rearranged the interface a bit so that all extracted map data can now be accessed from the same menu at the top of the app.
After turning line features on, you can search for specific object types to be displayed on the map. You’ll notice the regular Mapillary sequence overlay in the background in a lighter grey color. When you click on any of the image markers around the line feature, you’ll also be able to see the detections of the respective object highlighted in the images (as a result of semantic segmentation).
Where do we go from here?
Making more object classes available as map features, not just detections in images, means that you can use more map data to speed up workflows that include locating real-life objects on the map. For instance, you can use them for OpenStreetMap editing (we plan to make line features available in JOSM and iD editor in the same way as traffic signs are today).
Line features are matched against existing map data. The features currently available on Mapillary have been aligned with the OpenStreetMap road network, but our processing system has been built so that it can match against any map data. This means we will be able to extract line features aligned to any road network data that you provide.
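As a rough illustration of what aligning to a road network involves, the sketch below snaps a detected polyline onto the nearest road geometry using Shapely. This is a deliberate simplification, not the actual processing system: a production pipeline also has to handle intersections, candidate selection, and lateral offsets, which this example ignores.

```python
# Simplified sketch of aligning a detected line to the nearest road segment.
# The road geometries here stand in for a reference network such as
# OpenStreetMap ways; real map matching is considerably more involved.
from shapely.geometry import LineString, Point

roads = [
    LineString([(0, 0), (10, 0)]),
    LineString([(0, 5), (10, 5)]),
]

# A detected line feature, slightly offset from the true road geometry
detected = LineString([(1, 0.4), (5, 0.6), (9, 0.5)])

# Choose the closest road, then project each vertex of the detection onto it
nearest_road = min(roads, key=lambda road: road.distance(detected))
aligned = LineString([
    nearest_road.interpolate(nearest_road.project(Point(x, y)))
    for x, y in detected.coords
])

print(aligned)  # LINESTRING (1 0, 5 0, 9 0)
```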
The processing pipeline can be applied to any appropriate semantic class—which in this case means line-shaped real-life objects. We intend to make more classes available over time, and once we start providing custom detections, we will also be able to derive map features from them. As a result, we can provide you with virtually any kind of line feature that you need.
/Andrew