Updating Maps with Cameras: 42 New Object Classes Now Available as Map Features
Mapillary is on a mission to help fix the world’s maps by making map updates available at scale. The old way of collecting data and editing maps manually is too time-consuming to keep up with how fast cities and roads are changing. This is a growing problem: mapping companies, transportation agencies, and city governments alike are struggling to keep their map data up to date.
With Mapillary, anyone can use simple cameras to collect street-level imagery and generate map data. Our computer vision technology can “see” which objects have been captured in images, which allows us to position them on the map. Mapillary makes this data available to everyone so that people and organizations across the world can use the data to make their maps as detailed and up-to-date as possible.
For some time, Mapillary has been able to detect about 1,500 different types of traffic signs, which has resulted in 18 million traffic signs being automatically placed on the global map. Today we’re adding another 42 object classes to the set of map features that we extract and place on the map. This means fresh data at scale for a wide range of use cases across industries, such as:
- Crosswalks, street lights, and benches for pedestrian mobility;
- Bike racks and bicycle traffic lights for cycling;
- Lane markings, traffic lights, and parking meters for transportation;
- Manholes, utility poles, and trash cans for public works;
- Traffic cones and construction barriers for road maintenance projects.
And many more—you can see the full list of supported object classes here. Altogether, we’ve extracted more than 186 million objects as map features across the 430+ million images that have been contributed to the Mapillary platform from all over the world.
You can see the new types of map features on the Mapillary web app and retrieve them as data files or via the API by getting a subscription through Mapillary for Organizations. Currently, the new object classes are available in beta, under the same subscription plan as traffic signs.
To explore the new map features, open the “Map data” menu and flip the “Points” switch. You can also search for specific features.
Turning on point features in the Map Data menu
Zoom in to a city to see the data appear. If you click on the icon of a map feature on the map, you will also see the images where this particular feature is visible.
Viewing a point feature for a trash can, together with the images that this trash can was detected in
To use the map features for mapping or GIS work, you need to get a data subscription. Just set up an organization on Mapillary, define your area of interest, and choose your subscription plan. You can then access the data via API or download the latest data at any time throughout your subscription period, as many times as you wish.
When retrieving map features as a dataset, you will get the type of the map feature (such as street light, fire hydrant, manhole, etc.) and its estimated latitude and longitude in a standard geospatial format (shapefile or GeoJSON). Learn more about subscriptions and data downloads in our Help Center.
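Since the download comes in a standard geospatial format, it is easy to process with common tools. As a minimal sketch, here is how you might read a GeoJSON download with plain Python and tally features by object class. The property name `value` used for the object class (and the example class names) are assumptions for illustration; check the schema of your actual download.

```python
import json

# Count map features by object class in a GeoJSON FeatureCollection.
# NOTE: the "value" property name and the class strings below are
# illustrative assumptions, not a documented Mapillary schema.

def count_features_by_type(geojson_text):
    """Group point features by their object class and count them."""
    collection = json.loads(geojson_text)
    counts = {}
    for feature in collection["features"]:
        kind = feature["properties"].get("value", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts

# A tiny hand-made FeatureCollection standing in for a real download.
sample = json.dumps({
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [13.0038, 55.6050]},
         "properties": {"value": "object--street-light"}},
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [13.0041, 55.6052]},
         "properties": {"value": "object--trash-can"}},
    ],
})

print(count_features_by_type(sample))
```

The same pattern works for any GeoJSON point layer; for shapefiles you would reach for a GIS library instead of the standard library.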
Understanding map features
With map features, the quality of the outcome depends on both the technology and the input, that is, the captured imagery. In short, we use computer vision to detect objects in images and reconstruct places in 3D. By combining the two, we can estimate the coordinates of each object and make that data available as map features.
Using computer vision, we can detect objects in images. Combining multiple detections, we can estimate the object's position on the map as a map feature.
To estimate the location of an object on the map, the object needs to be detected in two or more images. The positions of the images are then used to calculate the position of the map feature. This means two things:
- The accuracy of the location of the map feature will be influenced by the accuracy of the location of the images. You can help improve accuracy by using a high-precision GPS device when capturing.
- More images in the area mean more data points to triangulate the position of the object more closely. In other words—you can help improve accuracy by capturing images more densely.
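To make the triangulation idea concrete, here is a toy sketch (not Mapillary's actual algorithm) of estimating an object's 2D position from several camera positions and viewing directions. Each observation defines a ray, and we find the point that minimizes the squared perpendicular distance to all rays, which is a small linear least-squares problem. All names and numbers here are illustrative.

```python
import math

def triangulate(rays):
    """Estimate a 2D point from rays given as ((x, y), bearing_radians).

    Solves the least-squares intersection: minimize the sum of squared
    perpendicular distances from the point to each ray's line.
    """
    # Accumulate A = sum(I - d d^T) and b = sum((I - d d^T) @ p).
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), theta in rays:
        dx, dy = math.cos(theta), math.sin(theta)
        # I - d d^T projects onto the direction perpendicular to the ray.
        m11, m12, m22 = 1 - dx * dx, -dx * dy, 1 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two cameras both looking at the point (1, 1): one at the origin facing
# 45 degrees, one at (2, 0) facing 135 degrees.
point = triangulate([((0.0, 0.0), math.pi / 4),
                     ((2.0, 0.0), 3 * math.pi / 4)])
print(point)  # close to (1.0, 1.0)
```

This also illustrates the two points above: an error in a camera position shifts its ray and therefore the estimate, while adding more rays averages out individual errors.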
Since the imagery itself will always play a significant role, we strongly encourage you to consult our Help Center before you start capturing. If you have any questions, just let us know—we’d love to help.
As mentioned, the new map features are currently released in beta. Your feedback will help us prioritize future developments. On the technology side, we’ll focus our efforts on making the detection algorithms ever more accurate and on improving the quality of our 3D reconstruction.
We hope you’ll give the new map features a try—we can’t wait to see how different organizations across the world will use this. If you have any questions or would like to discuss your particular use case, don’t hesitate to get in touch. Here’s to making it as easy as possible to keep maps detailed and up to date!
/Gerhard, Computer Vision Engineer