The Future of Maps for Mobility (Presentation from Geography2050)

Take a look at the Mapillary presentation from the American Geographical Society's Geography2050 event.

While high-resolution satellite imagery is the foundation of digital mapping, the demands of urban mobility require highly accurate, frequently updated data from a different vantage point: the ground. Recent advances in street-level imagery collection and data extraction are fueling new trends in location-based services, smart cities, and autonomous vehicles. As map technologists push the boundaries of machine intelligence for extracting data from images, human collaboration will drive the creation of maps for mobility for all.

Click through the slideshow below or view it on SlideShare.

  • This is a street in Hong Kong, seen through the eyes of a machine that precisely distills the image into segments that make sense: car, road, pedestrian, traffic light, crosswalk. This is called semantic segmentation, and it's today's computer vision technology (see the segmentation sketch after this list).

  • Back in 1956, we imagined cars driving themselves along embedded electronic tracks on roads. We’ve done one better. We’ve instead started giving cars eyes—a broad array of sensors—to see and translate what's around them into maps so they can drive themselves.

  • But in the future, how do we create maps from 700 trillion images captured per day? Furthermore, how do we create maps that are accurate, complete, and fresh, so that they’re effective at the very moment your car is turning into a bike lane that wasn’t there yesterday?

  • What’s brought us here is what will get us there: great strides in computer vision combined with rapid improvements in cameras and GPS hardware. We all know the smartphone as a means to consume maps, but it’s also a tool to create maps.

  • Imagine walking along, holding your phone up, taking pictures continually, much like a sensor on a vehicle. The rectangular frames represent the position of the camera based on GPS data; from matched points we can create a 3D reconstruction of the scene. This means that from imagery alone, we can tell you the location of that tree in the park (see the triangulation sketch after this list).

  • But how does the car know it's a tree? How does it know it's a park? From where we started in Hong Kong, we now go to Moscow, where we again see semantic segmentation in action.

  • The combination of these two processes, 3D reconstruction and semantic segmentation, gives us what we need to automatically generate map data, fast. Two examples: sidewalk density and traffic signs in San Francisco, both generated within minutes of the images hitting the cloud (see the feature-extraction sketch after this list).

  • To see a larger-scale example of this, we go to Amsterdam. Last June, the City of Amsterdam released 800,000 high-resolution 360° images of the whole city. We stitched these into a dense 3D point cloud with segmentations colored in.

  • So now we have a wealth of information extracted from the imagery: from bikes and bridges to trash bins and traffic signs. This was all processed within days. Not quite real time, but that's where computer vision is heading.

  • The second driver of new maps is collaboration. OpenStreetMap is the heart of collaborative mapping, and we're seeing its spirit spread beyond the project itself as commercial maps and automotive OEMs realize that the world is too big to take on alone. But how do we get different sources and formats of imagery to play nice?

  • We believe that a sensor-agnostic approach is a difficult but worthy computer vision challenge. If we support any kind of sensor on any vehicle or if we empower anyone, anywhere, with any device, we’re essentially scaling up the effort to build the most complete map of the world.

  • Our proof that this can work is in our numbers: over 200 million images amassed on Mapillary from different sources today. To visually demonstrate our reach, let's have a look at my favorite: a view from the Rothera Research Station in Antarctica.

  • But let's bring this back to urban mobility. Here's a look at the tool HERE Maps uses to review Mapillary imagery for map editing, on top of the data they collect with their own fleet of street-view cars. Why?

  • …to get more data. When HERE Maps uses Mapillary imagery, they get access to data contributed by others: cities using Esri's suite of GIS products, OpenStreetMappers, and other commercial mapping players.

  • The relationship is two-way. Contributions from HERE’s community are shared back: take a look at this swell of data since we started working with HERE in Astana, Kazakhstan. This data comes back to OpenStreetMap and other platforms to improve maps everywhere.

  • Maps everywhere should also mean maps for everyone. While all eyes are trained on news coming out of the automotive industry, the last point I’ll make is about how communities and individuals drive us to address mobility challenges beyond cars.

  • At Mapillary, we’ve learned that individuals are motivated by specific passions and interests as they map their world: local pride, bike safety, infrastructure improvements.

  • Events like last year's pedestrian walk in Mexico City epitomize this passion: able-bodied participants came together with people with limited mobility to traverse the streets in wheelchairs.

  • The collected imagery contains map data about infrastructure such as curb cuts, sidewalks, and crosswalks.

  • While I’m lucky to work in the company of those who push the boundaries of machine intelligence every day, I’m constantly reminded that the future of mobility is in the hands and hearts of humans creating effective and equitable maps together.
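For readers who want to see what the segmentation step looks like in practice, here is a minimal sketch using an off-the-shelf torchvision model. It is pretrained on the general-purpose Pascal VOC classes rather than a street-level taxonomy, and the input file name is a stand-in; Mapillary's production models are not shown here.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Load a general-purpose segmentation network pretrained on Pascal VOC.
model = deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("street.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(image).unsqueeze(0)            # shape [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]                  # shape [1, 21, H, W]

# Per-pixel class index; in the VOC taxonomy, 7 = car and 15 = person.
labels = logits.argmax(dim=1).squeeze(0)          # shape [H, W]
print("pixels labelled as car:", int((labels == 7).sum()))
```

A production street-level model would be trained on a much richer taxonomy; Mapillary's Vistas dataset, for example, includes classes such as traffic signs and lane markings.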
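The triangulation sketch: structure-from-motion matches points across overlapping images and recovers their 3D positions. The toy example below triangulates a single matched point from two camera views; the camera matrices and pixel coordinates are invented for illustration, and this is not Mapillary's pipeline (their open-source OpenSfM library implements the full thing).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices (from camera pose + intrinsics).
    x1, x2: (u, v) pixel coordinates of the matched point in each image.
    Returns the point's 3D location in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize

# Two cameras one metre apart along x, both looking down +z (made-up values).
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = triangulate(P1, P2, (740, 360), (640, 360))
print(point)  # approximately [1.0, 0.0, 10.0]: the "tree" is ~10 m ahead
```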
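And the feature-extraction sketch: once every reconstructed point carries both a 3D position and a semantic label, deriving map features reduces to filtering and clustering. The data below is synthetic and the recipe deliberately generic; it illustrates the idea, not Mapillary's actual system.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic stand-in for pipeline output: reconstructed 3D points (metres,
# local frame) tagged with the semantic label of their source pixels.
# Two signs roughly 20 m apart, plus unrelated road points.
sign_a = rng.normal([5.0, 2.5, 0.0], 0.1, size=(40, 3))
sign_b = rng.normal([25.0, 2.8, 1.0], 0.1, size=(40, 3))
road = rng.uniform(-5, 30, size=(200, 3))

points = np.vstack([sign_a, sign_b, road])
labels = np.array(["traffic-sign"] * 80 + ["road"] * 200)

# Keep only the points whose pixels were segmented as traffic signs...
sign_points = points[labels == "traffic-sign"]

# ...and cluster them: points within half a metre belong to the same sign.
clustering = DBSCAN(eps=0.5, min_samples=5).fit(sign_points)
for cid in sorted(set(clustering.labels_) - {-1}):      # -1 marks noise
    centroid = sign_points[clustering.labels_ == cid].mean(axis=0)
    print(f"traffic sign {cid}: position ~ {np.round(centroid, 2)}")
```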


