Managing parking infrastructure is a billion-dollar problem for cities all across the US. With no easy way for cities and Departments of Transportation to access parking sign data, decisions around parking infrastructure and planning have suffered. Today, Mapillary and Amazon Rekognition are introducing a scalable way to help US cities get a handle on their parking infrastructure.


Today we’re announcing our work with Amazon Rekognition to help US cities put an end to their parking woes. Amazon Rekognition is a deep learning-based image and video analysis service that we’re now working with to extract parking sign data from the 360 million images that have been contributed to the Mapillary platform.

Our traffic sign recognition algorithm detects 1,500 classes of traffic signs. By combining this with Amazon Rekognition's Text-in-Image feature, we can now extract the text from detected parking signs across the US, adding further important detail.

[Image: The full workflow of generating the parking sign data, demonstrating the interaction between Mapillary's traffic sign recognition and Amazon Rekognition's Text-in-Image feature]

Here’s how it works: when contributors upload images to the Mapillary platform, our award-winning semantic segmentation model automatically finds parking signs within the images. Once a sign is detected, Amazon Rekognition’s Text-in-Image feature extracts the text within it. The result is an automated, computer vision-powered solution to the problem of managing parking sign data.
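The text-extraction step described above can be sketched with the AWS SDK for Python (boto3). This is a minimal illustration, not Mapillary's actual pipeline: the function names and the confidence threshold are hypothetical, while Rekognition's `detect_text` operation and its response shape are part of the public AWS API.

```python
def parse_text_detections(response, min_confidence=80.0):
    """Join the LINE-level detections from a Rekognition DetectText
    response into a single string, in the order they were returned.
    Low-confidence detections are dropped."""
    lines = [
        d["DetectedText"]
        for d in response.get("TextDetections", [])
        if d["Type"] == "LINE" and d["Confidence"] >= min_confidence
    ]
    return "\n".join(lines)


def extract_sign_text(image_bytes):
    """Send a cropped parking-sign image to Amazon Rekognition's
    Text-in-Image feature and return the recognized text.
    Requires AWS credentials to be configured."""
    import boto3  # imported lazily so the parser above works without AWS

    client = boto3.client("rekognition")
    response = client.detect_text(Image={"Bytes": image_bytes})
    return parse_text_detections(response)
```

Here `image_bytes` stands in for the parking-sign crop produced by the detection step; Rekognition returns both WORD- and LINE-level detections, and keeping only the LINE entries avoids duplicating each word twice in the output.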

[Animation: Parking sign detected and decoded by Mapillary’s traffic sign recognition and Amazon Rekognition’s Text-in-Image feature]

This solves a huge problem for cities and DOTs all across the US, as there has been no easy way to access parking sign data. City authorities have typically gone out on foot to capture images of parking signs before analyzing them manually. This is, of course, not a scalable way of doing things, which is why comprehensive parking data is unavailable in many areas of the world.

As a result of these unscalable methods, parking is in a terrible state across much of the US. In Washington DC, one to two reports on conflicting parking signs are made daily, with each report taking city officials four months to address. Meanwhile, American drivers waste $73 billion annually in time and fuel looking for parking spots. New York drivers, for instance, spend more than four days annually looking for parking.

With the work we’re doing with Amazon Rekognition, cities across the US can now get a comprehensive understanding of their parking infrastructure through the Mapillary platform. What used to take months or years can now be done in an automated, computer vision-driven way.

We launched Mapillary for Organizations earlier this year to give GIS and city planning teams the ability to subscribe to map data in an automated way. Our work with Amazon Rekognition means we're making significant headway to include parking sign data as part of that offering.

Parking signs are just the first step - we’re going to apply text recognition to more of the objects that we detect in images. This will enrich the data that we extract from imagery contributed to Mapillary’s platform. Stay tuned!

/Jan Erik

If you’re interested in buying parking sign data, contact us here.
