Upgrading to Vistas 2.0

Today we are making available Mapillary Vistas 2.0, a major semantic annotation upgrade for our street-level image dataset of 25,000 images from around the world. We are increasing the label complexity by almost doubling the number of semantic categories, and we are additionally providing an approximate depth ordering of the objects shown in the scenes.

Introducing the Quality Score - Automated image quality estimation

Today we are announcing the introduction of our machine-learned Quality Score that allows you to filter images on Mapillary by their quality.

Extending Object Detections to Scene Classes

Today, we are announcing the extension of machine-generated detections to scene classes. The new scene classes cover transportation infrastructure such as gas stations, toll stations, and parking lots and will help cities and community groups speed up their mapping efforts.

Learning with Verification: Improving Object Recognition with the Community’s Input

Thanks to our community that verified over half a million machine-generated object detections, we’ve developed an efficient approach to object recognition that helps improve map data quality. We used the verifications to include partially annotated images in our training data, leading to much higher detection accuracy compared to using only fully annotated images. This is a scalable way to get diverse training data for developing models that perform well on real-world object recognition tasks globally.
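The idea of training on partially annotated images can be illustrated with a loss that simply skips unverified pixels. This is a minimal sketch, not Mapillary's actual training code; the ignore value of 255 and the array shapes are illustrative assumptions.

```python
import numpy as np

IGNORE = 255  # assumed marker for pixels without a verified annotation

def masked_cross_entropy(probs, labels, ignore=IGNORE):
    """probs: (H, W, C) softmax outputs; labels: (H, W) class ids,
    with `ignore` marking unverified pixels that should not contribute."""
    mask = labels != ignore
    if not mask.any():
        return 0.0
    # Gather the predicted probability of the ground-truth class only at
    # verified pixels, then average the negative log-likelihood.
    p_true = probs[mask, labels[mask]]
    return float(-np.log(np.clip(p_true, 1e-12, None)).mean())

# Toy example: a 2x2 image with 2 classes and one unverified pixel.
probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.7, 0.3]]])
labels = np.array([[0, 1], [255, 0]])
loss = masked_cross_entropy(probs, labels)
```

Because unverified pixels are excluded from the average, a single community verification turns an otherwise unusable image into a valid training example.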

Introducing the Mapillary Street-Level Sequences Dataset for Lifelong Place Recognition

Today we’re releasing the Mapillary Street-Level Sequences Dataset, the world’s most diverse publicly available dataset for lifelong place recognition. Mapillary Street-Level Sequences is one of our three papers that will be published at CVPR later this year.

New Optical Flow Records using Mapillary’s Five Elements of Flow

In our latest work we reveal five key techniques for improving optical flow prediction — the task of estimating the apparent 2D motion of every pixel between two consecutive video frames. Our findings are the result of carefully analyzing shortcomings in existing work, and they help improve a wide range of methods. We quantitatively and qualitatively surpass the performance of directly comparable works and set new records on challenging optical flow benchmarks.

Achieving New State-of-the-Art in Monocular 3D Object Detection Using Virtual Cameras

We are introducing a new way of doing 3D object detection from single 2D images. The architecture, called MoVi-3D, is a new single-stage design for 3D object detection. Starting from a single 2D image, it uses geometric information to create a set of virtual views of the scene, where detection is performed by a lightweight architecture.
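The virtual-view idea can be sketched in toy form: partition the scene into depth ranges and derive one cropped view per range, so that objects appear at a roughly similar scale in every view. This is only an illustration of the geometric intuition, not the MoVi-3D implementation; all constants (focal length, depth range, reference object height) are assumptions.

```python
def virtual_view_crops(image_height, focal, obj_height=1.5,
                       z_near=5.0, z_far=45.0, n_views=4):
    """Return (depth_range, crop_height_px) per virtual view, sizing each
    crop so a reference object of `obj_height` metres keeps a roughly
    constant pixel size across views (pinhole camera assumption)."""
    views = []
    step = (z_far - z_near) / n_views
    for i in range(n_views):
        z0, z1 = z_near + i * step, z_near + (i + 1) * step
        z_mid = (z0 + z1) / 2
        # Pinhole projection: an object of obj_height metres at depth
        # z_mid spans roughly focal * obj_height / z_mid pixels.
        px = focal * obj_height / z_mid
        # Crop proportionally, so the object covers a fixed fraction
        # of the virtual view (factor 4 is arbitrary for the sketch).
        crop = min(image_height, int(4 * px))
        views.append(((z0, z1), crop))
    return views
```

Running this with a 1024-pixel-high image and a focal length of 720 yields progressively smaller crops for farther depth bins, which is the scale normalization the virtual views are meant to provide.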

Unveiling our Latest Research: Multi-Object Tracking and Segmentation from Automatic Annotations

Access to high-quality training data is one of the most important requirements for pushing the boundaries of machine learning in computer vision. Today we’re unveiling our latest piece of research, where we roll out an entirely new way to generate training data for multi-object tracking and segmentation. The approach turns raw street-level videos into training data of unprecedented quality, even compared to results based on human-annotated data. By allowing machines to generate training data, we can substantially cut the cost of training computer vision models. We validate our approach on multi-object tracking and segmentation and obtain new state-of-the-art results. Here is how.

The Data That Paves the Way: How We’re Building the First Open Dataset for HD Maps

We are building the first open dataset for maintaining and updating HD maps together with Zenuity, AstaZero, RISE, and AI Innovation of Sweden. Together, we will collect map data in a highly controlled environment using low-cost dashcams, lidar, and radar, in an effort to build a cost-effective way of updating HD maps and teaching autonomous vehicles to understand their surroundings through an HD map, even when those surroundings have changed significantly.

Mapillary at ICCV 2019: Unveiling our Latest Benchmark Wins

Mapillary is heading to ICCV for a week packed with activities. Here’s where you will find us.

Towards a driverless future: How Mapillary is teaming up with Siemens to teach streetcars to see in a fully autonomous depot

Mapillary is teaming up with Siemens, Germany’s Federal Ministry of Transport and Digital Infrastructure, and others to make driverless streetcars in a self-operating depot a reality. The project takes place in Potsdam and will, over the course of three years, teach a driverless streetcar to get from A to B with the help of sensor fusion and street-level imagery that Mapillary turns into map data, allowing the streetcar to see.

Announcing the Second Global Verification Challenge

Verification projects help train the algorithms that identify objects in street-level imagery. More verifications mean more accurate detections, and that means better maps for everyone. Join us as we strive to complete one million verifications and compete for cameras and other prizes.

Protecting Privacy in the World of Better Maps: How Collaboration Paves the Way

Roughly 2 million images are uploaded to Mapillary every day. Mapillary’s computer vision algorithms automatically anonymize all images by blurring sensitive information like faces and license plates. Today we’re happy to reveal that our blurring algorithms are the best available for anonymizing street-level imagery. By uploading imagery to Mapillary, you get all the data you need without compromising on privacy.
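Anonymization of this kind boils down to blurring detected regions such as faces and license plates. The following is a minimal, illustrative sketch assuming a detector has already produced bounding boxes; it is not Mapillary's production pipeline, and the box format and kernel size are assumptions.

```python
import numpy as np

def blur_regions(image, boxes, k=9):
    """Anonymize a grayscale image by box-blurring each detected region.
    boxes: iterable of (x0, y0, x1, y1) pixel coordinates."""
    out = image.astype(float).copy()
    pad = k // 2
    for x0, y0, x1, y1 in boxes:
        region = out[y0:y1, x0:x1]
        # Edge padding keeps the region size stable; a direct mean
        # filter keeps the sketch short (separable filters are faster).
        padded = np.pad(region, pad, mode="edge")
        blurred = np.zeros_like(region)
        for dy in range(k):
            for dx in range(k):
                blurred += padded[dy:dy + region.shape[0],
                                  dx:dx + region.shape[1]]
        out[y0:y1, x0:x1] = blurred / (k * k)
    return out.astype(image.dtype)
```

In practice a much stronger blur (or pixelation) would be used so that faces and plates are unrecoverable; the mechanism of detecting boxes and overwriting them in place is the same.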

Winning at CVPR 2019: Mapillary Tops Two Computer Vision Benchmarking Challenges

At CVPR this year, Mapillary won two computer vision benchmarking challenges. We will always keep pushing the boundaries of what is possible in computer vision, and it is our award-winning models that allow us to produce the highest quality map data possible.

Introducing the Mapillary Traffic Sign Dataset for Teaching Machines to Understand Traffic Signs Globally

Today we’re releasing the Mapillary Traffic Sign Dataset, the world’s most diverse publicly available dataset of traffic sign annotations on street-level imagery that will help improve traffic safety and navigation everywhere. Covering different regions, weather and light conditions, camera sensors, and viewpoints, it enables developing high-performing traffic sign recognition models in both academic and commercial research.

Training Machines to Attain a 3D Understanding of Objects from Single, 2D Images

We sit down with Peter Kontschieder, the Director of Research at Mapillary, to talk about “Disentangling Monocular 3D Object Detection”, the latest academic paper to be published by Mapillary’s Research team. Peter tells us how 3D object detections from single 2D images can improve mapmaking and push down the cost of autonomous vehicles, and how the team uncovered a fundamental flaw in the metric used by the most dominant benchmarking dataset in this area.

Introducing Seamless Scene Segmentation: Allowing Machines to Understand Street Scenes Better by Turning Two Models into One

Today we’re announcing that Mapillary will publish four papers at CVPR this year. In this post, we’re looking at the paper named Seamless Scene Segmentation, which, as a world first, introduces a new computer vision model that cuts computing requirements by up to 20% when teaching machines to distinguish people, cars, and map features like traffic signs from their overall environment.

Full Speed Ahead: How Toyota Research Institute is Accelerating its Machine Learning Algorithms with Mapillary

Toyota Research Institute (TRI) is focused on developing state-of-the-art machine learning algorithms for autonomous driving to realize safe and accessible mobility for the future. In this blog post, Jie Li, Research Scientist at TRI, outlines how TRI utilizes the Mapillary Vistas Dataset as a benchmark for driving scene understanding algorithms, providing geometric and semantic diversity at scale.

Doing Way More with Less: Catching up with the Mapillary Research Team

Since 2016 when we opened our AI lab in Graz, the research team has been busy publishing papers, winning benchmarking competitions, and developing the building blocks that power Mapillary. Now we are celebrating the opening of a brand new Graz lab and looking back at how it all came together.

Analyzing Parking Signs at Scale: How Mapillary is Working with Amazon Rekognition to Help US Cities End Their Parking Troubles

Managing parking infrastructure is a billion-dollar problem for cities all across the US. There has been no easy way for cities and Departments of Transportation to access parking sign data, resulting in poor decisions around parking infrastructure and planning. Today, Mapillary and Amazon Rekognition are introducing a scalable way to help US cities get a handle on their parking infrastructure.

Building the Tools to Show Us the Way: How Mapillary is Ramping up Traffic Sign Recognition Globally

We’re releasing an update to Mapillary’s traffic sign recognition, featuring wider support for traffic sign classes globally, improved recognition accuracy, and a new traffic sign taxonomy.

Massive Memory Savings for Training Modern Deep Learning Architectures

Mapillary Research has developed a novel approach to training recognition models to handle up to 50% more training data than before in every single learning iteration. With this technology, we can improve over the winning semantic segmentation method of this year’s Large-Scale Scene Understanding Workshop on the challenging Mapillary Vistas Dataset, setting a new state of the art.

Human in the Loop: Perfecting AI Algorithms

Machine learning needs human input. By creating a loop where human feedback is provided to the output of AI detection algorithms, we can significantly improve the accuracy of the models and the resulting map data.
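The feedback loop described above can be sketched as a simple routing function: each machine detection is shown to a reviewer, and the verdict decides whether it becomes a positive training label or a hard negative for the next training round. The function names, dictionary fields, and confidence threshold below are illustrative assumptions, not Mapillary's actual API.

```python
def review_loop(detections, ask_human, min_confidence=0.5):
    """detections: list of dicts with 'label' and 'score' keys.
    ask_human: callable returning True (detection is correct) or False."""
    confirmed, rejected = [], []
    for det in detections:
        if det["score"] < min_confidence:
            continue  # too uncertain to be worth a reviewer's time
        (confirmed if ask_human(det) else rejected).append(det)
    # Confirmed detections become positive training labels; rejected
    # ones become hard negatives when the model is retrained.
    return confirmed, rejected
```

Each pass through this loop yields cleaner labels, and retraining on them is what makes the detections, and the resulting map data, progressively more accurate.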

Map Data in the Era of Autonomous Driving

The development of autonomous driving places high demands on map data. Compared to collecting map data with advanced equipment, collaborative mapping combined with computer vision is a lower-cost, faster, and more scalable approach.

How to Make Time Travel Happen

The Time Travel feature on Mapillary is great for observing how places change over time. Here's an insight into how it works and what you can do to get more matches between images.

More Accurate Map Data: Improving 3D Reconstruction with Semantic Understanding

Reconstructing a 3D world from 2D images is not as straightforward for a computer as it is for humans, because some objects in the real world are moving. Understanding an image scene through semantic segmentation improves the 3D reconstruction, resulting in more accurate map data and better navigation in the image viewer.
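One way semantic segmentation helps here is by discarding feature points that land on likely-moving objects (pedestrians, vehicles) before they can corrupt the 3D reconstruction. This is a minimal sketch of that filtering idea; the class names and data shapes are assumptions for illustration, not Mapillary's actual reconstruction code.

```python
# Classes whose pixels we assume may move between frames.
MOVING_CLASSES = {"person", "rider", "car", "bus", "truck", "bicycle"}

def filter_static_features(keypoints, segmentation, id_to_name):
    """keypoints: list of (x, y) pixel coordinates;
    segmentation: 2D nested list of class ids per pixel;
    id_to_name: maps a class id to its class name."""
    static = []
    for x, y in keypoints:
        cls = id_to_name[segmentation[int(y)][int(x)]]
        if cls not in MOVING_CLASSES:
            static.append((x, y))
    return static
```

Only the surviving static features are fed to matching and triangulation, so a parked-then-departed car cannot leave a ghost structure in the reconstructed scene.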

Towards Global Traffic Sign Recognition

We are taking a big step towards recognizing traffic signs all over the world by adding support for more than 500 traffic signs globally, together with an appearance-based taxonomy for traffic signs. This is the beginning of our journey of recognizing every road sign in the world, no matter where it is.

Building the MapillaryJS Navigation Graph

In MapillaryJS 2.0 we completely changed the way we retrieve data and build the navigation graph to improve performance. Here is how it works.

How Accurate Is Mapillary, and How Can It Be Improved?

The accuracy with which Mapillary places photos and objects on the map is in the hands of the community, both literally and figuratively. Read on to find out how.