Introducing the Quality Score - Automated image quality estimation

Today we are announcing our machine-learned Quality Score, which lets you filter images on Mapillary by their quality.

Seven years ago, Mapillary started with the audacious vision that street level imagery could be captured by “anyone with any camera” and published on a shared, open platform. This is in stark contrast to closed systems with restrictive licensing and imagery collected by expensive capture vehicles.

One of the most frequently asked questions about our data is centered on the quality of the imagery on the platform. The answer is not trivial, and today, we are happy to announce a new feature that directly addresses this question: Quality Score.

Defining and visualising image quality

One reason it’s hard to answer the question about image quality is that it’s not obvious how to express or define quality. There are of course the “easy” cases of a completely black image, or a very blurry one. But beyond the simplest cases, it is often hard to define whether an image is “bad” or “good”. When you have two high-quality images, it becomes even harder to determine which one is “better”. Furthermore, for street-level imagery, the definition of “quality” may depend on the use case: do you want to visually explore exotic places, or automatically extract benches using AI for mapping purposes? For the former you may be looking for “nice” images at very high resolution, whereas the latter may simply be driven by how well the detection algorithm works on the given type of image.

In spite of these challenges, it is important to gain an overall understanding of the quality of imagery on Mapillary. A quality measure is useful for many applications: surfacing only high-quality images in map editing software, reviewing and auditing image collections, or serving as an overall platform performance metric.

To that end, we defined an image Quality Score: a weighted composite of various image properties.

Calculating the Quality Score

In order to learn and predict properties that affect the image quality rating, we trained neural networks for image classification. This is the same type of task and network that was used for our recently introduced scene class beta.

Our Quality Score combines properties of the image contents that are estimated using neural networks and “static” properties such as the image resolution. To be specific, the following properties are used to calculate a single score per image:

  • Blurriness (motion blur / out-of-focus)
  • Occlusion (camera mount, ego vehicle, water-drops)
  • Windshield reflections
  • Bad illumination (exposure, glare)
  • Bad weather condition (fog, rain, snow)
  • Time of capture (night images)
  • Bad capturing settings (close-up, non-street level)
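To illustrate how per-property estimates might be combined into a single weighted composite, here is a minimal sketch. The property names, weights, and formula below are hypothetical assumptions for illustration only; Mapillary has not published the actual weighting.

```python
# Hypothetical sketch of a weighted composite quality score.
# Each property is a "badness" estimate in [0, 1], e.g. a classifier
# probability from a neural network, or a static check like low resolution.
# Weights are illustrative and do not reflect Mapillary's actual values.
WEIGHTS = {
    "blurriness": 0.25,
    "occlusion": 0.20,
    "reflections": 0.10,
    "bad_illumination": 0.15,
    "bad_weather": 0.10,
    "night": 0.10,
    "bad_capture_settings": 0.10,
}

def composite_quality(properties: dict) -> float:
    """Return a continuous quality value in [0, 1]; higher is better."""
    badness = sum(WEIGHTS[name] * properties.get(name, 0.0) for name in WEIGHTS)
    return 1.0 - badness

# A sharp daytime image scores higher than a blurry, partly occluded night shot.
sharp_day = composite_quality({"blurriness": 0.05, "night": 0.0})
blurry_night = composite_quality({"blurriness": 0.9, "night": 1.0, "occlusion": 0.4})
```

Because the weights sum to 1 and each property lies in [0, 1], the composite stays in [0, 1], which makes it straightforward to bin into discrete levels afterwards.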

The resulting score is published as five discrete values (1 to 5, with 5 being the best quality) in our web application and allows filtering the imagery by quality:

[Screenshot: the image viewer with the quality score filter set to 5]

[Screenshot: quality score filter set to 2]

[Screenshot: quality score filter set to 4]
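Mapping the continuous score onto five discrete levels and filtering by a minimum level could look like the sketch below. The uniform binning thresholds and the image record fields are assumptions; the actual binning is not published.

```python
def to_discrete_score(quality: float) -> int:
    """Map a continuous quality in [0, 1] to a discrete score 1-5 (5 = best).
    Uniform-width bins are an illustrative assumption."""
    quality = min(max(quality, 0.0), 1.0)  # clamp defensively
    return min(int(quality * 5) + 1, 5)    # 1.0 still maps to 5, not 6

def filter_images(images, min_score):
    """Keep only images whose discrete quality score meets the threshold,
    mirroring the quality filter in the web application."""
    return [img for img in images if to_discrete_score(img["quality"]) >= min_score]

# Hypothetical image records with continuous quality values.
images = [
    {"id": "a", "quality": 0.95},
    {"id": "b", "quality": 0.40},
    {"id": "c", "quality": 0.72},
]
best = filter_images(images, min_score=4)  # keeps "a" (score 5) and "c" (score 4)
```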

Next steps

Moving forward, we plan to set up verification projects that collect feedback data so we can re-train the prediction models and further improve accuracy. Another direction is to train an end-to-end Quality Score prediction model on a curated dataset.

We would like to thank our community once again for contributing the images that made this capability possible. We look forward to your feedback and will keep working towards better quality estimation for scalable mapping with street-level imagery.

/ Till & Christian
