Mapping Your Perspective: Mapillary and Viewshed in ArcGIS Online
This is the first of two blog posts that teach you how to combine the power of viewshed analysis and Mapillary to give a more complete idea of what is visible from a given point on the map.
You already know that photos can be geotagged. But the geotag only shows the location of the camera. Imagine instead if you could geotag a photo in another sense—if you could show on the map what area the photo depicts. With the viewshed analysis for ArcGIS Online, this is possible.
Mapillary photos provide a street-level view of the world, and are powerful building blocks for a variety of maps. Each photo’s EXIF data contains small but significant pieces of information, including the geographic coordinates of capture, the time of capture, camera make, and camera bearing. These can be used to transform a simple photograph into something more informative about the environment in which it was taken, which is the premise of Mapillary’s computer vision technology.
Esri’s ArcGIS software has long included a tool called viewshed, which similarly takes a single point on a map and adds more meaning to its location. Viewshed compares the location of any point to a digital terrain model (DTM) or digital elevation model (DEM), determining the point’s altitude and then mapping which parts of the surrounding landscape are visible when standing at that precise point. Viewshed even allows the user to set the height of a potential viewer, so you can determine whether the point is located at ground level or perhaps in a watchtower at 5 meters’ elevation. Viewshed is also available through ArcGIS Online, and allows for quick calculations by clicking on an input point.
Combining the capabilities of Mapillary and Esri makes for an improved version of viewshed. The Mapillary API provides access to Mapillary data in the form of linestrings and points. The points from Mapillary can be used to estimate a viewshed, thus showing a map layer that indicates which zones are visible on the map as well as in the photo. In this experiment, our workflow will consist of the following:
1. Query the Mapillary API for GeoJSON Points
2. Put our GeoJSON into ArcGIS Online as a point feature layer
3. Run the Viewshed Analysis tool with Mapillary points as inputs
4. Compare the results to the content of each Mapillary image
A query to the API using a bounding box for Santa Catalina Island returns a GeoJSON with 100 points, each representing an image. We can test the viewpoint from each image, allowing us both to see a photograph of what a person standing at that position would see, and to see on the map which locations are actually visible.
https://a.mapillary.com/v3/images?bbox=-118.3653259277343,33.309298673690044,-118.3014678955078,33.36035566675374&client_id=<your_client_id>&pano=true
The above API call sets a bounding box based on the lower left corner and top right corner (minx,miny,maxx,maxy). You need to get a client ID (see more information here) and replace the <your_client_id> text with the ID code. Setting pano=true returns only panoramic images.
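Since we will vary these query parameters throughout the post, it can help to build the URL programmatically. Below is a minimal sketch in Python; the helper function name and structure are my own, not part of any Mapillary SDK, and you still need to supply your own client ID:

```python
from urllib.parse import urlencode

def mapillary_images_url(client_id, **params):
    """Build a Mapillary v3 image-search URL.

    Accepts the query parameters used in this post, e.g.
    bbox="minx,miny,maxx,maxy", pano="true", closeto="lon,lat",
    radius=200, per_page=1.
    """
    base = "https://a.mapillary.com/v3/images"
    query = {"client_id": client_id, **params}
    # safe="," keeps the bbox/closeto coordinate lists readable in the URL
    return base + "?" + urlencode(query, safe=",")

url = mapillary_images_url(
    "<your_client_id>",
    bbox="-118.3653259277343,33.309298673690044,-118.3014678955078,33.36035566675374",
    pano="true",
)
```

The same helper covers the closeto call used later in this post by passing `closeto`, `radius`, and `per_page` instead of `bbox`.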
The parameter for returning panoramic images is critical to using viewshed correctly: viewshed assumes the observer at a given point is going to be looking in all directions, so our 360-degree panoramas match this assumption. Non-pano photos could be facing only a certain angle, which would require us to do a more advanced calculation of viewshed limited to a field of view matching the camera make and bearing. In short, using the 360-degree panoramic images keeps it simple for us.
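If a dataset mixes panoramas with regular photos, each returned feature also carries the same pano flag in its properties, so you could filter client-side as well. A small sketch, where the helper name is my own:

```python
def panoramic_features(feature_collection):
    """Return only the features flagged as 360-degree panoramas.

    Each Mapillary image feature carries a boolean 'pano' property
    alongside 'key', 'ca' (camera bearing), and so on.
    """
    return [f for f in feature_collection.get("features", [])
            if f["properties"].get("pano")]

# Tiny hypothetical FeatureCollection to illustrate the shape of the data
sample = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"key": "a", "pano": True},
         "geometry": {"type": "Point", "coordinates": [0.0, 0.0]}},
        {"type": "Feature", "properties": {"key": "b", "pano": False},
         "geometry": {"type": "Point", "coordinates": [1.0, 1.0]}},
    ],
}
panos = panoramic_features(sample)
```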
The page returned by the API call should be saved to your desktop, and it should have a .geojson file type. Import the GeoJSON into ArcGIS Online by clicking Add Item next to the plus sign, and selecting the file you saved.
Next we will add this to a web map by clicking Open in Map Viewer, followed by Add layer to new map. This will load the points onto the map, then prompt us to edit the styling as well.
For the styling, I chose to display key, which is the unique image key for each point. We can use this later to reference the 360° image that is taken at the vantage point we’re examining. I also chose the Location (Single symbol) drawing style, as I don’t have any need to differentiate images.
Next, we will perform an analysis, which specifically means we’ll apply viewshed to our data. The icon for this is beside our dataset name.
Select Find Locations under the analysis categories, and Create Viewshed will appear as an option.
After selecting Create Viewshed, there are several parameters we can set. I select 7 feet as the height of the observer, because our 360° camera was mounted on a selfie stick and held overhead. I left the maximum viewing distance at the default of 9 miles. I also unchecked Use current map extent, as I don’t want to limit the analysis to only this section of the map.
Clicking Run Analysis will process our input points and create the new layer showing which areas are visible. The result indicates the sum total of the areas visible from all our points combined, rather than from any particular point.
In the next steps, we are going to build a more precise version of what we’ve just done. We’ll choose a single point instead of a group of points, so we can examine the exact correlation between what’s visible in the photo and what viewshed says we can see. For this, we want to be as specific as possible in choosing a photo by making a new API call. We will use the closeto API call, which searches for the nearest image to a given set of geographic coordinates. I decided to center the map view near the Hermit Gulch campground, which lies to the southwest of the town of Avalon along the main road. The coordinates of the point I measured appear near 118.35W, 33.33N. I chose this point because when mapping the island, we had taken a hike up onto the steep hillsides which had great views of the surrounding area.
These coordinates can be put into the closeto API call, but we must remember that the longitude of 118.35W means west of the Prime Meridian, so for our API input this will be a negative value of -118.35, while the latitude of 33.33N remains a positive value, 33.33. The API call requires these in the form longitude,latitude, as well as our client ID. The default search radius is 100 meters, but because our center-of-screen coordinate is approximate, let’s expand the radius to 200 meters with radius=200. And we only want one result, not multiple, so we will add per_page=1. The complete API call will look like:
https://a.mapillary.com/v3/images?closeto=-118.35,33.33&client_id=<your_client_id>&radius=200&per_page=1
This API call returns another GeoJSON to us, which looks like this:
{"type":"FeatureCollection","features":[{"type":"Feature","properties":{"ca":319.87,"camera_make":"LG Electronics","captured_at":"2016-10-01T08:15:04.000Z","key":"52huw-vODz5Yv86NNubtfA","pano":true,"user_key":"Q6aJsEtjqHAEDE0oMFc9Wg"},"geometry":{"type":"Point","coordinates":[-118.35143682055555,33.32907203361111]}}]}
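The pieces we care about here, the unique image key, the precise capture coordinates, and the ca camera bearing, are easy to pull out of this response with Python’s standard library:

```python
import json

# The FeatureCollection returned by the closeto call above
response_text = '''{"type":"FeatureCollection","features":[{"type":"Feature",
"properties":{"ca":319.87,"camera_make":"LG Electronics",
"captured_at":"2016-10-01T08:15:04.000Z","key":"52huw-vODz5Yv86NNubtfA",
"pano":true,"user_key":"Q6aJsEtjqHAEDE0oMFc9Wg"},"geometry":{"type":"Point",
"coordinates":[-118.35143682055555,33.32907203361111]}}]}'''

feature = json.loads(response_text)["features"][0]
key = feature["properties"]["key"]      # unique image key
bearing = feature["properties"]["ca"]   # camera bearing, degrees from north
lon, lat = feature["geometry"]["coordinates"]
```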
We can’t add this layer to our map directly, because the .geojson format is only accepted when added from My Content. We’ll add the new file just like before to a new map, and run the same analysis on it as a single point. I also set the styling to be the same as before, with the key attribute displayed.
Now that we have a single photo’s field of view, we can compare it to the photo itself. You can view the photo on Mapillary’s website, or start exploring below.
From the map, it may seem odd that the viewshed analysis indicates limited visibility near the point, but inspecting the photo shows that steep ledges and other terrain do indeed obscure the observer’s view. We can also see the valley down below, several mountaintops, and a view all the way to the blue sea beyond the town of Avalon.
To explore further, you can add as many points as you are curious about, save the map, then load it into Mapillary for ArcGIS Online. In our application, you can quickly pull up the imagery near the points of interest and compare it to your viewshed layers.
A more advanced next step would be to match the viewshed layer to the field of view of a given Mapillary image, even if it isn’t 360 degrees. The images we worked with in this experiment have a 360-degree field of view, but a different image may only cover a 60-degree field of view, so we’d have to slice the viewshed layer to show only the direction the camera faces.
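One way to sketch that slicing: treat the ca bearing from the image properties as the center of the camera’s horizontal field of view, and test each location against that angular window. This is a simplified flat-earth sketch of my own, not part of the ArcGIS tooling; a production version should also account for longitude scaling by the cosine of the latitude:

```python
import math

def within_field_of_view(obs_lon, obs_lat, pt_lon, pt_lat,
                         bearing_deg, fov_deg):
    """True if a point falls inside the camera's horizontal field of view.

    bearing_deg is degrees clockwise from north, like the 'ca' value in
    Mapillary's image properties. Uses a flat-earth approximation, which
    is a reasonable shortcut at viewshed scales away from the poles.
    """
    dx = pt_lon - obs_lon
    dy = pt_lat - obs_lat
    # atan2(dx, dy) yields the bearing to the point, clockwise from north
    angle = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest angular difference between the two bearings
    diff = abs((angle - bearing_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

For example, a 60-degree camera pointed due north (bearing 0) would keep a viewshed cell due north of the observer and drop one due east of it.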
More developments on this method will be covered in an upcoming blog post, where we will use the Esri JS API to calculate a viewshed for each and every 360° image, on the fly. We will also demonstrate how to add Mapillary to a popup in the Esri JS API, and add the Mapillary vector tiles. Using these, we can truly combine the power of viewshed and Mapillary to give a more complete idea of what is visible from a given point on the map, both in terms of geography and imagery.