Denser 3D Point Clouds in OpenSfM

We've improved OpenSfM—the technology we use to create 3D reconstructions from images. By adding a post-processing step, we get denser 3D point clouds, resulting in better visualization, positioning, and much more.

At Mapillary we build and use OpenSfM to find the relative positions of images and create smooth transitions between them. That process is called Structure from Motion. It works by matching a few thousand points between images and then simultaneously figuring out the 3D positions of those points and the positions of the cameras.
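
To make this concrete, here is a minimal sketch of the quantity that Structure from Motion minimizes (an illustration in plain Python/NumPy, not OpenSfM's actual code): the reprojection error, i.e. the distance between where an estimated 3D point projects in an image and where it was actually observed.

import numpy as np

def project(point_3d, R, t, focal):
    # Map the point from world to camera coordinates, then apply
    # perspective division. R is a 3x3 rotation, t a translation.
    p_cam = R @ point_3d + t
    return focal * p_cam[:2] / p_cam[2]

def reprojection_error(point_3d, observation_2d, R, t, focal):
    # SfM adjusts the 3D points and the cameras together ("bundle
    # adjustment") to make this error small for every observation.
    return np.linalg.norm(project(point_3d, R, t, focal) - observation_2d)

# Toy check: a point in front of an axis-aligned camera that projects
# exactly onto its observation gives zero error.
R, t = np.eye(3), np.zeros(3)
point = np.array([0.1, -0.2, 5.0])
observation = np.array([0.02, -0.04])
print(reprojection_error(point, observation, R, t, focal=1.0))  # 0.0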

The result is the precise positions of the cameras in space and a sparse set of 3D points. Those points have served their purpose well, but they don't look that great when visualized—they are too sparse.

We have now added post-processing to OpenSfM that, given the camera positions, computes a denser 3D point cloud of the scene. Below you can see what it produces when fed with images from Le Mans by SOGEFI.

You can explore the images that were used to produce the point cloud here.

As you can see, this gives a pretty accurate reconstruction and positioning of the objects in the image—much like what you would expect from laser scanners, but computed from regular images.
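
The dense step works, roughly, by estimating a depth for the pixels of each image and then lifting those pixels into 3D, merging the per-image points into one cloud. The sketch below shows the lifting part under a simplifying assumption (an ideal pinhole camera); it is an illustration, not OpenSfM's exact internals.

import numpy as np

def backproject(u, v, depth, focal, cx, cy):
    # Invert the pinhole projection: a pixel (u, v) with an estimated
    # depth becomes one 3D point in camera coordinates. focal is the
    # focal length in pixels, (cx, cy) the principal point.
    x = (u - cx) / focal * depth
    y = (v - cy) / focal * depth
    return np.array([x, y, depth])

# One pixel of a 640x480 image with a depth estimate of 2.5 scene units.
print(backproject(u=400, v=300, depth=2.5, focal=500.0, cx=320.0, cy=240.0))
# -> [0.4 0.3 2.5]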

One can do many interesting things with such data: better visualization, positioning objects on a map, using it as a visual map to re-localize cameras (or cars with cameras), measuring, detecting changes, and more.

Now that the Graz AI team at Mapillary has created a segmentation tool that can label every pixel of an image, we explored a particular application.

With each of the images segmented into semantic categories (cars, road, sidewalk, building, etc.), we can color each 3D point with the color corresponding to the category of the pixels that generate the point. The result is a 3D point cloud that encodes both what and where objects are.
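
Here is a minimal sketch of that coloring step (the data layout is made up for illustration; it is not the actual OpenSfM code): each 3D point is observed in several images, so we look up the segmentation label under each observing pixel and take a majority vote.

from collections import Counter

# Hypothetical input: for each 3D point, the semantic label found at
# every pixel that observed it, read from the per-image segmentations.
observations = {
    "point_0": ["road", "road", "sidewalk"],
    "point_1": ["car", "car", "building"],
}

# One display color (RGB) per semantic category.
CATEGORY_COLORS = {
    "road": (128, 64, 128),
    "sidewalk": (244, 35, 232),
    "building": (70, 70, 70),
    "car": (0, 0, 142),
}

def color_points(observations):
    # Give each point the color of its most frequently observed label,
    # which smooths over occasional segmentation errors in single images.
    return {
        point_id: CATEGORY_COLORS[Counter(labels).most_common(1)[0][0]]
        for point_id, labels in observations.items()
    }

print(color_points(observations))
# {'point_0': (128, 64, 128), 'point_1': (0, 0, 142)}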

How to use it

The algorithm is implemented in OpenSfM. If you have that set up, you can run

bin/opensfm undistort path_to_data
bin/opensfm compute_depthmaps path_to_data
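
The merged point cloud ends up as a PLY file inside the dataset folder (the exact location varies between versions; treat the path below as an assumption and adjust it to your setup). A quick way to sanity-check the result is to read the point count from the PLY header:

# Count the points in the merged cloud by reading the ASCII PLY header.
# NOTE: the path is an assumption; adjust it to wherever your version
# of OpenSfM writes its output.
ply_path = "path_to_data/depthmaps/merged.ply"

with open(ply_path, "rb") as f:
    for raw_line in f:
        line = raw_line.decode("ascii", errors="replace").strip()
        if line.startswith("element vertex"):
            print("points:", line.split()[-1])
        if line == "end_header":
            break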

We have also added an exporter from OpenSfM to OpenMVS. OpenMVS is a library for multi-view stereo. Like OpenSfM, it can compute a dense point cloud, and it can also compute a textured mesh. You can export an OpenSfM reconstruction to OpenMVS by running

bin/opensfm export_openmvs path_to_data

Then you can run OpenMVS commands to produce a mesh.
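
The exact sequence depends on your OpenMVS build and on where the exporter writes the scene file (typically a scene.mvs in an openmvs subfolder; the paths below are assumptions), but it looks roughly like this:

DensifyPointCloud path_to_data/openmvs/scene.mvs
ReconstructMesh path_to_data/openmvs/scene_dense.mvs
TextureMesh path_to_data/openmvs/scene_dense_mesh.mvs

Each step writes a new .mvs scene next to its input, ending with a textured mesh that you can open in a standard 3D viewer.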

OpenSfM is also used internally by the OpenDroneMap project. OpenDroneMap creates 3D models and orthophotos from drone imagery. It includes all the steps of the pipeline so one can go from images to 3D models in one command. It uses OpenSfM to get the camera positions, and can now use it to compute the dense point clouds as well.

I hope you will give it a try, or otherwise check out some reconstructions here. Enjoy!

/Pau & the vision team
