
Fun with 3D reconstruction: Scanning and printing models from photos

One of the Media Lab’s favorite pastimes, apart from academics of course, is printing various ‘useful’ items on our MakerBot. Coffee is poured and a seat is taken while we wait in amusement for every line of plastic to form part of some Star Wars character. With the sky as the limit, or rather 6700 cm³, you can imagine all the plastic possibilities. In this post I am going to investigate printing a 3D model obtained from photos.

 

Step 1: Finding free 3D reconstruction tools.

 

After some snooping around, I found the Python Photogrammetry Toolbox. It works under Ubuntu and Windows, but is quite a pain to set up. Do not fear though: I made a Windows portable version, but keep in mind, as with most hacked-up and glued-together software, c:\do_not_put_spaces_in_folder_paths.

 

Step 2: Create a point cloud model.

 

The software includes some sample photos to be 3D reconstructed. By following the steps below I managed to get a fairly accurate point-cloud model from a few images. The output .ply files can be viewed and edited with MeshLab. I will quickly guide you through the steps of creating a point cloud from images.
Test images for 3D reconstruction.


 

Create a sparse model by running Bundler.
Run Bundler to create a sparse 3D model.


 

Use the sparse model to create a dense model by running PMVS.
Run PMVS to create a dense 3D model.


 

Finally, the result. Since the photos did not cover all angles of the model, there are still a few holes and places where the model is quite sparse.
A 3D model reconstructed from images.

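The .ply output is worth a closer look: in its ASCII variant it is just a text header, declaring the vertex count and per-vertex properties, followed by one line per point. If you want to poke at the raw points outside MeshLab, something like the sketch below will pull them into Python (this assumes the ASCII variant of the format, and `read_ply_vertices` is my own hypothetical helper, not part of any toolbox):

```python
def read_ply_vertices(path):
    """Read vertex coordinates from an ASCII .ply file.

    Only the x, y, z columns are kept; any extra per-vertex
    properties (normals, colours) on each line are ignored.
    """
    with open(path) as f:
        assert f.readline().strip() == "ply"
        n_vertices = 0
        for line in f:
            tokens = line.split()
            if tokens[:2] == ["element", "vertex"]:
                n_vertices = int(tokens[2])       # vertex count from header
            elif tokens[:1] == ["end_header"]:
                break                             # data section starts here
        vertices = []
        for _ in range(n_vertices):
            x, y, z = map(float, f.readline().split()[:3])
            vertices.append((x, y, z))
    return vertices
```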

 

By increasing the quality and quantity of the images, a denser model can be obtained, as shown below. For a practical example, I took some photos of an Android puppet that is lying around the lab. The resulting model is quite dense.

Images of the lab’s Android puppet.

 

The resulting 3D model of the Android puppet.


 

Step 3: Mesh the point cloud.

 

I followed http://arc-team-open-research.blogspot.co.at/2012/12/how-to-make-3d-scan-with-pictures-and.html, but this step is more difficult than anticipated. It seems that there is no easy way to delete raw points within MeshLab without generating a surface first, and in turn the generated surface cannot cope with the vast number of erroneous points in the reconstructed model. After a few hours of struggling I gave up on meshing my point cloud. I will come back to this step in the next few months, after figuring out how to easily manipulate the raw 3D points.
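One workaround I may try is culling the obviously erroneous points programmatically before meshing. A crude sketch with NumPy, assuming the cloud has been loaded as an N×3 array: discard every point whose distance to the centroid is more than a few standard deviations above the mean. This will not catch every stray point, but it knocks out the far-flung junk:

```python
import numpy as np

def remove_outliers(points, k=2.0):
    """Drop points far from the cloud's centroid.

    points : (N, 3) array of XYZ coordinates.
    k      : points whose centroid distance exceeds the mean
             distance by more than k standard deviations are
             discarded.
    Returns the filtered (M, 3) array, M <= N.
    """
    centroid = points.mean(axis=0)
    dist = np.linalg.norm(points - centroid, axis=1)
    keep = dist <= dist.mean() + k * dist.std()
    return points[keep]
```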

 

Step 4: Print the model.

 

As I did not reach this part of the experiment, I can only speculate. The approach I would most likely take is to export the 3D model to a format readable by Blender, correct the scale and dimensions of the model there, convert the result to a format readable by the MakerBot, and commence printing.

 

In Conclusion

 

Although I did not reach my goal of printing a 3D model from images, it seems entirely possible to create such a model with freely available software. The main drawbacks are the painful software setup and MeshLab’s inability to select 3D points that are not part of a mesh. Judging by the density of the point-cloud model, it seems that with a bit more time and tweaking such a model can be quite accurate.
Please try out the above software and give me some feedback on how it works for you. After I tie up all the loose ends, I will try to post some photos of how I reconstructed and printed a bust of myself.