The code is now available via svn. The URL is
. Be warned! The code is awful… I mean really, really pre-pre-alpha awful. Compile at your own risk, use at your own risk, and feel free to contribute if you can make any sort of sense out of the code (give me a shout first). The code is developed under NetBeans, and is therefore structured as a NetBeans project. There is probably a much more elegant way of setting up the svn so that it doesn't matter what IDE you use, but I don't know it… so unfortunately it's not set up in any such way.
In order to generate the models you'll need two sequences of 100 images taken at 3.6-degree intervals around the object in question (at the moment it's fixed at 100 images, which sucks, I know; I'll be changing this very shortly): one for the texture and one for the mesh points. They'll probably look very much like this:
The Input Images
Start up the splinesweep program, click "Load Images" and select all 100 laser spline images, then click "Load Textures" and select all 100 full-colour images. Hit "Generate Model", wait for a bit, and a .obj, a .mtl and a .png file should be created in the folder that the splinesweep program is located in.
There is one pretty major limitation at the moment that I'll probably get round to fixing within a week: currently the program assumes the centre of rotation of the platform to be at a point roughly in the middle of the image, and unfortunately this can only be changed within the code.
A few notes on dependencies:
The program requires Qt4 and OpenCV 2.0 (OpenCV 1.0 may work), although the program expects to find the OpenCV libraries in /usr/local/lib as opposed to /usr/lib. As soon as Ubuntu 10.04 is released I'll build a version that works nicely with things you can apt-get.
How the code works:
First a Sobel filter is run on all the spline images to produce edge images, then a threshold is applied to each to produce binary images. To produce the 3D points for the mesh, a series of points spaced up the height of the image is taken, and each is moved along the width of the image until a maximum pixel value is found or the edge of the image is reached; if the edge of the image is reached the point is discarded. The 2D points from each image are then placed in a rotated fashion around a central 3D axis to produce the 3D point cloud.
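The actual implementation is C++ with OpenCV, but the row-walk and rotation steps above can be sketched in plain Python. The function names, the threshold value, and the centre-of-rotation coordinate are all illustrative choices of mine, not the real code:

```python
import math

def profile_from_edge_image(edge, threshold=128):
    """Walk each row from the left until a bright edge pixel is
    found; rows that reach the image edge without one are discarded."""
    points = []  # (x, y) pixel coordinates of the laser line
    for y, row in enumerate(edge):
        for x, value in enumerate(row):
            if value >= threshold:
                points.append((x, y))
                break  # keep only the first hit per row
    return points

def rotate_profile(points, angle_deg, centre_x):
    """Place one 2D profile around the vertical rotation axis:
    the radius is the horizontal distance from the assumed centre
    of rotation, and the image y becomes the 3D height."""
    a = math.radians(angle_deg)
    cloud = []
    for x, y in points:
        r = x - centre_x
        cloud.append((r * math.cos(a), y, r * math.sin(a)))
    return cloud
```

With 100 images, the i-th profile would be rotated by i × 3.6 degrees before all the profiles are merged into one cloud.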
Holes in the point cloud are then filled in and a surface mesh is calculated. The RGB value at each corresponding 2D point in the colour images is looked up and placed into an image to create the texture, and the mappings between the 3D points and points in the texture are calculated.
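One simple way to do the texture step (again a Python sketch under my own assumptions, not the actual code: one texture column per input image, and at least two images and two rows of points) is to copy each profile point's colour into the texture and record where it landed as a UV coordinate:

```python
def build_texture(profiles, colour_images):
    """Pack the colour of every profile point into a texture image,
    one column per input image, and record a (u, v) coordinate per
    point in the same order the points were visited."""
    height = max(y for pts in profiles for _, y in pts) + 1
    width = len(profiles)  # assumes at least 2 images and 2 rows
    texture = [[(0, 0, 0)] * width for _ in range(height)]
    uvs = []
    for i, pts in enumerate(profiles):
        for x, y in pts:
            texture[y][i] = colour_images[i][y][x]
            # v is flipped: image row 0 is the top, UV v=1 is the top
            uvs.append((i / (width - 1), 1 - y / (height - 1)))
    return texture, uvs
```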
This is all then written out to three files: a .obj file which stores the vertices, faces, and texture UV coordinates; a .mtl file which lists and describes the properties of the materials used to texture the model; and a .png file which is the actual texture.
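The .obj and .mtl formats are plain text, so the export boils down to string formatting. A minimal sketch (file names and the material name are placeholders, not what splinesweep actually writes):

```python
def obj_text(vertices, uvs, faces, material="scanned"):
    """Build a minimal Wavefront .obj: vertex positions (v),
    texture coordinates (vt), and faces indexing both (1-based)."""
    lines = ["mtllib scan.mtl", "usemtl " + material]
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")
    for u, v in uvs:
        lines.append(f"vt {u} {v}")
    for face in faces:  # each face: tuple of 1-based vertex indices
        lines.append("f " + " ".join(f"{i}/{i}" for i in face))
    return "\n".join(lines) + "\n"

def mtl_text(material="scanned", texture="scan.png"):
    """Minimal .mtl binding the PNG texture to the material."""
    return f"newmtl {material}\nmap_Kd {texture}\n"
```

Here each vertex shares its index with its UV coordinate (`f 1/1 2/2 3/3`), which is the simplest layout when every 3D point has exactly one spot in the texture.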
There’s probably a bunch of little things going on that I can’t remember, but this is as near as damn it to what’s actually going on.