Tag Archives: image processing

Open3DScanner – ***Insert pun about cutting (final cut? making the cut? something mustard related?)***

Spent the weekend in St Neots visiting my friend Pete, where we drank heavily and sellotaped mobile phones to kites (a blog post on that later). Took the day off on Monday and we both travelled down to London and did some laser cutting on a machine rented at the Blueprint model shop. Here are some photos of the cut parts:

Unassembled parts

Push Fit Assembled

Ps3 Eye Mounting

As kind of expected, a bunch of stuff is slightly wrong with the parts, but nothing that can’t be easily fixed.

It’s bloody awesome seeing something I’ve spent so long designing on a computer in actual real life. Alas, there’s plenty of work left to do on the software and electronics side of things, but it’s nice to have reached this milestone.


3D Scanner Software – First Results

Here’s one of the first results of the new and improved 3D scanner software:

There are still plenty of bugs to iron out in the software, but when that’s all done I’ll make an “official” first release of the software, which will be available here at the Google Code repo.
I’m going to try and get a number of different builds available as well (64-bit GNU/Linux, 32-bit GNU/Linux, and a Windows release).

Also here’s a quick look at the new portable 3D scanner being made:

3D Scanner – Software Overhaul

So I’ve been having a tinker with the 3D scanner again and I’ve got it working with my Canon SX200 + a custom CHDK remote trigger script. This means the actual scan time is now about 3/4 of an hour and I only get about 2 scans per camera charge, but! (and this is a big but) I get glorious 12-megapixel resolution and awesome image quality.

This has meant I’ve had to have a rather massive overhaul of the model generating software to reduce memory usage, add more controls to the GUI, remove the OpenCV dependency, and generally make it more usable. It’s not finished yet, but here’s a look at an early version of the new GUI:

I still need to beautify the layout a bit and write a considerable chunk of code, but it should hopefully be worth it.

As always, the code is open source and available from the repository, but be warned: the current committed version is massively broken, so if you want a working version check out revision 2 (yeah, I know it’s probably terrible practice to break the main repo’s code, but what the hell, it’s not like anyone besides me is actually using this stuff).

More Timeslicing + Some Old Stuff

Got a new Canon PowerShot SX200 IS (which was promptly CHDK’d) which can do some surprisingly good 720p video. So I thought I’d have a bash at running some footage through the timeslicing code I’ve written. Unfortunately the re-encoding with ffmpeg wasn’t so good, so the quality is a bit out of whack, but here it is anyway with the wonderful BlackCloud1711 modelling for the footage:

Also, here are a couple of pictures of an old GCSE (I think) project. It’s a parallel-port-controlled pen drum plotter. Most of the mechanical hardware was built by my dad with a couple of design suggestions thrown in from me; all the electronics and software were done by me (before the days of fancy-pants Arduinos). It’s controlled through the parallel port with all the code written in QBasic on an IBM100 (100 MHz of raaaawwww power; it was also the first computer that was mine, and it was only about 10 years out of date).

One of these days I’ll get round to adding an Arduino with a G-code interpreter to it so I can get some delicious plotting action on the go.

Time Slicing Update

Been spending the past few evenings playing around with and rewriting the time slicing code I wrote a while back. It now pushes and pops the camera frames into a fixed-size buffer and allows you to set the rotation of the “slice” through the buffer. The effect this achieves is quite difficult to describe, so here is a video to demonstrate it (there’s also a rough sketch of the buffer idea after the section list below).

Here’s a brief description of each of the sections.

  • Normal video recorded at 640×480 with a cheap webcam in my front yard. Splitting the frames out using ffmpeg for processing greatly reduced the quality of the subsequent sections (I probably need to play around with some switches).
  • Time slice set at 45 degrees, effectively making columns of pixels to the right of the image be further back in time than the columns on the left. The video appears to “grow” from the centre at the start of the section as the frame buffer fills up.
  • The time slice slowly rotates about the centre axis, changing the view from one instant in time at 0 degrees to the “history” of the centre column of pixels at 180 degrees.
  • Same as the previous section but with the slice rotating faster. Note how objects at the edge appear to be sped up while objects towards the centre slow down; this is because the rotation of the slice effectively speeds up the playback of the columns of pixels at the edge of the frames.
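For the curious, the buffer-and-slice idea boils down to something like the sketch below. This is just an illustrative sketch written against OpenCV, not my actual time-slicing code; the frame size, buffer length and the way the angle is mapped to a per-column delay are all simplified guesses.

```cpp
// Rough sketch only – not the real time-slicing code. Assumes an OpenCV
// webcam capture; buffer size, angle handling and clamping are simplified.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <deque>

int main()
{
    cv::VideoCapture cap(0);                     // cheap 640x480 webcam
    const int width = 640, height = 480;
    const int bufferSize = width;                // enough history for one frame per column
    std::deque<cv::Mat> buffer;                  // index 0 = newest frame

    const double sliceAngleDeg = 45.0;           // rotation of the "slice" about the centre column

    cv::Mat frame, out(height, width, CV_8UC3, cv::Scalar::all(0));
    while (cap.read(frame))
    {
        cv::resize(frame, frame, cv::Size(width, height));

        // Push the newest frame and pop the oldest once the buffer is full.
        buffer.push_front(frame.clone());
        if ((int)buffer.size() > bufferSize)
            buffer.pop_back();

        // Each output column samples a frame further back in time the further
        // the column sits to the right of the centre column. Delays that run
        // past either end of the buffer are clamped, which is what gives the
        // "growing" look while the buffer is still filling.
        const double slope = std::tan(sliceAngleDeg * CV_PI / 180.0);
        for (int x = 0; x < width; ++x)
        {
            int delay = (int)std::lround((x - width / 2) * slope) + bufferSize / 2;
            delay = std::max(0, std::min(delay, (int)buffer.size() - 1));
            buffer[delay].col(x).copyTo(out.col(x));
        }

        cv::imshow("time slice", out);
        if (cv::waitKey(1) == 27)                // Esc to quit
            break;
    }
    return 0;
}
```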

Why am I wandering aimlessly through an overgrown yard? Well, the idea was that the movement of the plants would look cool, but the webcam I used to capture the video was too shitty to pick up much detail. One of these days I’ll save up for a decent video camera.

Currently the slice is only rotatable about one axis; I’ll hopefully change this in later versions to allow full 3D positioning, and will probably get round to releasing the code at some point as well.

3D Scanner – Some Results

Here are a few scan results:

A Selection of 3D Models Displayed in Meshlab

From top left clockwise: box of Swan filters, bottom half of a glue bottle, roll of solder, nodding monkey, candle in holder on coaster, bottom half of a Frijj bottle.

Slight disclaimer: the models are shown from their “best” angles. Some are deformed, due to not yet being able to set the centre of rotation in the scanning software; some have slightly dark meshes in places, due to the LED lights not being good enough; and all of them have had a Laplacian filter applied to smooth out the meshes a bit. Also, most of the objects were chosen because they produce good models given the scanner’s limitations. So there’s still plenty of room for improvement.

3D Scanner – C++ Code Available

The code is now available via SVN. The URL is https://code.google.com/p/splinesweep/. Be warned! The code is awful… I mean really, really pre-pre-alpha awful. Compile at your own risk, use at your own risk, and feel free to contribute if you can make any sort of sense out of the code (give me a shout first). The code is developed under NetBeans, and is therefore structured as a NetBeans project. There is probably a much more elegant way of setting up the SVN so that it doesn’t matter what IDE you use, but I don’t know it… so unfortunately it’s not set up in any such way.

In order to generate the models you’ll need two sequences of 100 images taken at 3.6-degree intervals around the object in question (at the moment it’s fixed at 100, which sucks, I know; I’ll be changing this very shortly): one for the texture and one for the mesh points. They’ll probably look very much like this:

The Input Images

Start up the splinesweep program, click “Load Images” and select all 100 laser spline images, then click “Load Textures” and select all 100 full colour images. Hit “Generate Model”, wait for a bit, and a .obj, a .mtl and a .png file should be created in the folder that the splinesweep program is located in.

SplineScan Screenshot

There is one pretty major limitation at the moment that I’ll probably get round to fixing within a week: currently the program assumes the centre of rotation of the platform to be at a point that’s roughly at the middle of the image, and unfortunately this can only be changed within the code.

A few notes on dependencies:

The program requires Qt 4 and OpenCV 2.0 (though OpenCV 1.0 may work), and it expects to find the OpenCV libraries in /usr/local/lib as opposed to /usr/lib. As soon as Ubuntu 10.04 is released I’ll build a version that works nicely with things you can apt-get.

How the code works:

First a Sobel filter is run on all the spline images to produce edge images, then a threshold is taken of each image to produce a binary image. To produce the 3D points for the mesh, a series of points up the height of the image are taken and moved along the length of the image until a maximum pixel value is found or the edge of the image is reached; if the edge of the image is reached, the point is discarded. The 2D points for each image are then placed in a rotated fashion around a central 3D axis to produce the 3D point cloud.
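In rough code terms the per-image step looks something like the sketch below. This is illustrative only, not the actual splinesweep code; the threshold value, the axis column and the coordinate convention are all just placeholders.

```cpp
// Illustrative sketch of the per-image step – not the real splinesweep code.
// The threshold value, axis position and coordinate convention are guesses.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Extract the laser profile from one spline image and rotate it into 3D.
// angleDeg is the turntable angle for this image (3.6 degree steps) and
// axisX is the assumed column of the rotation axis in the image.
std::vector<cv::Point3f> profileTo3D(const cv::Mat& splineImage,
                                     double angleDeg, int axisX)
{
    cv::Mat grey, edges, binary;
    cv::cvtColor(splineImage, grey, cv::COLOR_BGR2GRAY);
    cv::Sobel(grey, edges, CV_8U, 1, 0);                   // edge image
    cv::threshold(edges, binary, 128, 255, cv::THRESH_BINARY);

    const double a = angleDeg * CV_PI / 180.0;
    std::vector<cv::Point3f> cloud;

    for (int y = 0; y < binary.rows; ++y)
    {
        // Walk along the row until a lit pixel is found; rows where the scan
        // runs off the edge of the image are discarded.
        int x = 0;
        while (x < binary.cols && binary.at<uchar>(y, x) == 0)
            ++x;
        if (x == binary.cols)
            continue;

        // Rotate the 2D point about the central vertical axis.
        const float radius = float(x - axisX);
        cloud.push_back(cv::Point3f(radius * float(std::cos(a)),
                                    float(binary.rows - y),    // row -> height
                                    radius * float(std::sin(a))));
    }
    return cloud;
}
```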

Holes in the point cloud are then filled in and a surface mesh is calculated. The RGB values in the colour images at every corresponding 2D point are found and placed into an image to create the texture, and the mappings between the 3D points and points in the texture are calculated.
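The texture step is roughly this kind of thing (again only a sketch with invented names; laying the texture out as one column per input image is just an illustrative choice, not necessarily what splinesweep does):

```cpp
// Sketch of the texture step – invented names, not the real code. It assumes
// the texture Mat has at least points.size() rows and one column per image.
#include <opencv2/opencv.hpp>
#include <vector>

struct UV { float u, v; };

// points are the kept (x, y) laser positions for one spline image and
// column is which column of the texture this image's colours are written to.
std::vector<UV> buildTextureColumn(const cv::Mat& colourImage,
                                   const std::vector<cv::Point>& points,
                                   cv::Mat& texture, int column)
{
    std::vector<UV> uvs;
    for (std::size_t i = 0; i < points.size(); ++i)
    {
        // Copy the colour under the laser point into the texture image...
        texture.at<cv::Vec3b>((int)i, column) =
            colourImage.at<cv::Vec3b>(points[i]);

        // ...and record where in the texture it went, as a 0..1 UV coordinate
        // that the corresponding 3D vertex will reference.
        uvs.push_back({ float(column) / texture.cols,
                        1.0f - float(i) / texture.rows });
    }
    return uvs;
}
```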

This is all then written out to three files: a .obj file which stores the vertices, faces, and texture UV coordinates; a .mtl file which lists and describes the properties of the materials used to texture the model; and a .png file which is the actual texture.
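For reference, a bare-bones writer for those three files looks something like this (the mesh structs and the “textured” material name are just examples; the field layout follows the standard OBJ/MTL format rather than splinesweep’s exact output):

```cpp
// Bare-bones OBJ/MTL writer sketch – example structs and names only.
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };
struct UV     { float u, v; };
struct Face   { int v[3]; };   // 1-based indices; vertex i uses UV i

void writeModel(const std::vector<Vertex>& verts,
                const std::vector<UV>& uvs,
                const std::vector<Face>& faces)
{
    // model.obj: vertices, UV coordinates and the faces that index them.
    std::FILE* obj = std::fopen("model.obj", "w");
    std::fprintf(obj, "mtllib model.mtl\n");
    for (const Vertex& p : verts)
        std::fprintf(obj, "v %f %f %f\n", p.x, p.y, p.z);
    for (const UV& t : uvs)
        std::fprintf(obj, "vt %f %f\n", t.u, t.v);
    std::fprintf(obj, "usemtl textured\n");
    for (const Face& f : faces)            // "vertex/uv" pairs share an index here
        std::fprintf(obj, "f %d/%d %d/%d %d/%d\n",
                     f.v[0], f.v[0], f.v[1], f.v[1], f.v[2], f.v[2]);
    std::fclose(obj);

    // model.mtl: a single material whose diffuse map is the generated texture.
    std::FILE* mtl = std::fopen("model.mtl", "w");
    std::fprintf(mtl, "newmtl textured\nKd 1.0 1.0 1.0\nmap_Kd model.png\n");
    std::fclose(mtl);
}
```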

There are probably a bunch of little things going on that I can’t remember, but this is as near as dammit to what’s actually going on.