
Python – OpenCV IplImage To PyQt QImage

This is a quick post which will hopefully save someone some time. I spent far too long trying to figure this out.
If you’re using Qt and OpenCV with Python and want to show an OpenCV IplImage within a Qt widget, here’s a quick hack to convert the IplImage to a QImage.


import cv
from PyQt4 import QtGui


class IplQImage(QtGui.QImage):
    """A class for converting IplImages to QImages."""

    def __init__(self, iplimage):
        # Rough-n-ready but it works dammit
        # Build a single-channel alpha plane filled with 255 (fully opaque).
        alpha = cv.CreateMat(iplimage.height, iplimage.width, cv.CV_8UC1)
        cv.Rectangle(alpha, (0, 0), (iplimage.width, iplimage.height),
                     cv.ScalarAll(255), -1)
        # Merge the three BGR channels plus the alpha plane into a
        # four-channel image to match QImage's 32-bit pixel packing.
        rgba = cv.CreateMat(iplimage.height, iplimage.width, cv.CV_8UC4)
        cv.MixChannels([iplimage, alpha], [rgba], [
            (0, 0),  # bgr[0] -> rgba[0]
            (1, 1),  # bgr[1] -> rgba[1]
            (2, 2),  # bgr[2] -> rgba[2]
            (3, 3),  # alpha[0] -> rgba[3]
        ])
        # Keep a reference to the image data so it isn't garbage collected.
        self.__imagedata = rgba.tostring()
        super(IplQImage, self).__init__(self.__imagedata, iplimage.width,
                                        iplimage.height,
                                        QtGui.QImage.Format_RGB32)

All in all it’s fairly straightforward; an example use case looks something like the following:


# Create a 3-channel BGR IplImage (OpenCV's native channel order;
# could be from a webcam etc.)
iplimage = cv.CreateImage((640, 480), cv.IPL_DEPTH_8U, 3)
# Turn it into a QImage
qimage = IplQImage(iplimage)

It works by subclassing QImage and overloading the constructor with one that accepts an IplImage. An extra alpha channel is added to the image (to make it compatible with QImage’s pixel packing), and finally the __init__ method of the superclass (QImage) is called with the data from the IplImage passed in as a string.

The important part is:


self.__imagedata = rgba.tostring()

This keeps a reference to the image data so it doesn’t go out of scope when __init__ returns. (The QImage constructor that accepts image data doesn’t keep its own reference to the data, so you have to make sure it isn’t lost. At least, I think that’s right.)
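
To round the example off, here’s a minimal sketch of getting the converted image onto an actual widget (assuming PyQt4 and a webcam on the first capture device; the label-based setup is just for illustration):

import sys
import cv
from PyQt4 import QtGui

# Minimal sketch: grab a frame from the first webcam, convert it with the
# IplQImage class above, and show it in a QLabel. Assumes PyQt4 and a
# camera on device 0.
app = QtGui.QApplication(sys.argv)

capture = cv.CaptureFromCAM(0)
frame = cv.QueryFrame(capture)  # returns a BGR IplImage

label = QtGui.QLabel()
label.setPixmap(QtGui.QPixmap.fromImage(IplQImage(frame)))
label.show()

sys.exit(app.exec_())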


Kinect & Qt

Impulse bought a Kinect and decided to write a Qt wrapper for it. A fork of the libfreenect git repo with the wrapper included is available here at my spangly new GitHub repository (not quite figured this whole git malarkey out, so it’s probably gonna get broken at some point).
Here’s a screenshot of the output from the RGB and depth cameras (note: the depth image has had its dynamic range reduced from 11 to 8 bits).
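
For reference, squashing the 11-bit depth values down to 8 bits just means throwing away the bottom three bits. Something like this works, assuming the libfreenect Python bindings and their sync_get_depth helper are available:

import numpy as np
import freenect  # libfreenect Python bindings (assumed installed)

# Grab one depth frame; values are 11-bit, stored in a uint16 array.
depth, _timestamp = freenect.sync_get_depth()

# Drop the bottom three bits: 11-bit range -> 8-bit range for display.
depth8 = (depth >> 3).astype(np.uint8)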

3D Scanner – Software Overhaul

So I’ve been having a tinker with the 3D scanner again and I’ve got it working with my Canon SX200 plus a custom CHDK remote trigger script. This means the actual scan time is now about 3/4 of an hour and I only get about 2 scans per camera charge, but! (and this is a big but) I get glorious 12-megapixel resolution and awesome image quality.

This has meant I’ve had to have a rather massive overhaul of the model-generating software to reduce memory usage, add more controls to the GUI, remove the OpenCV dependency, and generally make it more usable. It’s not finished yet, but here’s a look at an early version of the new GUI:

Still need to beautify the layout a bit and write a considerable chunk of code, but it should hopefully be worth it.

As always the code is open source and available from the repository, but be warned: the current committed version is massively broken, so if you want a working version check out revision 2. (Yeah, I know it’s probably terrible practice to break the main repo’s code, but what the hell, it’s not like anyone besides me is actually using this stuff.)

Timeslice Code Available

Set up a Google Code SVN repo for the time slicing code for anyone interested:
https://code.google.com/p/videotimeslice/

Time Slicing Update

Been spending the past few evenings playing around with and rewriting the time slicing code I wrote a while back. It now pushes and pops the camera frames through a fixed-size buffer and lets you set the rotation of the “slice” through the buffer. The effect this achieves is quite difficult to describe, so here is a video to demonstrate it (and a rough code sketch after the section descriptions below).

Here’s a brief description of each of the sections.

  • Normal video recorded @ 640×480 with a cheap webcam in my front yard. Splitting the frames out using ffmpeg for processing greatly reduced the quality of the subsequent sections (I probably need to play around with some switches).
  • Time slice set at 45 degrees, effectively making columns of pixels to the right of the image sit further back in time than the columns on the left. The video appears to “grow” from the centre at the start of the section as the frame buffer fills up.
  • The time slice slowly rotates about the centre axis, changing the view from one instant in time at 0 degrees to the “history” of the centre column of pixels at 180 degrees.
  • Same as the previous section but with the slice rotating faster. Note how objects at the edge appear sped up while objects towards the centre slow down; this is due to the rotation of the slice effectively speeding up the playback of columns of pixels at the edge of the frames.
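
As promised, here’s a rough sketch of the core idea for the 45-degree case (my own reconstruction in Python/NumPy, not the actual repo code; the names and buffer size are made up):

import numpy as np
from collections import deque

# Rough reconstruction of the 45-degree time slice (not the actual repo
# code). Frames are pushed into a fixed-size buffer; each output column
# is sampled from a progressively older frame as you move right, so the
# right-hand edge of the image lags furthest behind in time.

BUFFER_SIZE = 64
frames = deque(maxlen=BUFFER_SIZE)  # oldest frame is popped automatically


def diagonal_slice(frames):
    """Assemble one output frame from a buffer of HxWx3 uint8 frames."""
    newest = frames[-1]
    height, width, _ = newest.shape
    out = np.empty_like(newest)
    for x in range(width):
        # Column x is taken from a frame (x / width) of the way back
        # through the buffer; rotating the slice amounts to changing
        # this column-to-age mapping.
        age = int(float(x) / (width - 1) * (len(frames) - 1))
        out[:, x] = frames[-1 - age][:, x]
    return out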

Why am I wandering aimlessly through an overgrown yard? Well, the idea was that the movement of the plants would look cool, but the webcam I used to capture the video was too shitty to pick up much detail. One of these days I’ll save up for a decent video camera.

Currently the slice is only rotatable about one axis; I’ll hopefully change this in later versions to allow full 3D positioning, and will probably get round to releasing the code at some point as well.