
Python – OpenCV IplImage To PyQt QImage

This is a quick post which will hopefully save someone some time; I spent far too long trying to figure this out.
If you're using Qt and OpenCV with Python and want to show an OpenCV IplImage inside a Qt widget, here's a quick hack to convert the IplImage to a QImage.


class IplQImage(QtGui.QImage):
    """A class for converting IplImages to QImages."""

    def __init__(self, iplimage):
        # Rough-n-ready but it works dammit.
        # Build a fully opaque single-channel alpha plane...
        alpha = cv.CreateMat(iplimage.height, iplimage.width, cv.CV_8UC1)
        cv.Rectangle(alpha, (0, 0), (iplimage.width, iplimage.height),
                     cv.ScalarAll(255), -1)
        # ...and merge it with the image's three channels into a BGRA mat,
        # which matches QImage's Format_RGB32 packing on little-endian machines.
        rgba = cv.CreateMat(iplimage.height, iplimage.width, cv.CV_8UC4)
        cv.MixChannels([iplimage, alpha], [rgba], [
            (0, 0),  # bgr[0] -> bgra[0]
            (1, 1),  # bgr[1] -> bgra[1]
            (2, 2),  # bgr[2] -> bgra[2]
            (3, 3),  # alpha[0] -> bgra[3]
        ])
        # Keep a reference to the pixel data so it isn't garbage collected
        self.__imagedata = rgba.tostring()
        super(IplQImage, self).__init__(self.__imagedata, iplimage.width,
                                        iplimage.height, QtGui.QImage.Format_RGB32)

All in all it's fairly straightforward. An example use case is something like the following:


# Create a 3-channel BGR IplImage (could be from a webcam etc.)
iplimage = cv.CreateImage((640, 480), cv.IPL_DEPTH_8U, 3)
# Turn it into a QImage
qimage = IplQImage(iplimage)

It works by subclassing QImage and overloading the constructor with one that accepts an IplImage. The image then has an extra alpha channel added (to make it compatible with QImage's pixel packing), and finally the __init__ method of the superclass (QImage) is called with the data from the IplImage passed in as a string.

The important part is:


self.__imagedata = rgba.tostring()

This keeps a reference to the image data so it doesn't go out of scope when __init__ returns. (The QImage constructor that accepts image data doesn't keep its own reference to the data, so you have to make sure it isn't lost. At least, I think that's right.)
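
To actually get the image on screen (the point of the exercise), the QImage can be wrapped in a QPixmap and handed to a QLabel. Here's a minimal sketch assuming PyQt4; the frame.jpg file name is purely illustrative:

import sys
import cv
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
# Any 3-channel BGR IplImage will do; loading from a file keeps the example simple
iplimage = cv.LoadImage("frame.jpg")
label = QtGui.QLabel()
label.setPixmap(QtGui.QPixmap.fromImage(IplQImage(iplimage)))
label.show()
sys.exit(app.exec_())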

More Timeslicing + Some Old Stuff

Got a new Canon PowerShot SX200 IS (which was promptly CHDK'd) that can shoot some surprisingly good 720p video. So I thought I'd have a bash at running some footage through the timeslicing code I've written. Unfortunately the re-encoding with ffmpeg wasn't so good, so the quality is a bit out of whack, but here it is anyway, with the wonderful BlackCloud1711 modelling for the footage:

Also, here's a couple of pictures of an old GCSE (I think) project. It's a parallel-port-controlled pen drum plotter. Most of the mechanical hardware was built by my dad, with a couple of design suggestions thrown in from me; all the electronics and software were done by me (before the days of fancy-pants Arduinos). It's controlled through the parallel port, with all the code written in QBasic on an IBM100 (100MHz of raaaawwww power; also the first computer that was mine, and it was only about 10 years out of date).

One of these days I'll get round to adding an Arduino with a G-code interpreter to it so I can get some delicious plotting action on the go.

Time Slicing Update

Been spending the past few evenings playing around with and rewriting the time slicing code I wrote a while back. It now pushes and pops the camera frames through a fixed-size buffer and lets you set the rotation of the "slice" through the buffer. The effect this achieves is quite difficult to describe, so here is a video to demonstrate it.

Here’s a brief description of each of the sections.

  • Normal video recorded @ 640×480 with a cheap webcam in my front yard. Splitting the frames out using ffmpeg for processing greatly reduced the quality of the subsequent sections (probably need to play around with some switches).
  • Time slice set at 45 degrees, effectively making columns of pixels towards the right of the image sit further back in time than the columns on the left. The video appears to "grow" from the centre at the start of the section as the frame buffer fills up.
  • The time slice slowly rotates about the centre axis, changing the view from one instant in time at 0 degrees to the "history" of the centre column of pixels at 180 degrees.
  • Same as the previous section but with the slice rotating faster. Note how objects at the edge appear to be sped up while objects towards the centre slow down; this is due to the rotation of the slice effectively speeding up the playback of columns of pixels at the edge of the frames (see the sketch below).
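
To make the buffer-and-slice idea concrete, here's a minimal numpy sketch of the fixed 45-degree case (this isn't the actual code; the buffer depth and names are illustrative):

import collections
import numpy as np

BUFFER_LEN = 64  # hypothetical buffer depth (number of frames kept)
frames = collections.deque(maxlen=BUFFER_LEN)  # new frames push the oldest out

def slice_45(frames):
    """Build an output frame whose right-hand columns come from
    progressively older frames in the buffer."""
    stack = list(frames)  # stack[-1] is the newest frame
    height, width, _ = stack[0].shape
    out = np.zeros_like(stack[0])
    for x in range(width):
        # Map column x to a frame age: further right = further back in time
        age = min(x * len(stack) // width, len(stack) - 1)
        out[:, x] = stack[-1 - age][:, x]
    return out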

Why am I wandering aimlessly through an overgrown yard? Well, the idea was that the movement of the plants would look cool, but the webcam I used to capture the video was too shitty to pick up much detail. One of these days I'll save up for a decent video camera.

Currently the slice is only rotatable in one axis; I'll hopefully change this in later versions to allow full 3D positioning, and will probably get round to releasing the code at some point as well.

Time Slicing part 2

So here is a new time-sliced video and the OpenCV code used to create it. The video is under a Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK license and the code is GPL'ed.

In order to get the code to work you will need the OpenCV image processing libraries as well as the Boost libraries. The code was built and tested on Ubuntu 9.10 using OpenCV 2.0 (not available in the repos, so you'll have to build it yourself; earlier versions of OpenCV should probably work).

In order to use the program you will have to split out the individual frames of the video using this command:

ffmpeg -i filename.ogv -r 30 -f image2 %03d.jpg

Then run timeslice in the directory thusly:

./timeslice *.jpg

This will produce the individual time-sliced frames (overwriting some of the original frames). Now delete the remainder of the original frames (if there are any) and run the following command to create a .avi:

ffmpeg -sameq -r 30 -b 7200 -i %03d.jpg test.avi

/*
* File:   main.cpp
* Author: matt
*
* Created on 03 March 2010, 15:51
*/

/*This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.*/


#include <stdlib.h>
#include <cv.h>
#include <cvaux.h>
#include <highgui.h>
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>
#include <boost/intrusive/list.hpp> // not actually used below

using namespace std;

int main(int argc, char** argv) {

    vector<IplImage*> image_vector;

    // Load every frame named on the command line
    cerr << "Loading Images" << endl;
    for (int loop = 1; loop < argc; loop++) {
        cerr << "    Loading: " << argv[loop] << endl;
        IplImage *image = cvLoadImage(argv[loop]);
        if (image == NULL) {
            cerr << "    Failed to load: " << argv[loop] << endl;
            continue;
        }
        image_vector.push_back(image);
    }
    cerr << "Finished Loading" << endl;

    // Process frames: output frame 'depth' is the y-t slice taken at column
    // 'depth' of every input frame, i.e. the image stack rotated about y
    cerr << "Rotating images about y" << endl;
    for (int depth = 0; depth < image_vector[0]->width; depth++) {
        IplImage *result = cvCreateImage(cvSize((int) image_vector.size(),
                image_vector.back()->height), IPL_DEPTH_8U, 3);

        for (size_t x = 0; x < image_vector.size(); x++) {
            for (int y = 0; y < image_vector[x]->height; y++) {
                CvScalar pix = cvGet2D(image_vector[x], y, depth);
                cvSet2D(result, y, (int) x, pix);
            }
        }
        // Save the slice under a zero-padded frame number, e.g. 000.jpg
        stringstream out;
        out << setw(3) << setfill('0') << depth;
        string s = out.str() + ".jpg";
        cvSaveImage(s.c_str(), result);
        cvReleaseImage(&result);
    }
    cerr << "Done rotating images about y" << endl;

    return (EXIT_SUCCESS);
}

Polygon Image Compression

This is a pale imitation of an awesome algorithm I saw somewhere and now can't find any more.
What it does is fill a number of images with 200 randomly sized and coloured three-sided polygons. It then compares them all with the target image and selects the 10 best matches. These are then mutated slightly and randomly combined together, and the whole thing repeats for a number of generations. The final image is then an approximation of the input image, but made up of three-sided polygons.

Well, that was the idea, but I've run out of steam on this one. So far I've implemented most of the evolutionary algorithm, apart from mutating/crossing the colours each generation.

The plan was to extend this to encoding video and speed it up a bit (it took about an hour to encode a single image), and also to get it to approximate the input image better.
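
For the curious, the generate/score/select loop described above looks roughly like this. It's a minimal sketch using PIL and numpy rather than my actual code; the population sizes and file names are guesses, and (much like my version) it skips the colour mutation and crossover:

import random
import numpy as np
from PIL import Image, ImageDraw

POP, KEEP, TRIS, GENS = 40, 10, 200, 100  # population size/generations are guesses

def random_triangle(width, height):
    points = [(random.randrange(width), random.randrange(height)) for _ in range(3)]
    colour = tuple(random.randrange(256) for _ in range(4))  # RGBA fill
    return points, colour

def render(genome, size):
    # Draw every triangle in the genome onto a black canvas with alpha blending
    image = Image.new("RGB", size, "black")
    draw = ImageDraw.Draw(image, "RGBA")
    for points, colour in genome:
        draw.polygon(points, fill=colour)
    return image

def score(image, target):
    # Lower is better: mean squared per-pixel difference from the target
    a = np.asarray(image, dtype=np.float64)
    b = np.asarray(target, dtype=np.float64)
    return ((a - b) ** 2).mean()

def mutate(genome, size):
    # Replace one triangle at random (colour/position tweaks would go here)
    child = list(genome)
    child[random.randrange(len(child))] = random_triangle(*size)
    return child

target = Image.open("target.jpg").convert("RGB")
population = [[random_triangle(*target.size) for _ in range(TRIS)]
              for _ in range(POP)]
for generation in range(GENS):
    population.sort(key=lambda g: score(render(g, target.size), target))
    survivors = population[:KEEP]  # keep the 10 best matches
    population = survivors + [mutate(random.choice(survivors), target.size)
                              for _ in range(POP - KEEP)]
render(survivors[0], target.size).save("approximation.jpg")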

[Image comparison: original image and processed image side by side]

Time Slicing

Not in the cool Terry Pratchett "Thief of Time" way. More in the "messing-around-with-video-to-get-funny-effects" way.

So imagine video as a set of still images. Then imagine stacking these pictures one in front of the other. To play the video you would start at the front of the stack and move backwards through it, making each image in turn totally transparent.

Now take that same stack of images and rotate it 90 degrees about the y axis so you're looking at it side on (the x axis and time axis have now swapped round). Then take slices through it. You have now time sliced!

[Image: illustration of rotating the "image stack"]

Hold onto your hats, because here's a video of this in action:

What can this be used for, you may be asking? Well, if you run a horizontal Sobel filter over the individual time-sliced frames you get a "rate-of-change-with-time" for each pixel.

Now take a feed from a camera. Stick the incoming frames onto a queue of a set size. time slice these frames. Do a sobel filter on them and then spin em back to normal and compress all the frames into a single frame (do the average or just take one of them or something). Then use this single frame as an alpha channel mask. This will give you a sort-of-temporally-adjusting-background-subtraction-type-thing.