Hello people.

I am looking for an algorithm to reduce a screen image to a very small size, 
e.g. 3x2 pixels (width x height), so that each pixel is the average colour 
(RGB) of the corresponding screen area. For the given size 3x2, the screen 
would be divided into 3 columns and 2 rows, which means a 1920x1080 display 
would be split into 6 areas of 640x360 pixels each, like this:

    1   2   3
  +---+---+---+
4 | o | o | o | 5
  +---+---+---+
6 | o |   | o | 7
  +---+---+---+

Strips 1 to 7 correspond to the areas marked with an o. The two top-corner 
areas each feed two strips (1 and 4 share the top-left area, 3 and 5 the 
top-right), which is how 7 strips get by on only 5 RGB values.
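
Here is a minimal sketch of the averaging itself, assuming the grab hands me 
a packed 32-bit buffer in the usual Xorg ZPixmap layout (blue in the low 
byte); the grid size and pixel layout here are assumptions, nothing is fixed 
yet:

    /* average.c: reduce a w x h 32-bit image to a GRID_W x GRID_H
     * grid of average colours. */
    #include <stdint.h>

    #define GRID_W 3
    #define GRID_H 2

    struct rgb { uint8_t r, g, b; };

    /* out must hold GRID_W * GRID_H entries */
    void reduce(const uint32_t *img, int w, int h, struct rgb *out)
    {
        int cell_w = w / GRID_W, cell_h = h / GRID_H;

        for (int gy = 0; gy < GRID_H; gy++) {
            for (int gx = 0; gx < GRID_W; gx++) {
                uint64_t r = 0, g = 0, b = 0;
                for (int y = gy * cell_h; y < (gy + 1) * cell_h; y++) {
                    for (int x = gx * cell_w; x < (gx + 1) * cell_w; x++) {
                        uint32_t p = img[y * w + x];
                        r += (p >> 16) & 0xff;  /* assumes red in bits 16..23 */
                        g += (p >>  8) & 0xff;
                        b +=  p        & 0xff;
                    }
                }
                uint64_t n = (uint64_t)cell_w * cell_h;
                out[gy * GRID_W + gx] = (struct rgb){
                    (uint8_t)(r / n), (uint8_t)(g / n), (uint8_t)(b / n)
                };
            }
        }
    }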

I am planning to build an ambilight system with LED strips I bought at IKEA: 
3 strips on top of the monitor and 2 on each side. Each strip would reflect 
the average colour of its own area of the screen, hence the algorithm.

There would be around 10-15 computations per second. The project is built 
around an external USB module that drives the LED strips; it should receive 
only an array of values (5 RGB values, exactly). The image reduction should 
take place either in a screen-grabbing daemon or in a modified VNC server; I 
haven't evaluated yet which one is best for that. In short, the code will run 
on a Linux machine (mine: Gentoo, 64-bit, kernel 3.4, running Xorg).
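
For the daemon route, a plain XGetImage() on the root window is probably the 
simplest starting point (XShmGetImage would avoid a copy, but the shape is 
the same). A sketch, assuming the reduce() code above is in scope and that 
the server returns 32 bits per pixel with no row padding:

    /* grab.c: grab the root window and feed it to reduce() above.
     * Build with: cc -o grab grab.c -lX11 */
    #include <X11/Xlib.h>
    #include <unistd.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        int scr = DefaultScreen(dpy);
        int w = DisplayWidth(dpy, scr), h = DisplayHeight(dpy, scr);
        struct rgb leds[GRID_W * GRID_H];

        for (;;) {
            XImage *img = XGetImage(dpy, RootWindow(dpy, scr),
                                    0, 0, w, h, AllPlanes, ZPixmap);
            if (!img)
                break;

            /* assumes img->bits_per_pixel == 32 and
             * img->bytes_per_line == 4 * w */
            reduce((const uint32_t *)img->data, w, h, leds);
            XDestroyImage(img);

            /* ...send the 5 relevant entries of leds[] to the USB
             * module here... */
            usleep(100000);  /* ~10 updates per second */
        }
        XCloseDisplay(dpy);
        return 0;
    }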

There are Arduino kits for the hardware part, so I'm focusing on the software 
part right now. Most algorithms I have seen grab the screen and then average 
with nested loops over every pixel, and I'm sure there's a better way (MMX or 
other SIMD instructions?). I also thought of JPEG reduction, since its DCT 
stage computes roughly what I need: the DC coefficient of each 8x8 block is 
proportional to that block's average colour, so keeping only the DC terms and 
discarding the rest of the spectrum yields a scaled-down average image. So my 
question is: can the JPEG compression algorithm be effectively used and 
optimized to get that to work? Or if you have better ideas or hints, I'm open.
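
Independently of JPEG, one shortcut I can see: since the LEDs only need a 
rough average, sampling a sparse grid of pixels instead of all of them cuts 
the work by a large factor. A variant of the reduce() sketch above, assuming 
every 8th pixel in each direction is representative enough:

    /* Sparse variant of reduce(): sample every STEP-th pixel in each
     * direction, i.e. 1/64 of the pixels at STEP 8 (about 32k reads
     * per 1920x1080 frame instead of ~2M). */
    #define STEP 8

    void reduce_sparse(const uint32_t *img, int w, int h, struct rgb *out)
    {
        int cell_w = w / GRID_W, cell_h = h / GRID_H;

        for (int gy = 0; gy < GRID_H; gy++) {
            for (int gx = 0; gx < GRID_W; gx++) {
                uint64_t r = 0, g = 0, b = 0, n = 0;
                for (int y = gy * cell_h; y < (gy + 1) * cell_h; y += STEP) {
                    for (int x = gx * cell_w; x < (gx + 1) * cell_w; x += STEP) {
                        uint32_t p = img[y * w + x];
                        r += (p >> 16) & 0xff;
                        g += (p >>  8) & 0xff;
                        b +=  p        & 0xff;
                        n++;
                    }
                }
                out[gy * GRID_W + gx] = (struct rgb){
                    (uint8_t)(r / n), (uint8_t)(g / n), (uint8_t)(b / n)
                };
            }
        }
    }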

Thanks a lot in advance.