Some people on this list will undoubtedly have really good ideas for
how you can use Python to do this. Here are some ideas from my own
exploration of this concept (I built a dual-camera 3D vision system a
while ago)...

One idea, and probably the best of the bunch, is to use a physical
filter to limit the range of colors you receive. (You may be able to
do this in software as well; my cameras had a software-controllable
hue adjustment that let me apply the filter without needing a physical
filter or any pre-processing of the image.)

Once you've got a range of values from black to whatever your filter
color is, you can then find the largest region of the frame whose
colors are far from black.
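As a rough sketch of that "far from black" step (my names and cutoff
here are just for illustration, not what my original system used):
after filtering, treat the frame as a grid of intensities and mark the
pixels above some threshold.

```python
# Sketch: mark pixels that are "far from black" after filtering.
# The "image" here is just a list of rows of intensity values
# (0 = black); a real frame would come from your capture library.

def threshold_mask(image, cutoff):
    """Return a 0/1 mask marking pixels whose value exceeds cutoff."""
    return [[1 if v > cutoff else 0 for v in row] for row in image]

image = [
    [0, 0, 1, 0],
    [0, 7, 9, 2],
    [0, 6, 8, 1],
    [0, 0, 0, 0],
]

# Pixels with value > 2 are marked 1; everything near black stays 0.
mask = threshold_mask(image, 2)
print(mask)
```

From a mask like this you can look for the largest connected region,
or just take the centroid of all the marked pixels, as below.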

As a possibility, you may find it easier to process the image one axis
at a time. For example, assuming your image were only 8 pixels wide by
8 pixels high:
[
[0,0,0,0,0,0,0,0],
[0,0,0,2,2,0,0,0],
[0,0,2,7,7,2,0,0],
[0,2,5,9,9,4,2,0],
[0,0,2,7,7,2,0,0],
[0,0,0,2,2,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0]
]

You could then average each column to find the x position, which gives you

0, 0.25, 1.13, 3.38, 3.38, 1, 0.25, 0

meaning the colored item is at (zero-based) column 3 or 4 (roughly 3.5).

Then average each row to find the y position, which is

0, 0.5, 2.25, 3.88, 2.25, 0.5, 0, 0

meaning the colored item is at row 3, giving a coordinate of roughly
x=3.5, y=3.

I'll admit that going into my project I knew very little about Python,
so I could probably do it much better now. The biggest problem with my
system was that my camera, which could normally do 12 to 15 frames per
second, could only process the stereo images at 1.5 - 2.5 frames per
second.

I say the "camera could only process..." but I really mean the
software/camera combination.

I ended up spending a lot more time than expected trying to make the
motion fluid from a 2 fps image stream. It ended up working very well,
though. :-) It was a big learning-curve project for me, since my
cameras could only work together in Linux (in Windows, the camera
driver had a bug allowing only one at a time), so I had to learn some
kernel driver hacking to enable the hue adjustment, then video4linux,
then image processing, then the graphics toolkit to display the
images.

On 8/13/07, Jonathan Shao <[EMAIL PROTECTED]> wrote:
> I'm a relative newbie when it comes to image processing, so please bear with
> me...
>
> What I want to do is to set up a static camera such that it can track the
> motion of a person wearing a particular color marker walking around in an
> interior room (no windows). The "color marker" can be something like a
> t-shirt with a unique color distinctly different from the background, and I
> think I'll need to take periodic snap-shots of the room to simulate
> real-time tracking as closely as possible. In effect, it's a simplified
> scenario for person tracking.
>
> For now, I'm just trying to get the color detection part of my project
> working. I think I should be able to just do a simple background subtraction
> between a snapshot by the camera and a reference image of the background,
> and that should be able to give me good color detection. The issues I'm
> concerned with:
>
> a) Do I need to worry about different hues/illuminations of the same color?
>
> b) Is this a realistic method to implement with the PIL, or would I have to
> deal with issues of speed while trying to track the marker?
>
> --
> "Perhaps we all give the best of our hearts uncritically, to those who
> hardly think about us in return."
> ~ T.H.White
> _______________________________________________
> Image-SIG maillist  -  Image-SIG@python.org
> http://mail.python.org/mailman/listinfo/image-sig
>
>


-- 
Matthew Nuzum
newz2000 on freenode