Not related to 3D work but to image recognition, so somewhat off topic. Present image recognition tends to work by finding similar shapes or by following edge contrast. Stick a camera on a computer and it will capture a colour value for each pixel; from those values it can try to guess what it's looking at, using shape-recognition algorithms and the like.

In the example picture, though, it will see the same level of grey (120,120,120) for both the dark squares in the light and the white squares in the shadow, and so won't be able to recognise that there's any difference between them.

A human viewer doesn't see each pixel of the image, but an overall scene. The onlooker sees the green cylinder, the apparent shadow, and the pattern of squares, concludes it's a 3D scene with part of the checkerboard in shadow, and adjusts the perception of the colours accordingly. To us it's easy to pick out a dark square in the light or a white square in the shadow, but a computer would need some fairly complex processing rather than a simple comparison of pixel values. Searching for pixels of (120,120,120) isn't going to find and distinguish between light and dark checkers.
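The point about raw pixel values being ambiguous can be shown with a few lines of code. This is only a toy sketch, not real image-recognition code: the reflectance values and illumination multipliers are made-up numbers chosen so that a dark square in sunlight and a light square in shadow both come out at 120, matching the (120,120,120) example above. The `is_light` helper is a hypothetical name for the idea of judging a square against its neighbours, which share the same illumination.

```python
# Toy illustration of the checker-shadow problem: raw pixel values
# cannot separate "light square in shadow" from "dark square in light",
# but comparing a square to its immediate neighbours can.
# All constants are hypothetical, picked so both cases land on 120.

LIGHT, DARK = 200, 100    # surface reflectances (made up)
SUN, SHADOW = 1.2, 0.6    # illumination multipliers (made up)

def observed(reflectance, illumination):
    """Pixel value the camera actually records."""
    return round(reflectance * illumination)

dark_in_sun = observed(DARK, SUN)          # 100 * 1.2 = 120
light_in_shadow = observed(LIGHT, SHADOW)  # 200 * 0.6 = 120

# Naive pixel comparison: the two squares are indistinguishable.
print(dark_in_sun == light_in_shadow)  # True

def is_light(square, neighbours):
    """Judge a square relative to its neighbours, which share
    the same illumination -- a crude stand-in for the context a
    human viewer uses."""
    return square > sum(neighbours) / len(neighbours)

# The dark square's neighbours are light squares in the same sun...
print(is_light(dark_in_sun, [observed(LIGHT, SUN)] * 4))        # False
# ...the light square's neighbours are dark squares in the same shadow.
print(is_light(light_in_shadow, [observed(DARK, SHADOW)] * 4))  # True
```

So the same raw value, 120, is correctly classified as "dark" in one context and "light" in the other once the comparison is local rather than absolute, which is roughly what a human visual system does automatically.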
David Coombes <[EMAIL PROTECTED]> wrote:
> ...
> ...
> ...

What?

Now you nit-wits are actually talking in tongues!

Could you please take 5 minutes to attempt to formulate in an email exactly what it is you mean to say, so that the rest of us might understand ... please?

TIA

Garry Curtis
http://www.niagara.com/~studio
