Thomas Kumlehn wrote:
Hi Hugh - What exactly makes the Left/Right to Front/Back or Color/Weight compositing either scene- or application-specific?
IMHO this compositing could and should be done in hardware when possible
a) to avoid dependency on closed-source applications
b) to avoid extra "render to L/R textures (not fb)" and "render L/R to 
Front/Back" passes (I guess this needs a fragment shader?)
c) to be faster

The only settings you can change in the iZ3D DirectX driver, that I'm aware of, are
1) the stereo base
2) the virtual screen plane (separating objects into "in front of" or "behind the 
screen plane")
and both only affect the creation of the Left/Right images.

Well, a strong hint is that iZ3D themselves are taking a software
approach. Their whitepaper talks about a "software based image
control algorithm" and the requirements are a Windows PC, nVidia
dual head graphics card, and nVidia stereo driver with "embedded
iZ3D algorithm."

A quick review of stereo for anyone reading who isn't familiar
with the process: you have to present different images to the
left and right eye, i.e. render the scene twice from slightly
different viewpoints. If you look at any stereo image, even a
simple red-blue one, without the appropriate eyewear it looks
fuzzy because everything is drawn twice, some distance apart.
The two images have (almost) the same content, but the pixels
are moved spatially. For things closer to the viewpoint, the
pixels are further apart; for things in the distance the pixels
become closer and closer.
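For parallel (non-converged) cameras this falls out of similar triangles: disparity is inversely proportional to depth. A minimal sketch, with parameter names of my own choosing:

```python
def disparity_px(z, baseline, focal_px):
    """Pixel disparity between left and right eye images for parallel
    cameras: focal length (in pixels) times camera baseline, over depth.
    Large for near points, shrinking toward zero in the distance."""
    return focal_px * baseline / z
```

A point at 2 metres with a 65 mm baseline and an 800 px focal length lands 26 px apart in the two images; at 50 metres the same geometry gives barely a pixel.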

Shutter glasses separate the two images in time, L-R-L-R-...
with the glasses blanking out the opposite eye. Polarised stereo (not
iZ3D's method) separates the two images with polarised light, showing
both images at once, overlaid onto the same area. In both these
forms of stereo the image is drawn twice into separate frame
buffers and any given pixel in any frame is only seen by one
eye. The application generating the image knows about stereo
and explicitly adjusts the viewpoint for L and R eye images.

iZ3D are claiming that they can generate polarised stereo
images from a single frame buffer, which means that each pixel
in the frame buffer can be seen by both eyes. So that pixel
becomes the equivalent of two pixels worth of information
from a stereo image.

Case #1: you have an existing 3D application which already
generates L and R eye stereo images. These have to be composited
together into the iZ3D single image. The problem is, exactly
which two source pixels contribute to each final iZ3D pixel?
The separation varies with distance, so if you know the
frustum offset used in the stereo projection and have access
to the 3D depth buffer and know the range of depth values
used by the 3D renderer, you can calculate the pixel distance
between the L and R eye images at any given pixel and match
them up. Yeah, it's trigonometry, but the hardware isn't going
to be simple.
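One way the matching calculation could go, assuming a standard perspective depth buffer and off-axis frusta converging at the screen plane (the parameter names are mine, not anything iZ3D have published):

```python
def eye_z_from_depth(d, z_near, z_far):
    """Recover eye-space depth from a normalized [0,1] depth-buffer
    value, inverting the standard perspective depth encoding."""
    return (z_near * z_far) / (z_far - d * (z_far - z_near))

def parallax_px(z, interaxial, screen_dist, px_per_unit):
    """On-screen separation between the L and R projections of a point
    at eye-space depth z, for off-axis frusta converging at screen_dist:
    zero at the screen plane, negative (crossed) in front of it, and
    approaching the interaxial distance far behind it."""
    return interaxial * (1.0 - screen_dist / z) * px_per_unit
```

Chaining the two gives you, per pixel, how far to look in the other eye's image, which is the matching step the paragraph above describes.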

OK, maybe we can dump this problem onto the application and just
do compositing once we have the pixels?

Now suppose those two pixels are very different, say the L
eye pixel is bright red and the R eye is bright green. How
do you composite them? Do you adjust the weight for L and
R eye based on overall intensity or individual component
intensity, or mix the colors evenly? And whatever calculation
you choose, someone will have an image where it doesn't work.
I was involved in the stereo implementation for Visual Python
and "simple" red-blue stereo turned out to have all kinds of
special cases. You're going to have to implement a color
blend function (that has to produce two outputs, one to the
back RGB DVI output and one to the front polarisation DVI)
with user-adjustable parameters and arithmetic.
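To make the problem concrete, here is one possible blend. This is purely illustrative, not iZ3D's algorithm: mix the colors for the back RGB panel, and derive a per-pixel left-eye weight for the front polarisation panel from relative luminance.

```python
def composite(left, right, default_weight=0.5):
    """Illustrative compositing sketch: the back panel gets the average
    of the two eye colors; the front panel gets the fraction of light
    to steer to the left eye, from relative luminance (Rec. 601
    weights). Colors are (r, g, b) tuples in [0, 1]."""
    back = tuple((l + r) / 2.0 for l, r in zip(left, right))
    def lum(c):
        return 0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2]
    total = lum(left) + lum(right)
    front = lum(left) / total if total > 0 else default_weight
    return back, front
```

Bright-red left against bright-green right already misbehaves here: green's higher luminance pulls the weight toward the right eye, which is exactly the kind of special case mentioned above.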

Case #2: you have a stereo movie or input from a stereo pair
of video cameras, without depth information. Well, you can
still make it work by, say, starting with L eye pixel position
in the R eye image and searching for a match in the vicinity.
It shouldn't be far away, and in the same scanline. Unless
the camera could have been tilted, in which case you might have
to search at an angle ... Again, not an easy piece of hardware
to build.
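The search itself could be a plain sum-of-absolute-differences scan along the scanline; a sketch under the untilted-camera assumption, with arbitrary window and search-radius choices:

```python
def find_match(left_row, right_row, x, radius=8, win=2):
    """Find the position in right_row whose neighbourhood best matches
    the window around left_row[x], by sum of absolute differences.
    Assumes both rows are the same (untilted) scanline of a grayscale
    image pair."""
    best_x, best_err = x, float("inf")
    lo = max(win, x - radius)
    hi = min(len(right_row) - win - 1, x + radius)
    for cx in range(lo, hi + 1):
        err = sum(abs(left_row[x + k] - right_row[cx + k])
                  for k in range(-win, win + 1))
        if err < best_err:
            best_err, best_x = err, cx
    return best_x
```

A tilted camera turns this one-dimensional scan into a two-dimensional one, which is the hardware headache mentioned above.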

Now that I've worked through all this, I can't see how this
iZ3D is going to be a success at all! There are good reasons
why people keep using shutter glasses and dual-head polarisation,
awkward and expensive though they may be.

The easiest way to use the iZ3D would be not to try to mix
pixels together at all, instead just interlace them vertically
(not horizontally and vertically as I wrongly suggested before)
with a fixed L-R-L-R pattern on the polarisation and fixed
L-0-L-0 and 0-R-0-R stencils for the frame buffer to separate
the left and right eye images.
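Reading "vertically" as alternating scanlines (my assumption; the stripe orientation depends on the panel), the fixed pattern is trivial by comparison:

```python
def interlace_rows(left, right):
    """Fixed L-R-L-R scanline interlace of two equal-sized images
    (given as lists of rows): even rows from the left eye, odd rows
    from the right, matching a fixed alternating polarisation stripe."""
    return [left[y] if y % 2 == 0 else right[y]
            for y in range(len(left))]
```

Half the vertical resolution per eye, but no colour arithmetic, no depth recovery, and no per-scene tuning.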

--
        Hugh Fisher
        DCS, ANU
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
