I was looking at the source for the radeon driver and noticed that the depth buffer is always set to 32 bits when a 24 bpp color depth is selected. Is this a hardware limitation, or would it be possible to change it to 16 bits?
There are two ways to do this. One is to add an option to the X server to select the depth-buffer size. Look in programs/Xserver/hw/xfree86/drivers/ati/radeon_driver.c to see how options (like "AGPSize") are handled. Basically, you'd add an entry like
{ OPTION_DEPTH_SIZE, "DepthBits", OPTV_INTEGER, {0}, FALSE },
to the RADEONOptions table. Then, if the value is non-zero, use it everywhere the driver currently hard-codes the bit-size of the depth buffer. That way you could use a 24-bit Z-buffer with 16-bit color, or a 16-bit Z-buffer with 24-bit color.
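For illustration, here is a rough sketch of how the parsing side might look in the driver's option handling, using the standard xf86GetOptValInteger/xf86DrvMsg helpers. The OPTION_DEPTH_SIZE token and the info->depthBits field are made-up names for this example, not something that exists in the driver today:

    /* Sketch only: read an optional "DepthBits" value and fall back to
     * the current behaviour when it is absent or unsupported. */
    int depthBits = 0;

    if (xf86GetOptValInteger(info->Options, OPTION_DEPTH_SIZE, &depthBits)) {
        if (depthBits != 16 && depthBits != 24 && depthBits != 32) {
            xf86DrvMsg(pScrn->scrnIndex, X_WARNING,
                       "Unsupported DepthBits value %d, ignoring\n",
                       depthBits);
            depthBits = 0;
        }
    }

    /* No (valid) option given: keep tying the Z-buffer size to the
     * colour depth, as the driver does now. */
    if (depthBits == 0)
        depthBits = (pScrn->depth == 16) ? 16 : 32;

    info->depthBits = depthBits;   /* hypothetical field */

The rest of the change is then mechanical: replace every hard-coded use of the depth-buffer size with info->depthBits (or whatever you call it).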
The better way is going to take a LOT more work. Part of the work on the texmem-0-0-2 branch will be to make the depth buffer dynamically allocated when the GL context is created. Once that's done, we'll be able to mix and match depth-buffer sizes at run time; that is, we could have some windows with a 16-bit depth buffer, some with 24-bit, and some with a 32-bit floating-point depth buffer, all at the same time.
It will probably be a few months before we get that far, though.
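Just to make the idea concrete, here is a purely illustrative sketch of what per-context depth-buffer allocation could look like. None of these names exist in the DRI tree; malloc stands in for whatever card-memory allocator ends up being used:

    #include <stdlib.h>

    typedef enum {
        DEPTH_16_UINT,     /* 16-bit integer Z                        */
        DEPTH_24_UINT,     /* 24-bit Z, usually padded/packed to 32   */
        DEPTH_32_FLOAT     /* 32-bit floating-point Z                 */
    } DepthFormat;

    typedef struct {
        DepthFormat format;
        unsigned    pitch;    /* bytes per row                        */
        void       *mem;      /* per-drawable allocation              */
    } DepthBuffer;

    /* Called when a GL context is created: allocate a depth buffer that
     * matches the visual the application asked for, instead of sharing
     * one screen-sized buffer with a fixed format. */
    static DepthBuffer *
    allocate_depth_buffer(DepthFormat format, unsigned width, unsigned height)
    {
        DepthBuffer *db = malloc(sizeof(*db));
        unsigned bpp;

        if (!db)
            return NULL;

        switch (format) {
        case DEPTH_16_UINT:  bpp = 2; break;
        case DEPTH_24_UINT:  bpp = 4; break;   /* 24-bit Z padded to 32 */
        case DEPTH_32_FLOAT: bpp = 4; break;
        default:             free(db); return NULL;
        }

        db->format = format;
        db->pitch  = width * bpp;
        db->mem    = malloc(db->pitch * height);
        if (!db->mem) {
            free(db);
            return NULL;
        }
        return db;
    }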
