I've checked in some changes for Mesa 3.3 so that it can support different sizes of depth buffers at runtime. Previously you had to recompile for 16bpp or 32bpp depth values, which was bad for the new hardware device drivers. Now you can choose any depth from 0 to 32 bits. For software rendering, GLushorts are used for the depth buffer if depthBits <= 16; otherwise GLuints are used. When depthBits is less than 16 (or less than 32), the uppermost bits of each value are left at zero.

The depth-buffer span read/write functions in the device driver interface always read/write 32-bit values. If the hardware depth buffer is 16bpp you'll need to convert. That's a small performance hit, but it's a software-fallback path anyway. I've implemented this in the 3dfx driver.

Mesa's pseudo glXChooseVisual() function has been modified a bit for the GLX_DEPTH_SIZE option. The OpenGL spec says that if you pass GLX_DEPTH_SIZE=1 (which is what GLUT and most apps do) then OpenGL should use the deepest depth buffer available. Had I done that, you'd get a 32bpp depth buffer, and a 32bpp buffer is considerably slower than 16bpp. Rather than impose that performance hit on everyone, I changed glXChooseVisual to use 16bpp when GLX_DEPTH_SIZE=1 is passed in. If you really want a 24bpp buffer, just pass GLX_DEPTH_SIZE=24, for example. Since there are other ways in which Mesa's pseudo GLX varies from the spec, I don't think this is too big of a deal. Apps which really care about visuals and their performance probably shouldn't use glXChooseVisual anyway.

Finally, there is an overflow problem with 32-bit depth buffers when software rendering (fragment Z interpolation). I haven't studied it yet, but it can probably be solved. Until then, a 31-bit depth buffer is the practical limit (and the limit imposed by glXChooseVisual).

Just for fun I also tested weird values like 17bpp, 9bpp and 3bpp. They work.
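To make the runtime storage rule concrete, here's a small sketch. The function names are mine, not Mesa's internals; it just shows the depthBits <= 16 split and the "upper bits stay zero" property:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch (hypothetical names): the per-pixel depth
 * storage size is chosen from depthBits at runtime rather than at
 * compile time. GLushort storage for depthBits <= 16, GLuint
 * otherwise. */
static size_t depth_elem_size(int depthBits)
{
   return (depthBits <= 16) ? sizeof(uint16_t) : sizeof(uint32_t);
}

/* Largest storable depth value for a given depthBits; all bits
 * above depthBits are left at zero, e.g. 0xFFFF for 16 bits,
 * 0x7 for 3 bits. */
static uint32_t depth_max(int depthBits)
{
   return (depthBits >= 32) ? 0xFFFFFFFFu
                            : ((1u << depthBits) - 1u);
}
```

This is also why odd sizes like 17bpp or 3bpp fall out for free: the max value is just computed from depthBits.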
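The 16bpp conversion in the driver span functions amounts to widening on read and narrowing on write. A hedged sketch (hypothetical function names, not the actual 3dfx driver code):

```c
#include <stdint.h>

/* Driver interface passes 32-bit depth values; with a 16bpp
 * hardware depth buffer the driver widens on read... */
static void read_depth_span16(const uint16_t *hw, uint32_t *out, int n)
{
   for (int i = 0; i < n; i++)
      out[i] = hw[i];                       /* upper 16 bits zero */
}

/* ...and narrows on write. */
static void write_depth_span16(uint16_t *hw, const uint32_t *in, int n)
{
   for (int i = 0; i < n; i++)
      hw[i] = (uint16_t)(in[i] & 0xFFFFu);  /* keep low 16 bits */
}
```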
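The glXChooseVisual policy described above can be sketched like this (again a hypothetical helper, not the actual Mesa code), including the 31-bit cap:

```c
/* Sketch of the GLX_DEPTH_SIZE selection policy: a request of 1
 * (what GLUT and most apps pass) maps to 16 bits instead of the
 * deepest available buffer; explicit requests are honored up to
 * the 31-bit practical limit. */
static int choose_depth_bits(int requested)
{
   if (requested == 0)
      return 0;                    /* no depth buffer requested */
   if (requested == 1)
      return 16;                   /* fast default, not deepest */
   return (requested <= 31) ? requested : 31;
}
```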
-Brian

_______________________________________________
Mesa-dev maillist - [EMAIL PROTECTED]
http://lists.mesa3d.org/mailman/listinfo/mesa-dev
