On Wed, 1 Sep 1999 [EMAIL PROTECTED] wrote:
> First, one might well ask why texture memory is treated differently
> from framebuffer memory in the first place. Why not have just one
> chunk of memory that you can use for both texture and rendering?
> Loosely, the answer is that you can often get better performance by
> using physically separate memories or organizing the data differently
> for different uses. Here are some examples:
>
> Bilinear filtering requires that you access four texels for
> each pixel you draw; therefore the bandwidth requirements for
> reading texture memory can be as much as four times higher
> than those for writing pixels in the framebuffer.
...and mipmapping with GL_LINEAR_MIPMAP_LINEAR can require 8 texels
to be accessed per pixel. However, the RAM bandwidth isn't as bad
as that suggests, because the rendering chip will probably have a
texel cache that makes the overhead look more like 2x than 4x or 8x.
> In addition to pixels, the framebuffer often contains a Z
> buffer, stencil buffer, etc. These things have to be arranged
> in memory so that they can be accessed quickly during
> rendering. Texture memory wouldn't normally support these
> things unless you make the design decision that rendering to
> texture is critically important.
That's the part that makes rendering-to-texture hard (conceptually) to
build into a clean API.
> Historically, hardware designers have done things like organizing
> texture memory in multi-way interleaved fashion, rather than linearly,
> to get better performance.
All these things explain why some devices can't render to texture
at all. However, others *can* do so - and an extension that exposed
that functionality would be useful. Even on devices that can't
do that, there may be a fast path from frame buffer to texture
memory that can be exploited.
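On hardware with such a fast path, the existing API already lets you
exploit it as a copy (not true render-to-texture) via
glCopyTexSubImage2D. A minimal sketch, assuming a current OpenGL 1.1
context and a 256x256 texture that has already been created:

```c
#include <GL/gl.h>

/* Copy the lower-left 256x256 region of the current read buffer into
 * level 0 of an existing texture.  On a capable chip the driver can
 * perform this copy entirely on the card, without a round trip
 * through host memory. */
void update_texture_from_framebuffer(GLuint tex)
{
    /* ...render the scene you want captured, then: */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  /* mip level 0          */
                        0, 0,              /* dest x,y in texture  */
                        0, 0,              /* src x,y in buffer    */
                        256, 256);         /* width, height        */
}
```

Whether the copy actually stays on the card is up to the driver, of
course - but that is exactly the sort of fast path worth exposing.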
Rendering to unused areas of the frame buffer is another valuable
trick that could be exposed on some chips. If you have that, then
render-to-texture can be emulated comfortably. E.g. if your Voodoo
card *can* run at 800x600 but the screen is actually set to 640x480,
why can't you render to that wasted area? The chip must be *able*
to render there - it's just a matter of getting past the API to let
you do so.
> So let's assume that you've got hardware that can support rendering to
> texture memory. What changes would you need to make to the OpenGL API
> to support it? Here are some things that come to mind:
>
> You would need to expose a new concept for a drawing
> surface. Since OpenGL currently has notions of pixel format
> descriptors or X11 Visuals that are associated with windows,
> you'd need to extend that somehow to textures. New pixel
> formats probably would be needed. Changes to glXMakeCurrent
> (or its equivalent on other systems) would be needed.
Yes - this belongs outside the core OpenGL API.
> You'd need to figure out how to manage the Z buffer and other
> ancillary buffers that might be associated with a texture that
> you're using as a target for rendering. Would you re-use the
> Z buffer associated with the window, or allocate a new one?
You certainly couldn't re-use the main Z buffer, because you don't
know whether the application might want to continue using it.
You might decide that no Z buffer is possible for those kinds
of context.
> You'd need to figure out which odd corner-cases could arise
> and how you want to handle them. What if you use a texture
> while you're rendering to it?
You should (at the API level) simply describe that as "undefined
behaviour".
> What happens if another thread
> deletes a texture while you're rendering to it?
That's illegal anyway. What happens if another thread deletes
a texture while you are rendering *with* it?
> What if you
> need to load a new texture into texture memory, but can't do
> so because the one you're using as a rendering target has
> caused texture memory to be filled or fragmented? (And what
> implications does that have for proxy textures, which are
> supposed to give you an ironclad guarantee about whether a
> texture can be loaded or not?)
No - they don't guarantee that there will be enough texture memory
to make the texture resident in hardware texture memory. The
call only guarantees that if texture memory were currently empty,
your texture could be accommodated.
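The proxy mechanism itself is just a dry-run glTexImage2D followed by
a query - a sketch, assuming a current GL context:

```c
#include <GL/gl.h>

/* Ask the implementation whether a w x h RGBA texture *could* be
 * accepted, without actually allocating one.  A zero width read back
 * from the proxy target means "no" - but a "yes" only promises the
 * format and size are supported, not that memory is free right now. */
int texture_size_supported(GLsizei w, GLsizei h)
{
    GLint got_width = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &got_width);
    return got_width != 0;
}
```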
Steve Baker (817)619-2657 (Vox/Vox-Mail)
Raytheon Systems Inc. (817)619-2466 (Fax)
Work: [EMAIL PROTECTED] http://www.hti.com
Home: [EMAIL PROTECTED] http://web2.airmail.net/sjbaker1
_______________________________________________
Mesa-dev maillist - [EMAIL PROTECTED]
http://lists.mesa3d.org/mailman/listinfo/mesa-dev