Steve Baker wrote:
|  
| ...and MIPmapping with GL_LINEAR_MIPMAP_LINEAR can require 8 texels
| to be accessed per pixel. ...

And some forms of anisotropic filtering require even more.  I just
thought I'd keep the discussion simple by offering only one example. :-)

|                       ... However, the RAM bandwidth isn't as bad
| as that because the rendering chip will probably have a texel cache
| that will make the overhead look more like 2x than 4x or 8x

This, too, is complicated -- depending on whether you're mipmapping,
how LOD is computed, whether the hardware designer intends to
guarantee a fill rate, etc.  But the bottom line is still the same;
you typically need to fetch more data from texture memory than from
the framebuffer, so people have traditionally designed separate
texture memories with optimization for different access patterns than
the framebuffer.

My goal here was just to make sure everyone understood the reasons why
things are designed in a particular way.

| >         In addition to pixels, the framebuffer often contains a Z
| >         buffer, stencil buffer, etc.  These things have to be arranged
| >         in memory so that they can be accessed quickly during
| >         rendering.  Texture memory wouldn't normally support these
| >         things unless you make the design decision that rendering to
| >         texture is critically important.
|  
| That's the part that makes rendering-to-texture hard (conceptually) to
| build into a clean API.

The difference between color formats supported by textures and by
windows worries me at least as much.  Consider rendering to an S3TC
compressed texture.  :-)

| All these things explain why some devices can't render to texture
| at all. However, others *can* do so - and an extension that exposed
| that functionality would be useful.  Even on devices that can't
| do that, there may be a fast path from frame buffer to texture
| memory that can be exploited.

Yes.  Explaining the background information helps everyone understand
why the feature isn't universal, though, and why it may have radically
different performance characteristics even across machines that
support it.

| Rendering to unused areas of the frame buffer is another valuable
| trick that could be exposed on some chips.

Absolutely.  

|                                             ... The chip must
| be *able* to render there - it's just a matter of getting past
| the API to let you do so.

That's what PBuffers and FBConfigs are all about.  Possibly they could
be used as a model for a render-to-texture extension.

| >         You'd need to figure out how to manage the Z buffer and other
| >         ancillary buffers that might be associated with a texture that
| >         you're using as a target for rendering.  Would you re-use the
| >         Z buffer associated with the window, or allocate a new one?
|  
| You certainly couldn't re-use the main Z buffer because you don't
| know if the application might want to continue to use it.

I wouldn't rule it out instantly -- after all, rendering to auxiliary
buffers involves sharing the main Z buffer, and it's still useful in
many cases.

| You might decide that there is no Z buffer possible for those kinds
| of context.

Yes, though if the hardware is capable of rendering to texture while
using Z buffering, it would be nice to support it.  I imagine that would
be a common case, for example, when rendering an environment map.

| >         You'd need to figure out which odd corner-cases could arise
| >         and how you want to handle them.  What if you use a texture
| >         while you're rendering to it?
| 
| You should (at the API level) simply describe that as "undefined
| behaviour".

Would that be desirable if the user planned to store multiple texture
images inside a single large texture, perhaps to reduce texture
binding costs?

| >         What happens if another thread
| >         deletes a texture while you're rendering to it?
| 
| That's illegal anyway.  What happens if another thread deletes
| a texture while you are rendering *with* it?

It's legal.  Textures are refcounted, and the texture object will
survive until the last thread using it unbinds it.  The same solution
would work for rendering to the texture; you'd just need some
well-defined synchronization point at which the OpenGL implementation
can figure out that the texture is no longer in use.  Perhaps at the
next glXMakeCurrent, for example.

As with the other comments, I just wanted everyone to be aware of the
sort of problem that would need to be solved.

| >         What if you
| >         need to load a new texture into texture memory, but can't do
| >         so because the one you're using as a rendering target has
| >         caused texture memory to be filled or fragmented?  (And what
| >         implications does that have for proxy textures, which are
| >         supposed to give you an ironclad guarantee about whether a
| >         texture can be loaded or not?)
| 
| No - they don't guarantee that there will be enough texture memory
| to make the texture be resident in hardware texture memory. The
| call only guarantees that if texture memory were currently empty
| that your texture could be accommodated.

In ordinary use, if a proxy query indicates that a texture can be made
resident, then that *does* guarantee that you can successfully load
the texture at any time in the future.  In fact, that's one of the
main reasons proxy queries exist; they're the only reliable way to
determine if you can actually use a given texture.

This is possible because binding a new texture will throw out any
other textures already in texture memory, if necessary.  My point was
that render-to-texture might introduce a new semantic that ``locks'' a
texture into memory, thus violating the conditions assumed by the
proxy query.

If you choose to make an extension that eliminates the proxy query
guarantees, then I believe every OpenGL program must be modified to
check *every* texture load and bind for failure, and fall back to a
rescaled texture (or one with fewer mipmap levels).  It's worth
thinking about this carefully...

Allen


_______________________________________________
Mesa-dev maillist  -  [EMAIL PROTECTED]
http://lists.mesa3d.org/mailman/listinfo/mesa-dev