(Arguments with Allen are *so* informative...and also hard to win!)
On Wed, 1 Sep 1999 [EMAIL PROTECTED] wrote:
> | > You'd need to figure out which odd corner-cases could arise
> | > and how you want to handle them. What if you use a texture
> | > while you're rendering to it?
> |
> | You should (at the API level) simply describe that as "undefined
> | behaviour".
>
> Would that be desirable if the user planned to store multiple texture
> images inside a single large texture, perhaps to reduce texture
> binding costs?
So, you would allow rendering to a texture at the same time as
rendering with that texture...but add wording to permit this only
so long as the area of the map you sampled didn't overlap the area
you were rendering to? That would be very hard to specify
properly - because with some filtering modes you could
inadvertently blend in a texel from the forbidden zone,
or the polygon that is using the texture could become so small that
roundoff error would cause texels that are being re-rendered to
be displayed.
Alternatively I suppose you could say that whatever texels were
within the re-rendering viewport/scissor box were undefined during
the rendering.
It's all too nasty. Just say "it's undefined" and you're covered.
> | > What if you
> | > need to load a new texture into texture memory, but can't do
> | > so because the one you're using as a rendering target has
> | > caused texture memory to be filled or fragmented? (And what
> | > implications does that have for proxy textures, which are
> | > supposed to give you an ironclad guarantee about whether a
> | > texture can be loaded or not?)
> |
> | No - they don't guarantee that there will be enough texture memory
> | to make the texture be resident in hardware texture memory. The
> | call only guarantees that, if texture memory were currently empty,
> | your texture could be accommodated.
>
> In ordinary use, if a proxy query indicates that a texture can be made
> resident, then that *does* guarantee that you can successfully load
> the texture at any time in the future. In fact, that's one of the
> main reasons proxy queries exist; they're the only reliable way to
> determine if you can actually use a given texture.
What the Red Book says is (quoted verbatim from the second edition):
"The texture proxy tells you if there is space for your texture, but
only if all texture resources are available (in other words, if it's
the only texture in town). If other textures are using resources,
then the texture proxy query may respond affirmatively, but there may
not be enough space to make your texture resident (that is, part of a
possibly high-performance working set of textures)."
> This is possible because binding a new texture will throw out any
> other textures already in texture memory, if necessary.
Well, that's an assumption that you choose to make. It's not
guaranteed to be true (think of multitexture, for example: all N
textures have to be resident at the same time - but the proxy
query didn't know that the other four textures you are using
in the multitexture setup were all 2048x2048 maps!). All proxy
tells you is that *IF* texture memory were free then your map
would fit. If it isn't all free (whether due to the needs of
multitexture or due to some rendering that's going on in
texture memory) then proxy didn't say whether the load would
succeed or not. This makes proxy textures less useful than you
might hope, but it doesn't mean that rendering-to-texture would
break the semantics of proxy textures.
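To make the multitexture point concrete, here's a sketch in C. The
16 MB texture-memory budget is purely an assumption for illustration
(a real driver's limit is opaque to the application), but it shows how
a proxy-style query can say "yes" to each map individually while the
working set as a whole can never be resident:

```c
#include <assert.h>

/* Hypothetical texture-memory budget - an assumption for this sketch;
 * real drivers don't expose this number to the application. */
#define TEXMEM_BYTES (16L * 1024 * 1024)

/* Model of the proxy query: would this ONE texture fit if texture
 * memory were completely empty?  That is all the query answers. */
int proxy_says_yes(long w, long h, long bytes_per_texel)
{
    return w * h * bytes_per_texel <= TEXMEM_BYTES;
}
```

A 2048x2048 RGBA map is exactly 16 MB, so the query approves it - and
approves all five such maps, one at a time - even though five of them
(80 MB) could never be simultaneously resident in this budget.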
> My point was
> that render-to-texture might introduce a new semantic that ``locks'' a
> texture into memory, thus violating the conditions assumed by the
> proxy query.
Yes - it'll certainly create a lock that makes any assumed guarantee
from the proxy texture invalid...I just feel that the proxy texture's
guarantee is rather weak anyway - and this isn't doing any noticeable
harm to it.
> If you choose to make an extension that eliminates the proxy query
> guarantees, then I believe every OpenGL program must be modified to
> check *every* texture load and bind for failure, and fall back to a
> rescaled texture (or one with fewer mipmap levels). It's worth
> thinking about this carefully...
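The check-every-load-and-fall-back strategy you describe might look
something like this. The would_fit() predicate is a stand-in for a
real proxy query (glTexImage2D on GL_PROXY_TEXTURE_2D followed by
reading back GL_TEXTURE_WIDTH), with an assumed 16 MB budget just so
the sketch is self-contained:

```c
#include <assert.h>

/* Stand-in for the proxy query, using an assumed 16 MB budget. */
#define TEXMEM_BYTES (16L * 1024 * 1024)

static int would_fit(long w, long h, long bytes_per_texel)
{
    return w * h * bytes_per_texel <= TEXMEM_BYTES;
}

/* Halve the texture until the (stand-in) proxy query accepts it,
 * mirroring the "fall back to a rescaled texture" strategy.
 * Returns the usable width; the caller rescales its image to that
 * size before loading. */
int usable_size(int w, int h, int bytes_per_texel)
{
    while (w > 1 && h > 1 && !would_fit(w, h, bytes_per_texel))
    {
        w /= 2;
        h /= 2;
    }
    return w;
}
```

An 8192x8192 RGBA image (256 MB) would be halved twice, down to
2048x2048 (16 MB), before the query accepts it.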
Indeed. Defining an off-screen rendering context and then a
fast copy-framebuffer-to-texture (as SGI hardware does) is a
lot safer semantically...but for hardware that can do it, rendering
directly into texture memory (for all the 'gotchas' it turns up)
is still a tempting goal.
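For reference, the copy path is just glCopyTexImage2D /
glCopyTexSubImage2D after rendering to the off-screen context. The
stand-alone sketch below models what the copy does - moving a
sub-rectangle of framebuffer pixels into an existing texture image -
using one byte per pixel to keep it short:

```c
#include <assert.h>
#include <string.h>

/* A tiny model of glCopyTexSubImage2D: copy the w x h sub-rectangle
 * of the framebuffer whose lower-left corner is (x, y) into an
 * existing texture image.  In real GL you'd render the scene and call
 *   glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);
 * here we just move bytes so the idea is visible. */
void copy_fb_to_texture(const unsigned char *fb, int fb_w,
                        unsigned char *tex, int tex_w,
                        int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++)
        memcpy(tex + row * tex_w,          /* row in the texture     */
               fb + (y + row) * fb_w + x,  /* same row in the buffer */
               w);
}
```

The texture is always in a defined state here: it's either the old
image or the new one, never a half-rendered mixture that the rest of
the scene might accidentally sample.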
Steve Baker (817)619-2657 (Vox/Vox-Mail)
Raytheon Systems Inc. (817)619-2466 (Fax)
Work: [EMAIL PROTECTED] http://www.hti.com
Home: [EMAIL PROTECTED] http://web2.airmail.net/sjbaker1
_______________________________________________
Mesa-dev maillist - [EMAIL PROTECTED]
http://lists.mesa3d.org/mailman/listinfo/mesa-dev