On Mon, Sep 8, 2008 at 4:04 PM, Roland Scheidegger
<[EMAIL PROTECTED]> wrote:
> On 07.09.2008 21:35, Younes Manton wrote:
>>>> Samplers could be allowed to hold texture format info, thereby
>>>> allowing on the fly format switching. On Nvidia the texture format is
>>>> a property of the sampler, so it's possible to read a texture as one
>>>> format in one instance and another format in another instance.
>>>> Likewise, a render target's format is emitted when it is set as a
>>>> target, so a format attached to pipe_framebuffer_state, or a new state
>>>> object analogous to a sampler (e.g. an emitter), would be very handy.
>>>> The format at creation time could be kept for hardware that can't do
>>>> this; then it's just a matter of checking/requiring that the format at
>>>> use time matches the format at creation time, and signaling an error
>>>> otherwise.
>>>> This is to get around HW limitations on render targets, so we render
>>>> to a texture in one format, and read from it in another format during
>>>> the next pass.
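(A rough sketch of what that could look like; the structs below are
simplified stand-ins rather than the real gallium ones, and the fields
marked NEW are the purely hypothetical part:)

    /* Simplified stand-ins, not the actual gallium structs. */
    enum fmt {
       FMT_R8_UNORM,
       FMT_R16_SNORM,
       FMT_R8G8B8A8_UNORM,
    };

    struct texture {
       enum fmt creation_format;   /* fixed at creation, as today */
       unsigned width, height;
    };

    struct sampler_state {
       unsigned wrap_s, wrap_t, min_filter, mag_filter;
       enum fmt view_format;       /* NEW: format to read the bound texture as */
    };

    struct render_target_state {
       struct texture *tex;
       enum fmt write_format;      /* NEW: format to write the target as */
    };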
>>> Note that presently a) the gallium texture format/layout/etc. can't be
>>> changed once created, and b) the format is a property of the texture,
>>> not of the sampling/rendering operation. Changing a) seems impossible,
>>> especially considering we are moving to immutable state objects, which
>>> are much simpler and more effective to handle than mutable state
>>> objects. If I understood correctly, you're asking to change b) in order
>>> to get around hw limitations.
>>>
>>> My first impression is that HW limitations should not be exposed in
>>> this way to the state tracker -- it is ok for a driver which lacks
>>> complete hw support for an operation to support it by breaking it down
>>> into simpler supported operations, but that should be an implementation
>>> detail hidden from the state tracker. That is, the nvidia driver should
>>> have the ability to internally override texture formats when
>>> rendering/sampling. If the hardware limitation and the way to overcome
>>> it are common to many devices, then we usually make that code a library
>>> which is used *inside* the pipe driver, keeping the
>>> state-tracker <-> pipe driver interface lean.
>>>
>>> But I am imagining the 3d state trackers here; perhaps video state
>>> trackers need to be aware of this a step further to be useful. Could
>>> you give a concrete example of where and how this would be useful?
>>
>> The problem we have is that render target formats are very limited.
>> The input to the IDCT stage of the decoding pipeline is 12-bit signed
>> elements, and the output is 9-bit signed elements, which then become
>> the input to the MOCOMP stage. We have R16Snorm textures, so we can
>> consume the 12-bit and 9-bit signed inputs well, but we can't render
>> to R16Snorm, or even to R16Unorm. The closest thing we have is
>> R8Unorm, which would be acceptable since we can lose the LSB and bias
>> the result to the unsigned range, but not enough HW supports that.
>> However, if you think of R8G8B8A8 as being 4 packed elements, we can
>> render to that instead, and every card supports that just fine.
>> However, in order to consume that in the MOCOMP pass we need to
>> reinterpret it as an R8Unorm texture. So, as you can see, we need a
>> surface to behave as an R8G8B8A8 (W/4)xH render target for pass A,
>> then as an R8 WxH texture for pass B. We could also treat R8G8B8A8 as
>> two packed 16-bit elements and output two full 9-bit elements. Either
>> way, we need some sort of dynamic pixel format typing.
>>
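(To make the reinterpretation concrete, here is a sketch of the
addressing under the two views, assuming a tightly packed linear layout
and ignoring real pitch/tiling constraints:)

    #include <stdint.h>

    /* Pass A writes the buffer as a (W/4) x H R8G8B8A8 render target,
     * pass B reads the same bytes back as a W x H R8 texture. */

    /* texel (x, y) under the W x H R8 view (1 byte per texel) */
    static uint8_t *r8_texel(uint8_t *buf, unsigned pitch, unsigned x, unsigned y)
    {
       return buf + y * pitch + x;
    }

    /* texel (x, y) under the (W/4) x H R8G8B8A8 view (4 bytes per texel) */
    static uint8_t *rgba8_texel(uint8_t *buf, unsigned pitch, unsigned x, unsigned y)
    {
       return buf + y * pitch + x * 4;
    }

    /* With pitch = W bytes under either view,
     * rgba8_texel(buf, W, x, y) + c == r8_texel(buf, W, 4 * x + c, y)
     * for c = 0..3: exactly the IDCT-output / MOCOMP-input
     * reinterpretation described above. */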
>> It would be very difficult to do this transparently behind the scenes,
>> since the fragment shader code needs to be aware of the differences.
>> The Nvidia hardware seems to support it perfectly, since the pixel
>> format of a texture or render target is emitted when it is bound,
>> along with min/mag filter, wrap mode, etc.; a buffer is just a buffer
>> of generic memory otherwise. I don't know much about other hardware,
>> but I wouldn't be surprised if Nvidia weren't the only one that worked
>> like this. If that is the case, one could argue that static pixel
>> formats are an artificial restriction, and that it would make more
>> sense for a low-level API to better model how the hardware works. But
>> I think keeping the format as part of the texture, as it is now, is a
>> good way to satisfy both sides of the equation: for hardware that
>> doesn't support this sort of thing, the driver can check that the
>> format specified in the sampler or render target state matches the
>> format of the texture at creation time.
>>
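(Reusing the stand-in types from the sketch further up, the check for
hardware that can't retype buffers could be as simple as this; the
function and the hw_can_retype flag are both made up:)

    #include <stdbool.h>

    static bool validate_view_format(const struct texture *tex,
                                     enum fmt view_format,
                                     bool hw_can_retype)
    {
       if (hw_can_retype)
          return true;                              /* nvidia-style hw: any view is fine */
       return tex->creation_format == view_format;  /* others: must match creation time */
    }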
>> If people are not convinced about this yet, it would probably be
>> better to experiment with it privately and see how it works out, since
>> for all I know there could be some hardware quirk that makes it
>> impossible or not worth using; I just thought I'd mention it in case
>> someone had already considered it.
>
> Dynamic format typing indeed sounds useful in some scenarios, though I'm
> not sure how this could be exposed in a truly generic way. You also need
> to consider that while you can indeed just change the format when doing,
> for instance, texture sampling, it might not work in all cases, since
> the memory layout of the buffer might need to change (as an example,
> imagine hardware which needs a texture pitch aligned to 4 pixels: if you
> had a 20-pixel-wide A8 texture, reinterpreting it as R8G8B8A8 would now
> need padding).
>
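(Putting numbers on that example, and reading "pitch aligned to 4
pixels" as "the pitch must be a multiple of 4 pixels":

    A8 view:    20 px wide * 1 B/px -> 20 px is already a multiple of 4, pitch = 20 bytes
    RGBA8 view:  5 px wide * 4 B/px ->  5 px rounds up to 8 px,          pitch = 32 bytes

so the two views want different row strides, and the buffer would have
to be repacked rather than just relabelled.)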

Well, you are right, but we could introduce a request for surface
compatibility (or even a surface conversion request) that the driver
could implement however it wants (either as a noop or as a format
conversion copy, possibly with very little overhead). This is going to
be useful for all GPGPU stuff...
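(A very rough sketch of what such a request could look like at the
interface level; every name below is made up, and it reuses the format
enum from the earlier sketch:)

    /* The driver either returns the surface itself (a noop on hardware
     * that can simply retype buffers) or a converted/repacked copy. */
    struct surface;   /* opaque, owned by the driver */

    struct context_funcs {
       struct surface *(*get_compatible_surface)(struct context_funcs *ctx,
                                                 struct surface *surf,
                                                 enum fmt view_format);
    };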

Stephane
