On 11/10/2010 12:31 PM, Christian König wrote:
On Wednesday, 10.11.2010, 17:24 +0100, Roland Scheidegger wrote:
On 10.11.2010 15:56, Christian König wrote:
On Monday, 08.11.2010, 23:08 +0000, Andy Furniss wrote:
Looking (for the first time) at ISO 13818-2, I think the chroma handling
would be part of display rather than decode, though the spec does specify
how chroma is laid out for fields in 6.1.1.8.

An article that describes the issues (it actually starts by describing the
opposite problem, progressive content treated as interlaced) is here.

http://www.hometheaterhifi.com/volume_8_2/dvd-benchmark-special-report-chroma-bug-4-2001.html

Thanks for the link. I understand the problem now, but can't figure out
how to solve it without support for interlaced textures in the gallium
driver. The hardware supports it, but neither gallium nor r600g has an
interface for it, and I have no intention of defining one.
I'm curious here, what the heck exactly is an interlaced texture? What
does the hw do with this?
It differs in the interpolation of samples. I will try to explain what I
need for video decoding with a little example; let's say we have a 4x4
texture:
A B C D
E F G H
I J K L
M N O P
And let's also say that the texture coordinates are in the range 0..3
(not normalized), so if you fetch the sample at coordinate (0,0) you get
"A", a fetch at (1, 0) gets "B", a fetch at (0,1) gets "E", and so on.

But if you fetch a sample from coordinate (0.5, 0) you get a linear
interpolation of "A" and "B" (depending on the sampler mode used).

The tricky part comes when you fetch a sample from coordinate (0, 0.5),
with a normal texture you would get a linear interpolation of "A" and
"E", a fetch from (0, 1.5) would result in an interpolation from "E" and
"I".

Now with an interlaced texture if we fetch from (0, 0.5) we get an
interpolation of "A" and "I", and when we fetch from (0, 1.5) we get an
interpolation of "E" and "M", and so on.

It gets even trickier since the decision of which mode to use is made on a
block-by-block basis, so switching from one mode to the other must be
fast; we can't just copy the lines around or anything like that.

I think it will probably end up using more than one texture fetch in
the fragment shader and calculating the linear interpolation on our own.
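
Something along these lines, maybe (only a rough sketch: the sampler would
have to use nearest filtering, the per-block flag has to come from
somewhere, and all names here are made up):

// Possible shape of the shader-side workaround: do the vertical filtering
// manually with two fetches and a mix(), choosing the line step per block.
uniform sampler2D tex;        // chroma plane, sampled with NEAREST filtering
uniform float     tex_height; // plane height in texels

vec4 fetch_manual(vec2 uv, bool interlaced)
{
    float y         = uv.y * tex_height - 0.5;  // line position in texel space
    float line      = floor(y);
    float frac      = y - line;
    float line_step = interlaced ? 2.0 : 1.0;

    // edge clamping is ignored here for brevity
    vec4 top = texture2D(tex, vec2(uv.x, (line + 0.5) / tex_height));
    vec4 bot = texture2D(tex, vec2(uv.x, (line + line_step + 0.5) / tex_height));
    return mix(top, bot, frac);
}

Horizontal filtering would need the same treatment with two more fetches if
bilinear chroma scaling is wanted.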

If you have another good idea, just let me know.

Here's an idea. I may be totally off base (this is off the top of my head) but suppose the interlaced texture is:

A B C D  (even line)
e f g h  (odd line)
I J K L  (even line)
m n o p  (odd line)

Couldn't you lie to the hardware and tell it that the texture is really 8x2 instead of 4x4? Then the even lines would be in the left half of the texture and the odd lines would be in the right half:

A B C D e f g h
I J K L m n o p

Notice that the texture data layout in memory is the same in either case.

You'd have to tweak your texcoords to address the left (even) or right (odd) half of the image, but I think bilinear sampling would do what you need.
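
In shader terms the tweak could be as simple as this (a sketch only:
"tex_wide" stands for the same storage bound as a texture of twice the
width and half the height, the name is made up, and the half-line offset
between the two fields is ignored):

// The even lines end up in the left half of the wide texture and the odd
// lines in the right half, so hardware bilinear filtering only ever blends
// lines of the same field.
uniform sampler2D tex_wide;   // same memory, declared twice as wide, half as tall

vec4 sample_field(vec2 uv, bool odd_field)
{
    // squeeze u into the proper half; v already only addresses the lines
    // of that field
    vec2 remapped = vec2(uv.x * 0.5 + (odd_field ? 0.5 : 0.0), uv.y);
    return texture2D(tex_wide, remapped);
}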

I guess one problem would be bilinear sampling down the middle where the left and right halves meet.

-Brian
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
