Timothy Miller wrote:
On 9/12/06, Attila Kinali <[EMAIL PROTECTED]> wrote:
On Sat, 9 Sep 2006 12:38:17 -0400
"Timothy Miller" <[EMAIL PROTECTED]> wrote:
> It has the advantage of being able to, for zero cost, apply alpha
> blending, ROPs, and plane masks directly to image uploads.
> Essentially, a graphics memory write can be diverted through the
> drawing engine where it becomes a fragment and can therefore have any
> drawing engine feature applied to it. (There are limits for cases
> where the fragment has an address but not coordinates.) The drawback
> is that writes have a huge latency... if you ever want to read the
> word back, you have to know what you did and flush the engine pipeline
> before you try to read it back.
Assume for now, that we do not provide any read back to host memory.
It makes our life much simpler :)
Not host memory. Graphics memory.
You drop a YUV into OGA. It's in convert-YUV-to-RGB mode, so it
diverts the word through the drawing engine.
Now, the host decides it wants to read the address in graphics memory
where it had tried to write the YUV value (knowing, of course, that it
would get an RGB back instead). The host would have to wait until the
YUV had made its way all the way through the drawing engine. The
synchronization/coherency for this particular path is not automatic.
This is true because it's true about all GPU activity. But we're used
to that. If you tell the GPU to do a bitblt, it's going to run in
parallel with the CPU, and we know that if we read part of the
graphics memory before the bitblt is finished, we're going to get
stale data.
What's odd here is that a graphics memory write is being implicitly
converted into a GPU operation. We're not used to that implicit
conversion, so we have to be aware of that in this case.
> We can move the YUV/RGB logic into the engine where we can send YUYV
> for one scanline, then change modes (just drop a configuration write
> down the pipeline) and provide an offset where we provide YYYY and
> have the GPU read memory from an appropriate offset back from where
> we're writing. You could alternate YUYV and YYYY, or you could do all
> YUYV at once and then interleave the YYYY in there.
>
> In this case, since we're storing as RGB, we'd have to sneak U and V
> into the alpha channel bits of image being uploaded. So alternating
> pixels would be stored as URGB and then VRGB. If you want to apply an
> alpha blend to the video data when it's being composited onto the
> screen, you can provide it as a constant in the texture unit.
I would not mix in YUYV; that format is hardly used anymore,
and IMHO it makes our life harder rather than simpler. You'd have to be
very careful to associate the correct pixel with the correct
converter to get the right pixel values in the end.
YUYV is, you might say, an intermediate format for even (odd?)
scanlines when doing the conversion from YUV where every pixel has Y,
but 2x2 blocks of pixels have U and V to share between them. I don't
know the meanings of 4:4:4 or 4:2:2 or 4:2:0. I can never remember
them. But one of them is what I described, and it's what someone on
the list said we'd have to deal with.
4:4:4 means there are full Cb & Cr samples for every pixel; 4:2:2
shares one chroma pair between each two horizontal pixels, alternating
between Cb & Cr (that's what packed YUYV carries).
The 2x2 blocks of shared chroma is 4:2:0.
This is in the Wikipedia:
http://en.wikipedia.org/wiki/Chroma_subsampling
I can't remember it either and refer to that page a lot. :-)
--
JRT
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)