Lance is making progress on the YUV->RGB conversion logic.  My general
question to the list is with regard to back-conversion.

OGA is designed to deal exclusively with 32-bit ARGB pixels.  But we
would like to support YUV formats as well.  It seems to me that the
best place to put the conversion is in the host interface when the
data comes into the chip.  As you write (or DMA) YUV data into the
graphics memory, it gets converted automatically to RGB before being
stored.  Thus, when you want to scale it into a window, the GPU is
working only on RGB data, as is the video output.
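For concreteness, here's roughly what that per-pixel write-in conversion might look like — just a sketch assuming full-range BT.601 coefficients in 8.8 fixed point (the actual matrix, range, and precision OGA would use aren't specified in this thread):

```c
#include <stdint.h>

/* Clamp an intermediate result into the 0..255 channel range. */
static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Hypothetical per-pixel step the host interface would apply as YUV
 * data is written (or DMA'd) into graphics memory: emit a 32-bit
 * ARGB pixel with alpha forced to opaque.  Coefficients are
 * full-range BT.601 scaled by 256 -- an assumption, not OGA's spec. */
uint32_t yuv_to_argb(uint8_t y, uint8_t u, uint8_t v)
{
    int d = u - 128, e = v - 128;
    int r = clamp8((256 * y + 359 * e + 128) >> 8);
    int g = clamp8((256 * y -  88 * d - 183 * e + 128) >> 8);
    int b = clamp8((256 * y + 454 * d + 128) >> 8);
    return 0xFF000000u | (uint32_t)(r << 16) | (uint32_t)(g << 8) | (uint32_t)b;
}
```

With the conversion at the host interface like this, everything downstream (scaling, blending, scanout) sees only ARGB and never needs a YUV path.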

My question is:  Is there any need to convert back from RGB to YUV?
We're not doing video capture.  It seems silly to read back any YUV
data that you just wrote.  And if you draw to a surface that was
originally a YUV upload, what are the chances you'll ever care to
download it back to host memory?

The way I conceive it, YUV->RGB conversion is ONLY for video
playback, not for textures or drawing surfaces.  You can still use it
for textures, but then the memory management software needs to
understand that a conversion was done.

One thing to note is that we're finding we can't convert YUV->RGB
and then RGB->YUV without data loss.  Strictly speaking, we can, but
the hardware required is unjustifiable, and it also forces us to
'alter' the accuracy of the RGB data just so that we can convert it
back.  (That is, either we convert accurately but can't convert back
losslessly, or we convert inaccurately so we can read it back.  But
we don't want to spend the transistors on that anyhow.)
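The loss is easy to demonstrate in a toy model: run 8-bit YUV triples through a fixed-point YUV->RGB matrix and its inverse (full-range BT.601 here, purely as an assumption) and count how many fail to round-trip exactly:

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Forward and inverse full-range BT.601 in 8.8 fixed point -- an
 * assumption for illustration; OGA's actual matrix isn't settled here. */
static void yuv2rgb(uint8_t y, uint8_t u, uint8_t v,
                    uint8_t *r, uint8_t *g, uint8_t *b)
{
    int d = u - 128, e = v - 128;
    *r = clamp8((256 * y + 359 * e + 128) >> 8);
    *g = clamp8((256 * y -  88 * d - 183 * e + 128) >> 8);
    *b = clamp8((256 * y + 454 * d + 128) >> 8);
}

static void rgb2yuv(uint8_t r, uint8_t g, uint8_t b,
                    uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = clamp8((77 * r + 150 * g + 29 * b + 128) >> 8);
    *u = clamp8(((-43 * r - 85 * g + 128 * b + 128) >> 8) + 128);
    *v = clamp8(((128 * r - 107 * g - 21 * b + 128) >> 8) + 128);
}

/* Count triples on a coarse grid of the YUV cube that do not survive
 * YUV->RGB->YUV exactly, due to rounding and clamping. */
int count_roundtrip_errors(void)
{
    int errors = 0;
    for (int y = 0; y < 256; y += 17)
        for (int u = 0; u < 256; u += 17)
            for (int v = 0; v < 256; v += 17) {
                uint8_t r, g, b, y2, u2, v2;
                yuv2rgb((uint8_t)y, (uint8_t)u, (uint8_t)v, &r, &g, &b);
                rgb2yuv(r, g, b, &y2, &u2, &v2);
                if (y2 != y || u2 != u || v2 != v)
                    errors++;
            }
    return errors;
}
```

The >>8 rounding and the clamping of out-of-gamut YUV triples both destroy information; recovering it would mean carrying extra precision in the stored RGB, which is exactly the 'altered accuracy' trade-off described above.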

Comments?
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
