On Monday 14 March 2005 06:22, Daniel Phillips wrote:
> On Sunday 13 March 2005 17:28, Jan Knutar wrote:
> >
> > It's already difficult to find
> > a quiet CPU that can power that without having MC in hardware, let
> > alone without having colourspace transform in hardware. It's not only
> > CPU usage, also double bandwidth (if outputting to RGB24) over the
> > bus.
>
> Wait, I don't see the double bandwidth.  As I see it, the video will get
> onto the card via DMA into the texture memory, from there it will pass
> through the 3D pipeline, be written into the framebuffer memory, and
> read from there to the DAC.  Only one trip over the bus.  Two trips to
> video memory and two trips from video memory, but the 6.4 GBytes/sec
> video memory bandwidth should be able to handle that nicely.

But RGB24 is 24 bits per pixel, while the YUV formats are 12 or 16 bpp, so 
it's two thirds or even half the bits for the same number of pixels.

> > Output to mga400 HW GL: ~400% CPU
> > However, if the video is ~QCIF resolution, CPU usage drops enough
> > to make it watchable, and maximizing the window does not noticeably
> > increase CPU usage, thanks to the HW scaling.
> >
> > Did I mention that I dislike QCIF resolution?
> >
> > I would really hope for YUV->RGB transform in hardware.
>
> First, you need to show the cost of the software YUV->RGB conversion
> more precisely, and second, somebody needs to provide source code for
> YUV->RGB conversion so we can see how much hardware it needs.
>
> Somebody else posted that YUV->RGB costs 18% on their AthlonMP 2800+,
> which is not a high end CPU these days,

Well, if you are building a PCVR with a VIA ITX board and a 1 GHz VIA Eden 
processor then you have less CPU power than that. It has already been 
mentioned a few times that Linux-PCVRs would be a market for this.

> The case for YUV->RGB hardware has to be made a _lot_ more clearly.  Can
> somebody state this in terms of algorithms and data paths please,
> instead of just anecdotes?

http://www.fourcc.org/fccyvrgb.php

At least that's the algorithms side. Planar YUV, with Y, U and V in three 
separate planes, is obviously a pain to fetch and should be avoided.

> Also, why does the video even have to be converted YUV->RGB and back
> again?  Why not make the color model a property of the window and have
> the video controller skip the RGB->YUV conversion if the window is
> already in YUV?

That is interesting, but we'd need a YUV format that is at least packed and 
has the same number of bits per pixel as the current framebuffer. 
Also, does the modulator for TV-out support YUV in?

Lourens
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)