On Sunday 13 March 2005 17:28, Jan Knutar wrote:
> On Sunday 13 March 2005 02:25, Daniel Phillips wrote:
> > It should be pretty easy to make that decision now.  The hardware
> > budget doesn't leave any room for extras that can already be
> > accomplished with the existing 3D pipeline.  MPlayer et al know how
> > to convert YUV, so the card doesn't need to.  Host CPU overhead is
> > unlikely to be an issue here.
>
> Time for my 2 eurocents..
>
> Hardware RGB->YUV is, imho, a must for the card to be usable in, for
> example, a home entertainment system.

I'm pretty sure it is in the spec, because without it there will be no TV 
out, and TV out is in the spec.

> It's already difficult to find 
> a quiet CPU that can power that without having MC in hardware, let
> alone without having colourspace transform in hardware. It's not only
> CPU usage, also double bandwidth (if outputting to RGB24) over the
> bus.

Wait, I don't see the double bandwidth.  As I see it, the video will get 
onto the card via DMA into the texture memory, from there it will pass 
through the 3D pipeline, be written into the framebuffer memory, and 
read from there to the DAC.  Only one trip over the bus.  Two trips to 
video memory and two trips from video memory, but the 6.4 GBytes/sec 
video memory bandwidth should be able to handle that nicely.

> A few figures from my computer for DVD:
>
> Accelerated colourtransform: 35% CPU
>
> Software colourtransform and no scaling.
> Scaling is software in this mode, but I did
> not disable scaling in an attempt to measure
> only the transform: 75% CPU

Sorry, I'm having trouble figuring out what you actually measured.  You 
run at 75% CPU with software scaling and software YUV->RGB?  But the 
card will offer hardware scaling (bilinear scaling of a texture, which 
gets onto the card by DMA).

> Output to mga400 HW GL: ~400% CPU
> However, if the video is ~QCIF resolution, CPU usage drops enough
> to make it watchable, and maximizing the window does not noticeably
> increase CPU usage, thanks to the HW scaling.
>
> Did I mention that I dislike QCIF resolution?
>
> I would really hope for YUV->RGB transform in hardware.

First, you need to show the cost of the software YUV->RGB conversion 
more precisely, and second, somebody needs to provide source code for 
YUV->RGB conversion so we can see how much hardware it needs.
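For reference, the per-pixel math is just the ITU-R BT.601 matrix. A minimal software sketch (not MPlayer's actual code, which uses MMX/SSE integer versions of the same three multiply-accumulates) looks like this:

```c
/* Clamp a result into the 0..255 range of an 8-bit channel. */
static unsigned char clamp8(double v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (unsigned char)(v + 0.5);
}

/* BT.601 full-range YCbCr -> RGB: per pixel, three multiplies,
   four adds, three clamps. */
static void yuv2rgb(unsigned char y, unsigned char cb, unsigned char cr,
                    unsigned char *r, unsigned char *g, unsigned char *b)
{
    *r = clamp8(y + 1.402    * (cr - 128));
    *g = clamp8(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128));
    *b = clamp8(y + 1.772    * (cb - 128));
}
```

In hardware this is a fixed 3x3 matrix multiply per pixel, which is why it is cheap to fold into the texture path of a 3D pipeline.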

Somebody else posted that YUV->RGB costs 18% on their AthlonMP 2800+, 
which is not a high end CPU these days, and that's not a painful amount 
of CPU.  Even here, the question is, what exactly did they measure?  
The 18% mentioned was for DGA, which sounds to me like no hardware 
scaling.  So how much CPU would be used for software YUV->RGB, but with 
hardware scaling, which we have?  And is that acceptable?

The case for YUV->RGB hardware has to be made a _lot_ more clearly.  Can 
somebody state this in terms of algorithms and data paths please, 
instead of just anecdotes?

Also, why does the video even have to be converted YUV->RGB and back 
again?  Why not make the color model a property of the window and have 
the video controller skip the RGB->YUV conversion if the window is 
already in YUV?
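Concretely, that would be something like a per-window pixel-format field that the output stage consults before deciding whether to run its colourspace matrix (all names here are hypothetical, just to illustrate the data path):

```c
/* Hypothetical per-window state: the format field tells the output
   stage whether the framebuffer contents are already YUV, so the TV
   encoder can skip its RGB->YUV matrix for that window. */
enum pixel_format { FMT_RGB24, FMT_YUY2 };

struct window {
    enum pixel_format format;
    int x, y, w, h;
};

/* Output stage: run a colourspace conversion only when the stored
   format differs from what the output device wants. */
static int needs_csc(const struct window *win, enum pixel_format out_fmt)
{
    return win->format != out_fmt;
}
```

A YUV video window going to TV out would then pass through untouched, while the same window going to the VGA DAC would get one conversion instead of two.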

Regards,

Daniel
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
