On Thu, 17 Feb 2005 01:17:47 +0800
"HiNet" <[EMAIL PROTECTED]> wrote:

>     Just as Nicolai said, everything can be done with the 3D hardware
> functions of graphics cards. And if I am correct, only ATI's graphics
> chips can support both 3D and colorkey at the same time. Hence,
> we could do some OSD using the colorkey feature if we used an
> ATI card. However, ATI does not support 16-bit colour mode in
> their closed-source driver for Linux. Besides, some uncompressed
> video streams are in YUV space.

Actually, YUV is the most widely used colour space for video
applications. One reason is that it decorrelates the three colour
planes quite efficiently; the other is that this decorrelation
lets you subsample the chroma planes. In the extreme case, 4:2:0
effectively halves the amount of raw data per frame.
That's why a converter is a must-have, but I don't think
that's too difficult: IIRC it's 9 multipliers with
constant multiplicands and 6 adders. Shouldn't need too
many gates.
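The 9-multiplier/6-adder structure is just a full 3x3 colour-matrix
multiply: three constant multiplies and two adds per output channel.
A minimal fixed-point sketch, assuming BT.601 coefficients and an 8.8
fixed-point format (the real hardware could pick different coefficients
or word widths):

```c
#include <stdint.h>

/* Full 3x3 colour-matrix conversion: 9 constant multipliers and
 * 6 adders, the structure described above.  Coefficients are BT.601,
 * scaled by 256 for 8.8 fixed point -- an assumption; the hardware
 * could just as well use BT.709 or another word width. */
static const int32_t M[3][3] = {
    { 256,    0,  359 },  /* R = Y                + 1.402*(V-128) */
    { 256,  -88, -183 },  /* G = Y - 0.344*(U-128) - 0.714*(V-128) */
    { 256,  454,    0 },  /* B = Y + 1.772*(U-128)                 */
};

static uint8_t clamp8(int32_t v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v, uint8_t rgb[3])
{
    int32_t in[3] = { y, u - 128, v - 128 };  /* centre the chroma */
    for (int c = 0; c < 3; c++) {
        /* three multiplies and two adds per output channel */
        int32_t acc = M[c][0]*in[0] + M[c][1]*in[1] + M[c][2]*in[2];
        rgb[c] = clamp8(acc >> 8);            /* drop the 8.8 fraction */
    }
}
```

In hardware the multipliers run in parallel, so this is one pixel per
clock once pipelined.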


> (Just imagine 16 video streams in
> a DVR system. 480 frames/sec is such a high frame rate; is the
> GPU's colour space conversion enough?)

Plenty. Especially as this can easily be pipelined.
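For scale, a back-of-the-envelope sketch of the raw data rate in the
quoted 16-stream case, assuming 640x480 4:2:0 frames at 30 frames/s per
stream (all figures are illustrative assumptions):

```c
#include <stdint.h>

/* Raw data rate of N uncompressed 4:2:0 streams -- a rough check of
 * the 16-stream DVR case.  Geometry and frame rate are assumed:
 * 640x480 NTSC-sized frames at 30 frames/s per stream. */
uint64_t raw_420_bytes_per_sec(uint64_t w, uint64_t h,
                               uint64_t fps, uint64_t streams)
{
    uint64_t bytes_per_frame = w * h * 3 / 2;  /* 12 bits/pixel in 4:2:0 */
    return bytes_per_frame * fps * streams;
}
```

For 16 streams that's 480 frames/s and roughly 221 MB/s of raw data,
or about 147 Mpixels/s. A pipelined converter doing one pixel per clock
keeps up at quite modest clock rates, so the bus, not the converter, is
the bottleneck.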

> 
>      On a DVR system, software compression makes the system load very
> heavy. In such a condition, the remaining CPU power is very limited
> but still needs to handle real-time monitoring. Of course, video
> decoders (video ADC chips) can write data directly to the video RAM
> of a graphics card through the PCI bus. But the image size is at most
> 640x480 in NTSC (a little bigger in PAL), and such video decoders
> can only scale down.

This is a graphics card; we don't support anything for video
compression. Please keep in mind that in today's DCT-based systems,
video decompression is limited by the memory bandwidth to RAM and
to the graphics card. It's CPU-limited in wavelet-based codecs.
Video compression is mostly limited by the time needed for
motion compensation.
I.e. the graphics card can only help save bandwidth by
providing short paths to video memory during decompression.
It's not possible to help with compression.
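To see where the motion-compensation time goes, here is the inner loop
of a block-matching motion search, sketched as a plain sum of absolute
differences over a 16x16 macroblock (a generic illustration, not any
particular codec's code):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences over one 16x16 macroblock -- the inner
 * loop of motion search.  A full-search encoder evaluates this for
 * every candidate offset in the search window (often hundreds of
 * candidates per block), which is why compression time is dominated
 * by motion search rather than by bus bandwidth. */
uint32_t sad_16x16(const uint8_t *cur, const uint8_t *ref, int stride)
{
    uint32_t sad = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            sad += (uint32_t)abs(cur[y * stride + x] -
                                 ref[y * stride + x]);
    return sad;
}
```

256 subtract/abs/accumulate steps per candidate, times hundreds of
candidates per block, times thousands of blocks per frame: that is
where the encoder's CPU time goes.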



>      On the other hand, a graphics card's Video Overlay function can
> generally be used on only one video stream. That's the design of the
> hardware.

Wrong. It depends on how many scaler units you have.
But I doubt that we'll get more than one; this project will
be pretty much transistor/gate limited.

> Hence, for more than one video stream, such a feature is not
> sufficient. Besides the scaling problems for several video streams,
> compositing several video images on the screen would be painful if I
> had to compose them off-screen before showing the result via Video
> Overlay.

There's no other way if only one scaler is available.


> 
>       And the X window system uses shared memory to store the original
> unscaled raw image. This is another headache. As the frame rate gets
> higher, or when there is more than one video stream using nvidia's
> Video Blitter function, the CPU usage of X gets higher. Why? Shared
> memory is not physically contiguous, and this is definitely one of
> the reasons.

Can't be helped. Higher-order allocations are a pain, even if you
just need a few pages.
And they will always fail if you need the MBs
for a video DMA buffer.

The better approach is to support scatter/gather in
the DMA unit. This shouldn't be too hard to implement and is
fairly easy on the transistor count.
(IMHO it's needed anyway to have a useful DMA unit.)
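A scatter/gather DMA engine walks a chain of descriptors, each pointing
at one physically contiguous chunk, so the buffer as a whole never has
to be contiguous. A software model of the idea, with a made-up
descriptor layout (not the Open Graphics register format):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical scatter/gather descriptor -- an illustration of the
 * idea, not a real register layout.  Each descriptor names one
 * physically contiguous chunk; the DMA engine follows `next`, so the
 * buffer itself never needs a contiguous allocation. */
struct sg_desc {
    uint64_t phys_addr;     /* bus address of this chunk */
    uint32_t len;           /* chunk length in bytes */
    struct sg_desc *next;   /* NULL terminates the chain */
};

/* Software model of the engine's walk: sum the chunk lengths to get
 * the total transfer size (a real engine would move data instead). */
uint32_t sg_total_len(const struct sg_desc *d)
{
    uint32_t total = 0;
    for (; d != NULL; d = d->next)
        total += d->len;
    return total;
}
```

The driver then only needs single-page allocations plus one small chain,
which is exactly the allocation pattern the kernel can always satisfy.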

> I've dreamed that some day there would be no need for X while using
> Video Overlay/Blitter.
> (ps. nvidia's Video Blitter can blit more than 16 video streams at
> the same time.

As already mentioned: that's already possible.
Note that as long as you have a driver, you don't need X to
be running to access the card.

> But this blitter only does traditional video RAM ==> RGB blits.
> Hence, there is no colorkey.
> Most of all, Nvidia can't set us free.)
> 
>      So as you can see, there's not much choice. The only thing I
> could do on a DVR system would be to patch some (huge amount of)
> kernel code to give the monitoring process (including X) CPU time
> with exact timing, if I want to show several video streams
> flawlessly in real time.

No, that's not needed. Just get the scheduler to do its job right.
But you'd be I/O-limited anyway. Getting two uncompressed
video streams out over PCI is .. well .. quite at the limit.
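A quick sanity check of that claim, under assumed figures (640x480
4:2:2 video at 2 bytes/pixel and 30 frames/s, classic 32-bit/33 MHz
PCI with a 133 MB/s theoretical peak):

```c
#include <stdint.h>

/* Raw data rate of one uncompressed video stream.  All the figures
 * fed to it here are assumptions for illustration: 640x480 frames,
 * 4:2:2 at 2 bytes/pixel, 30 frames/s. */
uint64_t stream_bytes_per_sec(uint64_t w, uint64_t h,
                              uint64_t bytes_per_pixel, uint64_t fps)
{
    return w * h * bytes_per_pixel * fps;
}
```

One such stream is about 18.4 MB/s, so two are roughly 37 MB/s. Against
PCI's 133 MB/s theoretical peak that sounds comfortable, but sustained
throughput on a shared bus with other traffic is typically well under
half of peak, so two streams already claim a large share of it.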


                                Attila Kinali

-- 
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
