On Sunday 22 April 2007 14:12, Rogelio Serrano wrote:
>
> The main point is I want to avoid copying window data from system
> memory to the graphics card 30 times a second, so the graphics card
> can render the data. I want the graphics card to read the data in
> system memory one by one and immediately start emitting a video
> signal. I don't want the graphics processor to wait until the copy is
> done, then start reading the data in its local memory, and only then
> start emitting video signals. Imagine decoding video to a buffer in
> system memory and then DMAing that to the graphics card 30 times a
> second.

You seem to be going backwards: you're rendering the desktop with the 
CPU and then copying the result to the video card. Why would you? Why 
not have the GPU render the desktop directly into graphics memory? Then 
all you need to do is send a couple of rendering commands whenever 
something changes, leaving the CPU free to do other things. There will 
be only one representation of the screen's contents, on the video card 
where it belongs.
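
To put rough numbers on the difference, here is a back-of-the-envelope 
sketch (Python, purely illustrative; the resolution, update rate, and 
command sizes are made-up figures, not measurements from any real 
hardware):

```python
# Illustrative comparison: full-frame DMA every refresh vs. sending
# small rendering commands to a GPU that draws into its own memory.

WIDTH, HEIGHT, BPP = 1024, 768, 4   # hypothetical 1024x768 @ 32 bpp
FPS = 30                            # refresh rate from the quoted post

def dma_bytes_per_second():
    """Bytes/s across the bus if the CPU renders each frame in system
    memory and DMAs the whole framebuffer to the card every refresh."""
    return WIDTH * HEIGHT * BPP * FPS

def command_bytes_per_second(changes_per_second=10, bytes_per_command=64):
    """Bytes/s across the bus if only small rendering commands are sent
    when something changes. Both parameters are invented for illustration."""
    return changes_per_second * bytes_per_command

if __name__ == "__main__":
    print(f"full-frame DMA:  {dma_bytes_per_second():>10} B/s")
    print(f"command stream:  {command_bytes_per_second():>10} B/s")
```

Under these assumptions the full-frame copy costs on the order of 
90 MB/s of bus bandwidth, while a command-driven GPU moves only a few 
hundred bytes per second across the bus when the screen is mostly 
static.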

Lourens

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)