On Sunday 27 November 2005 16:49, Philipp Klaus Krause wrote:
> Nicolai Haehnle schrieb:
> 
> > That's usually a bad idea because it breaks parallelism between CPU and
> > GPU.
> 
> It only breaks parallelism between the texture upload and the
> application's rendering thread. Any other thread in any process can
> still run at the same time.

Working under the assumption that switching the GPU context is Horribly 
Slow(tm), we could switch CPU threads, but we wouldn't want to switch GPU 
threads, and then there's still a period of time where the GPU remains 
inactive because events would go basically like this:

CPU Thread A calls glTexImage2D
Driver emits upload command and blocks Thread A; no more commands are sent 
to the GPU
CPU Thread B executes
...
GPU finishes the DMA, causes an IRQ and *becomes idle*
Driver wakes up Thread A
... (depending on the scheduler, CPU Thread B may still be executing here)
CPU Thread A actually begins executing again
CPU Thread A calls some other rendering command
Driver emits rendering commands to GPU
GPU starts working again

Note that there's a rather large hole in activity on the GPU side.

Yes, in theory this hole could be filled in by doing work for a different 
OpenGL context, but
a) despite all the interactivity in modern desktops, it's unlikely that many 
applications are really rendering at the same time, and
b) it would be painfully slow *anyway* unless the hardware magically 
supports fast rendering context switches (which I highly doubt it will)

The "mark pages read-only" trick would work, but I have no idea if that's 
feasible (especially when the source data for the texture image is in 
shared memory).

cu,
Nicolai


_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
