On 15.04.11 08:00, David Anderson wrote:
> The client tries to use all GPUs, even if this overcommits the CPUs.
> The assumption is that this maximizes throughput.
> Is there evidence that it doesn't?

There are two types of GPU applications: those that run almost entirely on the 
GPU and need only a small fraction of a CPU core, and those that use the 
GPU mainly as a coprocessor for certain parts of the computation (e.g. FFT) and 
require a full CPU core. I'd say the assumption holds for the 
former, but not for the latter. So, similar to the distinction made between 
these two types of apps when assigning process priority, I'd make it in 
resource counting, too.
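As a rough illustration of what I mean, here is a minimal sketch (hypothetical numbers and function names, not the client's actual scheduling code): count each GPU task's CPU demand against the available cores, so a "pure GPU" app can fill every GPU while a coprocessor-style app stops before overcommitting the CPUs.

```python
# Hedged sketch: how many GPU tasks can run without overcommitting the CPUs?
# All names and numbers are illustrative; this is not BOINC's scheduler code.

def gpu_tasks_to_run(n_gpus, n_cpus, cpu_frac_per_gpu_task, cpu_tasks_running=0):
    """Start one task per GPU only while the summed CPU demand still fits."""
    runnable = 0
    cpu_used = float(cpu_tasks_running)
    for _ in range(n_gpus):
        if cpu_used + cpu_frac_per_gpu_task <= n_cpus:
            runnable += 1
            cpu_used += cpu_frac_per_gpu_task
    return runnable

# A "pure GPU" app needing ~0.05 of a core per task: all 4 GPUs stay usable
# on a 4-core box even with 3 CPU tasks running (3 + 4*0.05 = 3.2 cores).
print(gpu_tasks_to_run(4, 4, 0.05, cpu_tasks_running=3))  # 4

# A coprocessor-style app needing a full core per task: only 1 GPU task fits
# alongside 3 CPU tasks (3 + 1 = 4 cores); the other 3 GPUs would overcommit.
print(gpu_tasks_to_run(4, 4, 1.0, cpu_tasks_running=3))  # 1
```

The point is only that the "use all GPUs" policy and the "don't overcommit CPUs" policy coincide for the first type of app and conflict for the second.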

Best,
Bernd

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.