Re: [boinc_dev] #GPUs #cores

2011-04-18 Thread David Anderson
Bernd:
Please verify this experimentally; I'm not convinced.
-- David

On 15-Apr-2011 1:12 AM, Bernd Machenschalk wrote:
 On 15.04.11 08:00, David Anderson wrote:
 The client tries to use all GPUs, even if this overcommits the CPUs.
 The assumption is that this maximizes throughput.
 Is there evidence that it doesn't?

 There are two types of GPU applications: those that run almost entirely on the 
 GPU and need only a small fraction of a CPU core, and those that use the
 GPU mainly as a coprocessor for certain parts of the computation (e.g. FFT) 
 and require a full CPU core. I'd say the assumption holds for the
 former, but not for the latter. So, similar to the distinction made between 
 these two types of apps when assigning process priority, I'd make it in
 resource counting, too.
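
 The two cases could be distinguished project-side through avg_ncpus in the
 app version description. A hypothetical app_info.xml fragment for the
 coprocessor-style case (the app name and version number here are made up,
 not from this thread):

 ```xml
 <app_version>
   <app_name>example_gpu_app</app_name>
   <version_num>100</version_num>
   <!-- coprocessor-style app: needs a full CPU core per GPU task -->
   <avg_ncpus>1.0</avg_ncpus>
   <coproc>
     <type>CUDA</type>
     <count>1</count>
   </coproc>
 </app_version>
 ```

 A GPU-bound app of the first type would instead declare something like
 avg_ncpus 0.05, so the client could count it against the CPU budget
 accordingly.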

 Best,
 Bernd

 ___
 boinc_dev mailing list
 boinc_dev@ssl.berkeley.edu
 http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
 To unsubscribe, visit the above URL and
 (near bottom of page) enter your email address.


Re: [boinc_dev] #GPUs #cores

2011-04-15 Thread David Anderson
The client tries to use all GPUs, even if this overcommits the CPUs.
The assumption is that this maximizes throughput.
Is there evidence that it doesn't?
-- David

On 12-Apr-2011 4:51 AM, Bernd Machenschalk wrote:
 Hi!

 We're experimenting with running BOINC on a cluster of GPU nodes. Our
 application takes a full core per NVidia GPU (avg_ncpus = 1.0). The BOINC
 Client is told to use only one CPU core (for now), i.e. <ncpus>1</ncpus> in
 cc_config.xml.

 However, the Client starts as many tasks as there are GPUs on that node. When
 scheduling GPU tasks, does the Client ignore the number of available cores,
 expecting that there will always be more cores than GPUs? If so, I'd consider
 this a bug.

 Best, Bernd



[boinc_dev] #GPUs #cores

2011-04-12 Thread Bernd Machenschalk
Hi!

We're experimenting with running BOINC on a cluster of GPU nodes. Our 
application takes a full core per NVidia GPU (avg_ncpus = 1.0). The BOINC 
Client 
is told to use only one CPU core (for now), i.e. <ncpus>1</ncpus> in 
cc_config.xml.
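
For reference, a minimal cc_config.xml limiting the client to one core might 
look like this (standard cc_config layout; <ncpus> is the option under 
discussion):

```xml
<cc_config>
  <options>
    <!-- tell the client to use at most one CPU core -->
    <ncpus>1</ncpus>
  </options>
</cc_config>
```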

However, the Client starts as many tasks as there are GPUs on that node. When 
scheduling GPU tasks, does the Client ignore the number of available 
cores, expecting that there will always be more cores than GPUs? If so, I'd 
consider this a bug.

Best,
Bernd
