This would be done by having two app versions: one that uses 4 CPUs,
and one that uses 1 GPU and 1 CPU (or a fraction of a CPU).
Your app_plan() would be similar to the example code,
except that you'd need to tweak the "mt" case a little.
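
Roughly, the two cases might look like this (a sketch only -- the
struct and field names follow the sample app_plan() code on the wiki
and may differ in your server version; "--nthreads" is just a
placeholder for however your app is told its thread count):

bool app_plan(SCHEDULER_REQUEST& sreq, char* plan_class, HOST_USAGE& hu) {
    if (!strcmp(plan_class, "mt")) {
        // multicore version: use 4 CPUs rather than all of them
        hu.avg_ncpus = 4;
        hu.max_ncpus = 4;
        hu.flops = 4*sreq.host.p_fpops;
        sprintf(hu.cmdline, "--nthreads 4");
        return true;
    }
    if (!strcmp(plan_class, "cuda")) {
        // GPU version: 1 GPU plus a fraction of a CPU
        COPROC_CUDA* cp = (COPROC_CUDA*)sreq.coprocs.lookup("CUDA");
        if (!cp) return false;            // host has no usable NVIDIA GPU
        hu.ncudas = 1;
        hu.avg_ncpus = 0.1;               // adjust to what the app really uses
        hu.max_ncpus = 1;
        hu.flops = 10*sreq.host.p_fpops;  // rough speed estimate; tune this
        return true;
    }
    return false;                         // unknown plan class
}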

Mark Silberstein wrote:

> Assume one has 16 cores and 4 GPUs in one server. I want a WU to occupy up
> to four cores if a GPU is not available, or 1 core + 1 GPU if the GPU is
> free. This would allow 5 WUs to be handled simultaneously.

7 or 8 by my count, depending on how much CPU the GPU app uses:
4 GPU jobs, plus either 3 four-core jobs (if each GPU job ties up a
full core) or 4 (if it needs only a fraction of one).

> 1. Is there any detailed doc/example on how to use it besides AppPlan in
> wiki?

See also http://boinc.berkeley.edu/trac/wiki/AppCoprocessor

> 2. Assuming 1 GPU is busy, will BOINC recognize it and invoke another
> GPU-enabled WU on another GPU? What does it pass as a command line param to
> the application to avoid GPU contention?

Each GPU app gets a cmdline arg "--device X" (X=0,1, ...)
telling it which GPU to use.
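
In case it's useful, the app side of this is just a bit of argument
parsing plus a cudaSetDevice() call (standard CUDA runtime calls,
nothing BOINC-specific beyond reading the argument); a rough sketch:

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cuda_runtime.h>

int main(int argc, char** argv) {
    int device = 0;                       // default to GPU 0
    for (int i = 1; i < argc; i++) {
        // the client passes "--device X" to say which GPU this job may use
        if (!strcmp(argv[i], "--device") && i+1 < argc) {
            device = atoi(argv[++i]);
        }
    }
    if (cudaSetDevice(device) != cudaSuccess) {
        fprintf(stderr, "cudaSetDevice(%d) failed\n", device);
        return 1;
    }
    // ... do the actual work on that GPU ...
    return 0;
}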

> 3. Assuming all GPUs are already busy, will BOINC recognize that sending WU
> to that host is still reasonable because there are still free CPUs?

In this case the client would ask the server for CPU work,
and it would get jobs that use the CPU app.

> 
> Thanks a lot,
> 
> Mark
