Hi everyone,

I have a few questions related to c...@boinc:

1) Looking at client/app_start.cpp, one can tell that "avg_ncpus" affects 
the priority of a CPU (host) task. I suppose that setting this value to "1" 
via a modified "cuda" plan_class will (a) enable idle priority and (b) 
occupy a whole CPU core, so that only a single task gets assigned to that 
core, correct? Example: in such a setup a quad-core CPU would run at most 
three CPU tasks plus one GPU task (with the fourth core running the GPU 
host code), right?

2) Are there any plans to move the plan_class configuration out of 
sched/sched_plan.cpp, for example into the database? Storing the plan_class 
details (e.g. minimum driver version, minimum global memory) as well as the 
host_usage details (e.g. number of CPUs) in database tables referenced by 
app_version seems more natural than hard-coding them in the scheduler. The 
hard-coding also makes releasing a new CUDA app version harder and more 
error-prone, because you have to change the scheduler at the same time...

3) What about signal handling in CUDA apps? How important is it to free 
allocated device memory when the host app is terminated/killed by the BOINC 
client (SIGKILL five seconds after SIGTERM)? I found some indications that 
device memory is implicitly freed when the host app exits - even if you 
don't call cudaFree(). Can someone confirm this, or is there a risk of 
leaving the GPU device in an unusable state? What about page-locked host 
memory allocated with cudaHostAlloc(), or device memory allocated 
implicitly when creating CUFFT plans...?


Thanks in advance,

Oliver

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
