Hello.

It looks like the current quota system implementation can't prevent the waste 
of project resources by a "partially" broken host.
For example, consider a host with an anonymous platform running a 
Fermi-incompatible CUDA app on SETI.
It will produce incorrect overflows almost always, but a few specific ARs 
will be processed correctly and receive validation.
This small number of validations, plus the "GPU" status of the app (GPU apps 
have greatly relaxed limits), allows continuous task trashing. The current 
quota system implementation can't prevent massive task trashing in this 
situation.

However, more historical information about host behavior is now stored on the 
servers, on a per-app-version basis.
Maybe something new can be implemented that takes into account not only the 
last successful validation but the host's history too?
The test cases are known; the SETI community already has a list of such 
badly behaving hosts:
http://setiathome.berkeley.edu/forum_thread.php?id=62573&nowrap=true#1061788

The aim should be to reduce their throughput to 1 task per day for the NV GPU 
app until their owners reinstall the GPU app.
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.