On Tuesday 14 Jul 2009 09:41:10, Carl Christensen wrote:
> With BOINC credits the simple rule is that there's no pleasing anyone.  I
> actually think that if my GPU does 10 times the work of a CPU, I should
> get 10 times the credit (which I think is what you or the noisier credit
> people are complaining about). At the very least, that promotes better
> energy efficiency for a project, in addition to faster turnaround times
> for a workunit.

That's correct. If a GPU is 10x faster, you should get 10x the credits per day, 
which is accomplished simply by granting the same credits per task no matter 
the processor.

But here's scenario #3, following my message from yesterday:

Project A gives 12 credits per task. The task takes four hours on a certain 
CPU, so a user running that project exclusively will get 72 cr/day (again, 
this is a fictional situation; let's ignore whether this rate is reasonable 
compared to other current real projects). Other projects also give 72 cr/day, 
so "cross-project credit parity" is okay.

The project manages to optimize the app so it takes *three* hours on this CPU. 
To keep credits in sync with other projects, they lower credits per task to 9, 
so the user still gets 72 cr/day. That is, they choose the second option 
described in my scenario 2.
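
(With the same throwaway helper as above:)

    print(credits_per_day(3, 9))    # 3 h/task at 9 cr/task -> 72.0 cr/day, parity kept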

So far so good.

Suppose this project, still granting 9 cr/task, releases a GPU app. The task 
takes three hours on the CPU, and the new GPU app takes one hour on a certain 
GPU model. Credits per task are the same for both. So a user running full-time 
on that GPU will get 216 cr/day (or more, if he *also* runs the CPU app).
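
(Again with the same helper:)

    print(credits_per_day(3, 9))    # CPU: 3 h/task at 9 cr/task -> 72.0 cr/day
    print(credits_per_day(1, 9))    # GPU: 1 h/task at 9 cr/task -> 216.0 cr/day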

Now the project finds a way to speed it up further on CPUs. The task goes down 
to two hours on that CPU and remains one hour on the GPU. What do they do with 
credits now?

Option 1: the project lowers credits to 6 per task so that, taking two hours
per task, that CPU still gets 72 cr/day. But then GPUs will get 144 cr/day. Why
do GPU users get less than before (it was 216 cr/day), if their app is the same?

Option 2: the project leaves credits at 9 per task. GPUs still get 216 cr/day.
The CPU now gets 108 cr/day. People will say it's overgranting, and it's true:
the same CPU gets close to 72 cr/day in every other project, but 108 cr/day in
this one.
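
(Both options side by side with the same helper, still assuming a 24 h/day host:)

    # Option 1: lower to 6 cr/task
    print(credits_per_day(2, 6))    # CPU -> 72.0 cr/day
    print(credits_per_day(1, 6))    # GPU -> 144.0 cr/day (down from 216)

    # Option 2: keep 9 cr/task
    print(credits_per_day(2, 9))    # CPU -> 108.0 cr/day (above the ~72 other projects grant)
    print(credits_per_day(1, 9))    # GPU -> 216.0 cr/day (unchanged)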



Even more: think about what would have happened if they had released the GPU 
app before doing the *first* CPU optimization (that is, if they had released 
the 1 h GPU app while still having a 4 h CPU app), or if they had released the 
GPU app only after doing both CPU optimizations.

Is there any good reason for credits to be different depending on what was 
released first?
