Actually, the compute resources used are proportional to wall time *
the fraction of the resource the application actually gets (rather than
other programs) * the speed of the resource. This is true even if the
resource is used inefficiently (e.g. AlmereGrid, which seems to spend
much of its time loading and unloading applications).
The current mechanism for determining the credit granted (not that all
projects are using it yet) gives a benefit to those that adopt an
optimized application early, and penalizes those that adopt it late.
Once everyone has adopted the optimized application the bonus
disappears. There is still a benefit to the project in deploying a
better-optimized application, in that more work will get done.
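A minimal sketch of that reading, in Python, assuming per-task credit is
pegged to the project-wide average resources a task currently costs; the
function names, the scale factor and the numbers are illustrative
assumptions, not the actual BOINC credit code:

# Illustrative sketch only (assumed names and numbers, not the BOINC code).

def resources_used(wall_time_s, fraction_of_device, device_flops):
    """Compute resources consumed by one task:
    wall time * fraction of the device the app actually got * device speed."""
    return wall_time_s * fraction_of_device * device_flops

def credit_per_task(project_avg_resources, scale=1.0e-12):
    """Per-task credit pegged to the project-wide average cost of a task.
    A host running an optimized app spends less than the average per task,
    so it completes more tasks per day for the same credit each -- the
    early-adopter bonus.  Once everyone runs the optimized app, the
    average falls and the bonus disappears."""
    return project_avg_resources * scale

# Stock app: 10 hours at full use of a 10 GFLOPS core.
stock = resources_used(36000, 1.0, 1.0e10)
# Optimized app: the same task in 5 hours on the same core.
optimized = resources_used(18000, 1.0, 1.0e10)

avg = stock                    # early on, the average still reflects the stock app
print(credit_per_task(avg))    # credit per task is the same for both apps
print(stock / optimized)       # but the optimized host finishes 2x tasks per day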
jm7
Martin <[email protected]> wrote on 10/28/2010 08:17 AM:
To: <[email protected]>
Subject: [boinc_dev] What are credits for? And optimised apps?
Just some brief thoughts:
(Sorry if I'm stating the obvious, I'm just trying to be clear about
assumptions.)
My understanding is that the credits are primarily there to encourage
user participation by offering a 'reward', and to provide a means of
competition that encourages further participation;
Whether intended or not, the credits are also used as a performance
measure for BOINC as a whole, for individual projects, and by users;
The units used for the BOINC credits can be arbitrary. To avoid user
frustration, the credits need to be consistent, non-inflationary, and
above all seen to be "fair".
So...
How do optimised applications fit in with the credits scheme?
If a project task can now be done in less time by an optimised
application than it previously took, should the same credit or less
credit (because less resource time was used) be awarded for that one
task?
How does that work for the credits for the same task run on different
hardware, where one set of hardware is much more efficient/faster than
the other for that task?
For cases where validation is allowed for a task run across different
hardware, I personally feel that the credit calculations should
reliably come close to the same number on any hardware for that one
task... Or should that not be the case?
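One way that could come out close to the same number on any hardware is
to peg the credit to the task's work content (an estimated
floating-point operation count) rather than to the time taken. The
sketch below is only an illustration, using the commonly quoted scale of
200 credits per day of 1 GFLOPS work (check the BOINC documentation for
the exact cobblestone definition):

# Hypothetical illustration: credit pegged to the task's work content,
# so it is the same whichever host ran it and however long it took.

CREDITS_PER_GFLOPS_DAY = 200.0     # commonly quoted cobblestone scale (assumed)
SECONDS_PER_DAY = 86400.0

def credit_from_fpops(estimated_fpops):
    """Credit for a task estimated to need estimated_fpops floating-point
    operations, independent of the hardware that executed it."""
    return estimated_fpops * CREDITS_PER_GFLOPS_DAY / (1.0e9 * SECONDS_PER_DAY)

# The same 4.32e13-fpops task earns the same credit on a fast GPU or a
# slow CPU; only the wall time differs.
print(credit_from_fpops(4.32e13))  # -> 100.0 credits on any host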
Personal thoughts:
I once favoured the idea (Eric's?) of scheduling using client-computed
RAC, so that credits would be divided between projects in the way the
users had set via their resource shares. However, a big assumption for
that to work well is that everything is calibrated so that:
credits are directly proportional to compute resources used
However, is the "compute resource used" not actually:
k * "time the resource is utilised"
where "k" depends on each individual project application and on the
efficiency of the resource utilisation for that individual application
(and even on the dataset being processed)? Hence, "k" is NOT a constant!
Hence, including "k" in the scheduling gives the scheduler an
ever-moving target...
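(A toy sketch of that moving target, with made-up efficiency figures:
the same hour of utilised time converts into very different amounts of
"compute resource used" depending on the application and the dataset, so
any calibration against past behaviour goes stale as the mix changes.)

# Toy sketch with made-up numbers: "k" differs per application and dataset.

k_by_app = {
    ("project_a", "dataset_1"): 0.90,  # well-optimised, cache-friendly
    ("project_a", "dataset_2"): 0.55,  # same app, memory-bound dataset
    ("project_b", "dataset_1"): 0.30,  # unoptimised application
}

def compute_resource_used(app, dataset, utilised_seconds):
    """compute resource used = k * time the resource is utilised,
    where k is NOT a constant across applications or datasets."""
    return k_by_app[(app, dataset)] * utilised_seconds

# The same hour of utilisation yields very different "resource used":
for (app, dataset) in k_by_app:
    print(app, dataset, compute_resource_used(app, dataset, 3600))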
Regards,
Martin
--
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.