Credit is a difficult thing: different people expect different things from
credit and from how it is granted.
I suggest we focus on discussing, and possibly developing, a way to fix the
runtime estimation / work assignment problems, and maybe re-think credit
granting later.
Best,
Bernd
On 10.06.14 20:34, David Anderson wrote:
For credit purposes, the standard is peak FLOPS,
i.e. we give credit for what the device could do,
rather than what it actually did.
Among other things, this encourages projects to develop more efficient apps.
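
To make the peak-FLOPS convention concrete, here is a minimal sketch of
granting credit from peak speed and elapsed time. It assumes the usual
Cobblestone definition (200 credits per day on a 1 GFLOPS host); it is an
illustration, not the actual CreditNew code:

    #include <cstdio>

    // Hypothetical sketch: grant credit for peak device speed, not
    // measured work. Assumes the Cobblestone definition of 200 credits
    // per day on a 1 GFLOPS host.
    const double COBBLESTONE_SCALE = 200.0 / 86400.0 / 1e9; // credits per FLOPS-second

    double peak_flops_credit(double peak_flops, double elapsed_secs) {
        // Credit for what the device *could* do during the run.
        return peak_flops * elapsed_secs * COBBLESTONE_SCALE;
    }

    int main() {
        // e.g. a 10 GFLOPS CPU running a task for one hour -> ~83.3 credits
        printf("%.2f credits\n", peak_flops_credit(10e9, 3600.0));
    }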
Currently we're not measuring this well for x86 CPUs,
since our Whetstone benchmark isn't optimized.
Ideally the BOINC client should include variants for the most common
CPU features, as we do for ARM.
-- D
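
A hedged sketch of the variant dispatch David describes, using the GCC/Clang
CPU feature-test builtin; the variant functions are stubs standing in for
separately optimised Whetstone builds, with placeholder MFLOPS values:

    #include <cstdio>

    // Stubs standing in for separately compiled benchmark builds;
    // the return values are placeholder MFLOPS figures.
    double whetstone_plain()  { return 1000.0; }
    double whetstone_avx()    { return 4000.0; }
    double whetstone_avx512() { return 8000.0; }

    typedef double (*whetstone_fn)();

    // Pick the fastest benchmark variant the CPU supports.
    whetstone_fn pick_whetstone() {
    #if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx512f")) return whetstone_avx512;
        if (__builtin_cpu_supports("avx"))     return whetstone_avx;
    #endif
        return whetstone_plain;  // conservative fallback, as today
    }

    int main() {
        printf("benchmark: %.0f MFLOPS\n", pick_whetstone()());
    }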
On 10-Jun-2014 2:09 AM, Richard Haselgrove wrote:
Before anybody leaps into making any changes on the basis of that observation, I
think we ought to pause and consider why we have a benchmark, and what we use
it for.
I'd suggest that in an ideal world, we would be measuring the actual running
speed of (each project's) science applications on that particular host,
optimisations and all. We gradually do this through the runtime averages
anyway, but it's hard to gather a priori data on a new host.
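
As an illustration of the runtime-averages idea, here is a sketch of a
decaying per-(host, app version) speed estimate, updated from each completed
task; the struct, field names, and smoothing weight are illustrative, not the
server's actual schema:

    #include <cstdio>

    // Hypothetical per-(host, app version) record of measured speed.
    struct HostAppStats {
        double avg_flops = 0;   // smoothed measured FLOPS
        int    n_samples = 0;   // completed tasks seen so far
    };

    // One completed task yields one speed sample: the task's estimated
    // FLOP count divided by its elapsed time.
    void update_speed(HostAppStats& s, double task_flops, double elapsed_secs) {
        double sample = task_flops / elapsed_secs;
        if (s.n_samples == 0) {
            s.avg_flops = sample;       // first sample: no prior to blend with
        } else {
            const double w = 0.1;       // illustrative smoothing weight
            s.avg_flops = (1 - w) * s.avg_flops + w * sample;
        }
        s.n_samples++;
    }

    int main() {
        HostAppStats s;
        update_speed(s, 3.6e13, 3600);  // a 36 TFLOP task that took an hour
        printf("measured speed: %.2f GFLOPS\n", s.avg_flops / 1e9);
    }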
Instead of (initially) measuring science application performance, we measure
hardware performance as a surrogate. We now have (at least) three ways of doing
that:
x86: the minimum, most conservative estimate; no optimisations allowed for.
Android: allows for optimised hardware pathways with vfp or neon, but doesn't
relate back to science app capability.
GPU: maximum theoretical 'peak flops', calculated from card parameters, then
scaled back by rule of thumb (see the sketch after this list).
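
For the GPU case, a sketch of computing theoretical peak from card parameters
and then scaling back; the 2-FLOPs-per-cycle (FMA) factor and the 0.2 derating
constant are rule-of-thumb assumptions, not values taken from the client:

    #include <cstdio>

    // Hypothetical sketch of the GPU approach: theoretical peak from
    // card parameters, then scaled back by a rule of thumb.
    double gpu_estimated_flops(int shader_cores, double clock_hz) {
        double peak = shader_cores * clock_hz * 2.0;  // assume 2 FLOPs/cycle (FMA)
        const double rule_of_thumb = 0.2;             // assumed derating factor
        return peak * rule_of_thumb;
    }

    int main() {
        // e.g. 1536 shader cores at 1 GHz -> 614.40 GFLOPS after scaling
        printf("%.2f GFLOPS (scaled)\n", gpu_estimated_flops(1536, 1.0e9) / 1e9);
    }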
Maybe we should standardise on just one approach?