First thoughts:

Whetstones don't measure peak performance; they're closer to a measure
of minimum performance.
The benchmark exercises only very basic functionality and does not take
advantage of improved architectures, instruction sets, etc. My Pentium
MMX host has a duration correction factor near 1 doing s...@h work,
while more modern hosts have fractional DCFs of 0.2 to 0.3. Of course
there might be a project whose application is so poorly written, or
constrained in some other way, that Whetstones happen to match even on
modern hosts.
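To make the DCF point concrete, here's a minimal sketch of how a
benchmark-based estimate gets corrected; the function and field names
here are illustrative only, not the actual BOINC client code:

```python
def estimated_duration(rsc_fpops_est, whetstone_flops, dcf):
    """Estimated runtime = claimed FLOPs / benchmark speed, scaled by
    the host's duration correction factor."""
    return rsc_fpops_est / whetstone_flops * dcf

# A modern host that actually finishes work in 25% of the
# Whetstone-based estimate converges toward a DCF near 0.25:
naive_estimate = 36000.0   # seconds, from Whetstone alone
actual_runtime = 9000.0    # seconds, observed
dcf = actual_runtime / naive_estimate  # 0.25
```

The point is that DCF already absorbs the gap between Whetstone and
real application throughput, which is exactly why it drifts so far from
1 on modern hosts.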

The setiathome_enhanced work has about a 5:1 execution time range on
CPUs, maybe 10:1 on CUDA GPUs with a different and much more common
worst-case characteristic. The project would need about 3 separate CPU
applications and 4 separate CUDA applications for your new scheme. Work
fetch now has about a +/- 50% uncertainty in the estimated times; the
somewhat random distribution of work helps damp that. Classifying the
work into several applications reduces that helpful random factor,
making it necessary to have still more applications.
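The damping effect of a random work mix can be shown with a quick
simulation. This is only a toy model, assuming job runtimes spread
uniformly over a 5:1 range, not actual project data:

```python
import random
import statistics

random.seed(42)

def batch_relative_spread(jobs_per_batch, trials=2000):
    """Relative standard deviation of the total runtime of a batch of
    jobs drawn uniformly from a 5:1 runtime range (1 to 5 hours)."""
    totals = [sum(random.uniform(1.0, 5.0) for _ in range(jobs_per_batch))
              for _ in range(trials)]
    return statistics.pstdev(totals) / statistics.fmean(totals)

# A single job's runtime is highly uncertain; a cache holding a random
# mix of 25 jobs has a much tighter total-time spread (roughly the
# single-job spread divided by sqrt(25)).
spread_1 = batch_relative_spread(1)
spread_25 = batch_relative_spread(25)
```

Sorting the work into narrow per-application classes removes exactly
this averaging effect, so each class's estimates have to stand on their
own.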

Given the 100 slots between Feeder and Scheduler, how do you propose
assigning weights such that whatever work is being produced matches the
range of different applications? A different method of characterizing
the slots so that work for any of a group of applications is suitable
for a slot? Or could the shared memory limitation be removed by going
to memory mapped files, as has been done client side on *nix and
Mac OS?
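On the weighting question, the simplest scheme I can picture is
proportional allocation of the slot pool. A hypothetical sketch
follows; assign_slots and the application names are made up for
illustration and correspond to nothing in the actual feeder:

```python
def assign_slots(weights, total_slots=100):
    """Divide a fixed pool of feeder slots among applications in
    proportion to their weights, using the largest-remainder method so
    every slot is handed out."""
    total_w = sum(weights.values())
    raw = {app: total_slots * w / total_w for app, w in weights.items()}
    slots = {app: int(r) for app, r in raw.items()}
    leftover = total_slots - sum(slots.values())
    # give the remaining slots to apps with the largest fractional parts
    for app in sorted(raw, key=lambda a: raw[a] - slots[a], reverse=True):
        if leftover == 0:
            break
        slots[app] += 1
        leftover -= 1
    return slots
```

Even with something like this, the weights only work if the work being
generated actually matches them, which is the part I don't see how to
guarantee.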

I don't mean to be totally negative, simply pointing out concerns.
-- 
                                                             Joe


On 28 Aug 2009 at 12:45, David wrote:

> I'm coming around to the viewpoint that projects shouldn't be expected
> to supply estimates of job duration or application performance.
> I think it's feasible to maintain these estimates dynamically,
> based on actual job runtimes.
> I've sketched a set of changes that would accomplish this:
> http://boinc.berkeley.edu/trac/wiki/AutoFlops
> Comments welcome.
> 
> BTW, a bonus of the proposed design is that it provides
> a project-independent credit-granting policy.
> 
> -- David
> 
> Richard Haselgrove wrote:
> > ...  if projects
> > are expected to fine-tune performance metrics down to the individual 
> > plan_class level, then I'm sorry, but they just won't. I've had to shout 
> > (loudly and repeatedly) at both AQUA and GPUGrid to get them to adjust 
> > rsc_fpops_est to within an order of magnitude of reality (in AQUA's case, 
> > two orders of magnitude). 
> _______________________________________________
> boinc_dev mailing list
> [email protected]
> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
> To unsubscribe, visit the above URL and
> (near bottom of page) enter your email address.

