On 29 Aug 2009 at 0:36, Nicolás wrote:
> On Sat 29 Aug 2009 00:30:44, Josef W. Segur wrote:
> > First thoughts:
> >
> > Whetstones don't measure peak performance, more like minimum performance.
> > The benchmark is of very basic functionality and does not take advantage
> > of improved architectures, instruction sets, etc.
>
> It would take advantage of advanced instruction sets if the C++ compiler uses
> them.
>
> What it doesn't do is use significant amounts of memory. It all fits in the
> CPU cache, so it would give *higher* performance than a real app.

True, one could derive a variant of the Whetstone benchmark which used
MMX instructions for the 3 of the 8 subtests which are integer. That
would give my Pentium MMX host a higher benchmark. Do that for each
possible set of instruction extensions, and you'd have something which
might better match performance on real work. But each time a CPU vendor
came out with a new and improved chip you'd have to add another variant.
That way lies madness, or at least endless argument.

Certainly the smallness of the tests is among the other reasons they
shouldn't be used as more than a very rough general indication of a
host's capability. It's good we have a duration correction factor to
adjust for the inaccuracies, IMO.
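A rough sketch of the kind of per-host correction I mean (the names and
the exact asymmetric update rule here are invented for illustration, not
BOINC's actual code):

```python
# Simplified sketch of a per-host duration correction factor (DCF).
# The factor multiplies future runtime estimates, so chronic
# under-estimates push it above 1.0 and over-estimates pull it down.

def update_dcf(dcf, estimated_secs, actual_secs):
    """Return a new DCF after a task completes."""
    ratio = actual_secs / estimated_secs
    if ratio > dcf:
        # Under-estimate: raise the factor immediately so the host
        # isn't over-committed with work it can't finish in time.
        return ratio
    # Over-estimate: ease the factor down slowly (10% step toward the
    # observed ratio) so one unusually fast task doesn't swing things.
    return dcf + 0.1 * (ratio - dcf)

dcf = 1.0
dcf = update_dcf(dcf, estimated_secs=100.0, actual_secs=150.0)  # jumps to 1.5
dcf = update_dcf(dcf, estimated_secs=100.0, actual_secs=100.0)  # eases to 1.45
```

The asymmetry (jump up, drift down) is the point: it errs on the side of
not missing deadlines.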

It occurs to me that David's approach might be workable without actual
separate app versions, using instead a sub_version indicator. That would
be sent to the client with each task assigned, and the client would keep
the flops_est values separate on that basis. The projects wouldn't have
to maintain a bunch of app versions; the work would simply have to be
classified at creation as a specific sub_version. The client would
ignore sub_version when linking work to an app version.
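To make the bookkeeping concrete, something like the following on the
client side (all names here are hypothetical, and the moving-average
rule is just one plausible choice, not a proposal for the actual code):

```python
# Hypothetical per-(app_version, sub_version) flops_est bookkeeping.
# sub_version only selects which speed estimate to use; it plays no
# part in linking work to an app version.

class FlopsTracker:
    def __init__(self):
        # Key: (app_version_id, sub_version); value: running average
        # of observed floating-point speed for that class of work.
        self.flops_est = {}

    def record(self, app_version_id, sub_version, observed_flops):
        key = (app_version_id, sub_version)
        prev = self.flops_est.get(key)
        # Exponential moving average of the observed speed.
        self.flops_est[key] = (observed_flops if prev is None
                               else 0.9 * prev + 0.1 * observed_flops)

    def estimate_secs(self, app_version_id, sub_version,
                      task_flops, default_flops):
        # Fall back to the app version's plain estimate when no work
        # of this sub_version has completed yet.
        speed = self.flops_est.get((app_version_id, sub_version),
                                   default_flops)
        return task_flops / speed

tracker = FlopsTracker()
tracker.record(1, "fast", 2.0e9)
print(tracker.estimate_secs(1, "fast", 1.0e12, default_flops=1.0e9))  # 500.0
```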
--
Joe
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.