Lynn W. Taylor wrote:
> With much trimmed....
> 
> Martin wrote:
>> Slightly long to try to 'wrap-up'. For those in a hurry:
>>
>> If the clear measurement is not seen as important, then we will 
>> continue to have arbitrarily drifting credits, and no one can say 
>> by how much they drift.
>>
>> I don't think any "autoflops" /can/ fix a moving target.
>>
> 
> (much removed)
> 
>> credits = k * cobblestones
> 
> ... and I believe that K is 1.

"k" can be anything depending on:

what a project does that is different to s...@h;

how well a particular host architecture performs for that project's tasks;

how well compiler optimisations fit for the compiler used;

and whether other tasks also running on that host interfere with the 
observed performance. None of those aspects or other-tha...@h resources 
are accounted for at present.
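
Since none of those factors is measured directly, the only way to see the effective "k" for a given host on a given project would be empirically, as the ratio of credit actually granted to the benchmark-based claim. A minimal sketch of that idea (the function and figures are illustrative assumptions, not anything BOINC computes today):

```python
# Hypothetical sketch: estimating the effective "k" for one host on one
# project, given the credit actually granted per task and the
# whetstone/dhrystone-based (cobblestone) claim for the same tasks.

def effective_k(granted_credits, cobblestone_claims):
    """Return (mean k, per-task ratios).

    granted_credits    -- credit actually awarded for each task
    cobblestone_claims -- benchmark-based claim for the same tasks
    """
    ratios = [g / c for g, c in zip(granted_credits, cobblestone_claims)]
    return sum(ratios) / len(ratios), ratios

# Example: a host whose compiler optimisations happen to suit the
# project's tasks earns more than its benchmarks predict, so k > 1
# for some tasks and k < 1 for others -- k is anything but a constant 1.
mean_k, ratios = effective_k([120.0, 95.0, 110.0], [100.0, 100.0, 100.0])
```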


> Going back to the formula, it says "whetstones and dhrystones" and if 
> they're a moving target, then we'd better be aiming for a moving target.

Even randomly moving? Use impossibly large error bars?


> Poor choice?  Maybe, but if that's the definition, whatever is done has 
> to be true to the formula.

That was a good initial choice given the experience with s...@h, but time 
and projects have moved on.

Hence, for historical consistency, we can keep the cobblestones, along 
with their known problems for different projects and different host 
hardware. For simplicity, they can even be used as a first 'guess' for 
scheduling a new system, as a form of "BogoMIPS".


We also have the option of fixing the credits with a weighted sum of 
more applicable measures, as has been proposed for some time as a 
future development.
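
To make the weighted-sum idea concrete, here is a minimal sketch. The measure names and weights are illustrative assumptions only, not a proposed BOINC design:

```python
# Hypothetical sketch of the "weighted sum of more applicable measures"
# idea: credit per task as a weighted combination of several measured
# (already normalised) resource costs, rather than CPU benchmarks alone.

WEIGHTS = {
    "fp_ops": 0.5,      # floating-point work actually performed
    "int_ops": 0.2,     # integer work
    "mem_bytes": 0.2,   # memory traffic
    "disk_bytes": 0.1,  # I/O
}

def weighted_credit(measures, weights=WEIGHTS):
    """Combine normalised resource measures into one credit figure."""
    return sum(weights[name] * measures[name] for name in weights)

task = {"fp_ops": 100.0, "int_ops": 40.0, "mem_bytes": 30.0, "disk_bytes": 10.0}
# weighted_credit(task) = 0.5*100 + 0.2*40 + 0.2*30 + 0.1*10 = 65.0
```

Changing the weights then becomes a policy decision made once, in the open, instead of an arbitrary drift.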

Or we can accept that the cobblestones are flawed and need to be 
calibrated on a WU-by-WU basis, against an example of real hardware, so 
that the unmeasured imponderables are included and we maintain a stable 
reference. We get a different "k" for every type of WU, with "k" 
contrived to give a consistent performance measure on the "Etalon" 
computer.

For example, for s...@h, the amount of processing required for a WU 
depends strongly on the AR (angle range). Hence, we try to give the 
Etalon as wide a range of ARs to sample as possible. (The calibration 
then ripples down through the hierarchy of hosts.)
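
The per-WU-type calibration might be sketched as follows. All names and figures here are assumptions for illustration; the only fixed point is that "k" is chosen so the Etalon's results land on a consistent credit scale:

```python
# Hypothetical sketch of per-WU-type calibration against a reference
# ("Etalon") machine: k for each WU type is chosen so that
# k * cobblestone_claim equals the reference credit on the Etalon.

def calibrate_k(etalon_runs):
    """etalon_runs maps WU type -> (cobblestone claim, reference credit).

    Returns a k per WU type.
    """
    return {wu: ref / claim for wu, (claim, ref) in etalon_runs.items()}

# e.g. for s...@h-like tasks, sample a wide range of ARs so that each
# AR band gets its own calibrated k.
k_table = calibrate_k({
    "ar_low":  (80.0, 100.0),   # claim under-states the work: k > 1
    "ar_mid":  (100.0, 100.0),  # claim matches: k == 1
    "ar_high": (125.0, 100.0),  # claim over-states the work: k < 1
})
```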

Or we can abandon the credits altogether.


I would expect that all current users will want some credit measure that 
is in some way comparable to what is 'awarded' at present...


Perhaps the best way to prove all this is to program up a test project 
or to program up a simulation and to actually /try it/ ...

Regards,
Martin

-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
