Nicolás Alvarez wrote:
> On Tuesday 21 Jul 2009 16:54:01, Martin wrote:
>> My thought is that we must have a semantic shift so that what is
>> usefully utilised is rewarded, and not just *time spent* (perhaps busily,
>> uselessly spinning wheels) on whatever hardware.
> 
> The GPU and CPU apps don't necessarily perform the same number of floating
> point operations. If someone optimizes one of the two apps so that it can do
> the same with (slightly?) fewer calculations, and you grant credits per flop,
> then GPU and CPU get different credits for doing the exact same task (meaning
> same input, same output). If that happens, credits aren't really reflecting
> "work done", in my opinion.
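
To make that concrete with purely invented numbers, here is a quick Python
sketch of the credit-per-flop case described above (none of these figures
come from any real project or from Boinc's code):

# Hypothetical illustration (invented numbers): granting credit per FLOP
# pays the less-optimised app more for completing the very same workunit.
CREDIT_PER_GFLOP = 0.1          # made-up exchange rate

cpu_app_gflops = 500.0          # unoptimised CPU app: ops to finish one WU
gpu_app_gflops = 400.0          # optimised GPU app: same WU, fewer ops

print(cpu_app_gflops * CREDIT_PER_GFLOP)   # 50.0 credits
print(gpu_app_gflops * CREDIT_PER_GFLOP)   # 40.0 credits, identical result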

And that is the point and the conflict: What is /actually/ being 'rewarded'?


My view is that we cannot work to the s...@h WU as a "Gold Standard". s...@h 
is far too specialised and restrictive a measure.

Also, I don't think we should try to work to any ideal of "science done" 
- that is far too ethereal and insubstantial an idea.



> By the way, credits are already defined proportional to flops: "1/100th day
> of CPU time on a computer that does both 1000 double-precision MIPS and
> 1000 integer MIPS." In other words, "a 1 GigaFLOP machine, running full
> time, produces 100 units of credit in 1 day."
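
For concreteness, that quoted definition works out roughly as follows; this
Python sketch just restates the wording above, and the variable names are
mine, not necessarily Boinc's actual implementation:

# Sketch of the cobblestone definition as quoted: 100 credits per day of
# CPU time on a reference machine scoring 1000 MFLOPS and 1000 integer MIPS.
SECONDS_PER_DAY = 86400.0
REFERENCE_BENCHMARK = 1000.0        # reference machine's MFLOPS / MIPS
CREDIT_PER_REFERENCE_DAY = 100.0

def claimed_credit(cpu_seconds, whetstone_mflops, dhrystone_mips):
    # Average the two benchmarks, scale by time relative to the reference.
    host_speed = (whetstone_mflops + dhrystone_mips) / 2.0
    days = cpu_seconds / SECONDS_PER_DAY
    return days * (host_speed / REFERENCE_BENCHMARK) * CREDIT_PER_REFERENCE_DAY

# A 1 GigaFLOP / 1000 MIPS host running flat out for one day claims:
print(claimed_credit(86400, 1000, 1000))    # -> 100.0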

And there lies the confusion. It is a mish-mash of units, incomplete, and 
specific to just one small part of one particular type of real-world 
architecture (one that just happens to suit the FFT calculations that 
dominate s...@h).

And yet Boinc is to be general purpose...



For the sake of brevity and to save wearing out my finger with typing, 
my observations in brief:

1: The present cobblestone scoring assumes, very specifically, that _all_ 
Boinc projects are dominantly calculation intensive (s...@h-style FFTs), and 
so all other compute resources are ignored (as worthless?).

2: The present scoring scheme can only work well if all projects can be 
physically 'calibrated' against the s...@h Golden Standard machine (see the 
sketch after this list for what that would entail).

3: We now have projects other than s...@h, a range of CPU architectures and 
coprocessors, and ever more exotic and parallel hardware, and all 
offering varying performance and optimisations that can span orders of 
magnitude.

4: Projects have WUs of variable 'difficulty' and unknown resource 
consumption.

5: We put new projects in an impossible situation of not knowing at what 
level to award an arbitrary score.
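
As for point 2, a hypothetical sketch of what such calibration would actually
mean, in Python, with all names and numbers invented for illustration (this
is not how Boinc works today):

# Hypothetical per-project calibration against one "golden standard" machine:
# every project's workunits would have to be timed on that same reference box
# before cross-project credit could be compared at all.
REFERENCE_CREDIT_PER_DAY = 100.0    # credits the reference machine earns/day

def credit_per_wu(reference_hours_per_wu):
    # Credit a workunit is "worth" if the reference machine needs this many
    # hours of CPU time to complete it.
    return REFERENCE_CREDIT_PER_DAY * reference_hours_per_wu / 24.0

print(credit_per_wu(3.0))    # a 3-hour WU on the reference -> 12.5 credits
print(credit_per_wu(0.5))    # a 30-minute WU               -> ~2.08 credits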



Either 2 or 4 needs fixing.

Or abandon the cobblestones?


( Note one aspect in 5 above: /arbitrary/ )

Regards.
Martin


-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------