Lynn W. Taylor wrote:
> Paul D. Buck wrote:
> 
>> The saddest thing about this whole discussion is that I seem to be the 
>> only one in the world that wants to improve the rigor with which we do 
>> the science we as a collective claim to be so concerned about...
>>
>> I will say it again, it is not about getting results ... it is about 
>> getting correct results ... and we should be willing to pay the costs to 
>> get correct results.
[...]
> It is up to each project to provide their own science application, and 
> to determine how they can best validate the results.
> 
> BOINC can only provide tools, they can't (and shouldn't) make their use 
> compulsory.

Crossed lines in the communication there? Or multiple viewpoints on what 
counts as "correct"?...


Both points are correct in that:

BOINC provides a framework for running science applications on untrusted 
and unreliable hosts. That framework should include tools to ensure that 
the results returned are the "correct" results of the application having 
been correctly run.
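As a rough illustration of that kind of framework-level check, here is a 
minimal sketch (in Python, purely hypothetical — not BOINC's actual 
validator code or API) of the redundancy idea: send the same workunit to 
several hosts and only accept a result once enough replicas agree.

```python
from collections import Counter

def validate_quorum(results, min_quorum=2):
    """Illustrative quorum check: given the result strings returned by
    replica hosts for one workunit, return the canonical result if at
    least `min_quorum` replicas agree exactly, else None (meaning more
    replicas are still needed before the result can be trusted)."""
    if not results:
        return None
    # Find the most common result among the replicas returned so far.
    result, count = Counter(results).most_common(1)[0]
    return result if count >= min_quorum else None
```

For example, `validate_quorum(["abc", "abc", "xyz"])` accepts `"abc"`, 
while `validate_quorum(["abc", "xyz"])` returns `None` — one corrupt or 
malicious host can't force a wrong canonical result on its own. Real 
validators are of course project-specific and often compare results 
fuzzily rather than exactly.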

Whether or not the results are useful or "correct" for whatever science 
is being explored for the project itself is indeed up to the people 
running the project!


There is also a third "correct": whether, for the runtime 
statistics/credits, the credit score can be considered correct, or is 
just an arbitrary number that vaguely looks to be in proportion to CPU 
runtime as compared to a s...@h WU...
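One simple way a project could make credit less arbitrary is to grant, 
per quorum, a robust statistic of the replicas' claimed credits rather 
than trusting any single host's claim. This is only a hypothetical 
sketch of that idea, not BOINC's actual credit-granting code:

```python
import statistics

def granted_credit(claimed):
    """Illustrative credit grant: take the median of the credits claimed
    by the replica hosts in a quorum, so one host with an inflated (or
    deflated) claim cannot skew the granted value."""
    return statistics.median(claimed)
```

With claims of 10, 12 and 300 credits, the grant would be 12 — the 
outlier claim is simply ignored rather than averaged in.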

Happy crunchin',
Martin

-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.