On Sep 28, 2009, at 12:58 PM, Lynn W. Taylor wrote:

>
>
> Paul D. Buck wrote:
>> On Sep 28, 2009, at 11:13 AM, Lynn W. Taylor wrote:
>>> The benchmark affects the estimated run time, and the amount of
>>> work downloaded. It affects credit, and credit is "fun" but it's
>>> not science.
>> Then you are also guilty of not reading the proposal. I have
>> always said that while running calibration tasks, the same
>> compensation would be paid for a calibration task as for any other
>> task. In fact, I said that it could qualify for a bonus to
>> encourage participation in the system. Yet we meet resistance, as
>> you and John express it, because you don't seem interested in any
>> attempt to improve the operation of the system as a whole.
>
> I'm not talking about awarding credit, bonus credits, better
> assignment of work, or anything else along those lines.
>
> When you come back with "I've always said that while running
> calibration tasks the same compensation...." it shows that you
> missed my question. I wasn't asking about credit. You did the same
> thing in the other thread when I raised a separate issue about
> continuous downloads and you told me that I had your issue wrong.

Which is part of the problem. You pick out a small item of the
proposal and object to that... which is fine in and of itself, but it
should not be cause for ignoring the rest of the proposal. When you
say credit is fun but not science, you can also take that to mean it
is not science precisely because we don't measure it accurately.

In a nutshell:
1) An improved benchmark, because real work is used to make the
measurement
2) Validation of the project software, because known work units will
be generated to ensure that the software is returning correct answers
3) Validation of the hardware suite we are using, by running known
units through it and seeing whether all machines respond the same

There are more potential gains, but those are the basic three; a
sketch of point 2 follows below.
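
To make point 2 concrete, here is a minimal known-answer test, in
Python for brevity. Everything in it is illustrative: run_science_app,
the tolerance, and the stand-in "science app" are my assumptions, not
any real BOINC or SaH interface.

# Hedged sketch of point 2: validate the science application by
# feeding it a work unit whose correct answer was fixed in advance.
# run_science_app and the tolerance are illustrative assumptions.
def validate_app(run_science_app, known_input, known_answer, tol=1e-9):
    """Return True if the app reproduces the known answer within tol."""
    result = run_science_app(known_input)
    return abs(result - known_answer) <= tol

# Stand-in "science app" for demonstration: sums its input samples.
if __name__ == "__main__":
    app = lambda samples: sum(samples)
    assert validate_app(app, [0.5, 1.25, -0.75], 1.0), "known-answer test failed"
    print("known-answer test passed")

The point of that shape is that the expected answer exists before the
software runs, so the test cannot silently inherit yesterday's mistake.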

If we did have a calibrated network, the projects would not even have
to fumble with trying to figure out how many FLOPS are actually out
there ... the system would determine that automatically.
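
A hedged sketch of what "determine that automatically" could look
like: if a calibration task carries a known operation count, the
host's effective speed is just that count divided by the elapsed
time. The toy workload and KNOWN_FLOP_COUNT below are assumptions for
illustration, not BOINC's benchmark code.

import time

KNOWN_FLOP_COUNT = 2_000_000  # the loop below does 1M multiplies + 1M adds

def calibration_task():
    # Fixed, well-understood workload with a countable number of FLOPs.
    acc = 0.0
    x = 1.000001
    for _ in range(1_000_000):
        acc += x * x  # one multiply and one add per iteration
    return acc

start = time.perf_counter()
calibration_task()
elapsed = time.perf_counter() - start
print("effective speed: %.3e FLOPS" % (KNOWN_FLOP_COUNT / elapsed))

(A real calibration task would of course be actual project work with
an instrumented operation count, not a toy loop.)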

And I do not see how one can claim that it is all about science and
then not be concerned with the accuracy of the results. I have listed
several ways in which the results returned could be questionable, and
that too does not seem to be understood.

The example I used in the past is this. SaH is basically a signal
hunter. When was the last time a test work unit with known signals in
the input data was subjected to analysis? If anyone who reads this
board knows, they have not yet answered the question. All the testing
I know of uses a task of real data whose contents we assume we know
because we have run it through the software. And because today's
answer matches yesterday's answer, we assume the software is correct.
But if yesterday's software was bad, then we are just making today
match yesterday's bad analysis.

Which is why, to the extent possible, I suggested a suite of test
signals and tasks ... no competent technician tests an item of
electronics with only one test ...
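
As an illustration of what such a suite might look like, here is a toy
version: several tones of known frequency are injected into noise, and
a naive detector must find every one of them. The detector, the signal
parameters, and the 1 Hz tolerance are all my assumptions; this
resembles nothing in the actual SaH analysis code.

import math
import random

def make_task(freq_hz, n=256, rate_hz=256.0, amp=5.0, seed=0):
    # One second of noisy data with a known tone injected at freq_hz.
    rng = random.Random(seed)
    return [amp * math.sin(2 * math.pi * freq_hz * i / rate_hz)
            + rng.gauss(0.0, 1.0) for i in range(n)]

def detect_peak(samples, rate_hz=256.0):
    # Naive DFT scan: return the frequency of the strongest bin.
    n = len(samples)
    best_f, best_p = 0.0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = k * rate_hz / n, p
    return best_f

# The suite: several known signals, not just one test.
for f in (30.0, 60.0, 100.0):
    found = detect_peak(make_task(f))
    assert abs(found - f) <= 1.0, "missed injected signal at %g Hz" % f
    print("injected %g Hz, detected %g Hz" % (f, found))

If any machine or any build of the software misses one of the injected
signals, you know immediately, instead of discovering it years later
by accident.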

> BOINC is a black box.  A project dumps work units and a science
> application into the box, and results pour out.
>
> I'm asking only about the results.  Unless I'm badly mistaken, that
> was John's question as well.
>
> We can all get excited about how BOINC does (or doesn't) work well,
> but all the projects care about are results.

BOINC is a black box, true... put stuff in and get stuff out. But
GIGO cuts both ways: garbage in, garbage out... and, with an
unvalidated box, good in, garbage out...

The point is that the proposal, which you insist you have read, is
about doing more than improving the credit reporting and the basis
for calculating that credit. It is not just "gold plating" the
benchmark, but a serious effort to put some scientific rigor into the
instruments that we are using ...

The saddest thing about this whole discussion is that I seem to be
the only one in the world who wants to improve the rigor with which
we do the science we, as a collective, claim to be so concerned
about...

I will say it again: it is not about getting results ... it is about
getting correct results ... and we should be willing to pay the costs
to get correct results.