On Sep 25, 2009, at 8:34 AM, Lynn W. Taylor wrote:

> Even I can figure out that returning work late would have a negative  
> effect on credit.

And I never suggested that it would not.  Variety in the queue, and in  
the work selected to run on the processor, has nothing to do with  
deadlines.  It has to do with Resource Share, a bad allocation system  
that has no meaning to a person who is single-project oriented, and  
one we are now throwing under the bus in a variety of ways while  
pretending we aren't.  Before I got sidetracked I was on a three-month  
data collection effort that probably would have proved that ...  
alas ... it will have to wait until the beginning of next year ...  
Because I am not convinced, watching BOINC as I do, that it maintains  
Resource Share over the long term either ... in part because of debt  
caps, bugs, and other issues, including even project outages.  You  
cannot have it both ways ...
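To make concrete what I mean about Resource Share drifting, here is a toy sketch of debt-based project selection with a debt cap.  This is my own simplification with invented names, not BOINC's actual scheduler code; it only illustrates the accounting idea of accruing entitlement per share and paying it back when a project runs:

```python
# Toy sketch of resource-share debt accounting (illustrative only;
# BOINC's real scheduler is far more complex).

def run_schedule(shares, steps, debt_cap=10.0):
    """Simulate picking one project per time step by highest debt."""
    total = sum(shares.values())
    debt = {p: 0.0 for p in shares}   # how much CPU each project is "owed"
    work = {p: 0 for p in shares}     # steps actually run per project
    for _ in range(steps):
        chosen = max(debt, key=lambda p: debt[p])
        for p in shares:
            debt[p] += shares[p] / total   # entitlement accrued this step
        debt[chosen] -= 1.0                # the chosen project got the CPU
        for p in debt:                     # debt caps can distort shares
            debt[p] = max(-debt_cap, min(debt_cap, debt[p]))
        work[chosen] += 1
    return work

# A 3:1 resource share settles to a 3:1 split of work in this toy model.
work = run_schedule({"A": 3, "B": 1}, steps=400)
```

In this idealized model the split comes out exactly 3:1; my point in the paragraph above is that caps, bugs, and outages keep the real system from tracking the shares this cleanly over the long term.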

> You seem to be arguing that variety is more important than deadlines.

No, but on almost all my machines variety does play a big part in the  
efficiency and speed of processing.  I made a note in my other post  
about HT, but if you are not aware of the subtleties (forgive me if  
you are), the nub is this ... a hyper-threaded processor, by sharing  
execution resources within a single physical core, gives the  
appearance of two processors for the price of one.  That means the  
system would have far higher throughput if I mix a project with an  
integer-heavy instruction mix against one with an FP-heavy mix,  
because the two would not get in each other's way as much ...
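The effect can be sketched with a toy pairing model.  The speed numbers here are invented purely for illustration; real contention depends on the actual workloads and the specific CPU:

```python
# Toy model: an integer-heavy task paired with an FP-heavy task on one
# hyper-threaded core contends less than two tasks of the same type.
# Per-thread relative speeds below are hypothetical, for illustration.

SPEED = {("int", "fp"): 0.90, ("fp", "int"): 0.90,   # complementary mix
         ("int", "int"): 0.55, ("fp", "fp"): 0.55}   # fighting over units

def throughput(pairs):
    """Total work per unit time for a list of two-task core pairings."""
    return sum(2 * SPEED[p] for p in pairs)

mixed   = [("int", "fp"), ("int", "fp")]   # variety in the queue
uniform = [("int", "int"), ("fp", "fp")]   # one project type at a time

assert throughput(mixed) > throughput(uniform)
```

With these made-up numbers the mixed queue does 3.6 core-units of work against 2.2 for the uniform one, which is the sense in which variety buys throughput.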

As such, more effective processing.  Efficiency was John's standard  
for "goodness" in the argument against my proposal for calibration,  
yet here he argues that giving up efficiency is OK ...

> Paul, I know you're mad because you're sure that your benchmark idea  
> is so obviously correct that you can't imagine why it would not be  
> instantly adopted.

I'm autistic ... I don't get mad about things like that ...  
hypocrisy ... yeah ... that bugs me, though it does not make me  
mad ... John alternately picks an argument against a change to improve  
the efficiency of BOINC, citing a project that does not exist and  
that, as I pointed out, would violate other constraints, causing  
massive loss of computing effort, and then takes this argument here,  
where he is dead set against a proposal, citing loss of efficiency.  
But it depends on how you define efficiency ... by piles of results,  
or by piles of accurate results, or results that you can have more  
confidence in ... I don't mind the argument used; he just cannot have  
it both ways ... either he is against inefficiency or he isn't ...

> In reality, for many projects the "benchmark" would change based on  
> what standard work unit was chosen.  Just look at all of the  
> discussion about VLAR and VHAR work.  It's especially bad for some  
> angle ranges and CUDA cards.

Yes it would.  But here you are using the argument that if we cannot  
have it perfect, there is no point in making it better ... in fact, if  
you actually bother to read my long-winded proposals and the  
discussions wherein I defended them lo these many years, you would  
find that I actually suggested we go back to fundamentals in SaH, and  
in other projects where we could, and create a suite that starts with  
tasks that have known characteristics: one pulse, one double, one  
triplet, one Gaussian, etc., and then run those ... we would then not  
only be measuring the system, we would be proving our software ...  
then you add noise and test again ... once more proving the  
software ... then you use actual data ... which we would not need  
computer software to search if the contents were clear ...
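The staged idea can be sketched in a few lines.  This is my own illustration, not any project's actual analysis code; the signal generator and "detector" here are deliberately trivial stand-ins:

```python
# Sketch of a "known characteristics" validation suite: inject one
# Gaussian at a known position, run the detector, and check that it
# reports the injected answer -- first clean, then with noise added.
import math
import random

def gaussian(n, center, width, amp=1.0):
    """Synthetic task data: a single Gaussian pulse of known position."""
    return [amp * math.exp(-((i - center) ** 2) / (2 * width ** 2))
            for i in range(n)]

def detect_peak(signal):
    """A stand-in 'detector': index of the largest sample."""
    return max(range(len(signal)), key=lambda i: signal[i])

# Stage 1: a clean task with known content -- proves the detector itself.
clean = gaussian(1024, center=300, width=10)
assert detect_peak(clean) == 300

# Stage 2: the same task plus noise -- proves robustness, per the proposal.
random.seed(42)
noisy = [s + random.gauss(0.0, 0.02) for s in clean]
assert abs(detect_peak(noisy) - 300) <= 4
```

Stage 3 would then be real data; the point is that each stage both characterizes the host and proves the software before anything is trusted on unknown input.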

But we have not done that ... Einstein has their "test signal," and  
that is better than some things, but, once again, I think we are  
taking it on a lot of faith that our software is really doing what we  
think it is doing ... having written software and chased subtle bugs,  
I know just how easy it is to miss something fundamental.  Again, I am  
surprised at the resistance to proving the validity of the  
software ... why the fear?  That is the foremost question in my  
mind ... if the software is as good as "they" think it is ... why the  
reluctance to test it and prove it?

> Benchmarks have been a source of argument since the first benchmarks  
> were designed.  Changing the benchmark just makes trivial changes to  
> the argument.

Yes, and I have a fat file folder full of them, and the discussions  
about them, going back to the 1975 timeframe ... but you are wrong  
about one thing.  There is consensus about one benchmark ... the  
actual workload.  That is an accurate benchmark.  Which is exactly  
what John wants to avoid.  I have not peeked in a while, but my  
recollection is that most of the systems he now has on BOINC are not  
as slow as he claims.  One task per week?
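Calibrating from the actual workload amounts to folding observed runtimes back into the estimates.  Here is a toy sketch of that feedback loop (my own simplification; BOINC's real mechanism, the duration correction factor, differs in detail):

```python
# Toy sketch of calibrating a host against its real workload: nudge a
# correction factor toward the observed ratio of actual to estimated
# runtime.  Gain and names are invented for illustration.

def update_correction(factor, estimated, actual, gain=0.1):
    """Exponentially smooth the factor toward the observed ratio."""
    observed = actual / estimated
    return factor + gain * (observed - factor)

factor = 1.0
# A host that consistently finishes in 80% of the estimated time ...
for _ in range(50):
    factor = update_correction(factor, estimated=3600.0, actual=2880.0)
# ... converges to a correction factor near 0.8.
```

No synthetic benchmark is needed: the workload itself is the measurement, which is exactly the "accurate benchmark" I mean above.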

> For your benchmark idea to have any merit at all, it has to work  
> with _any_ work unit, without regard to relative efficiency.

No, a very specific test suite.  If you look at my idea, it is not  
only to characterize the systems in terms of speed but to validate  
them in terms of stability and accuracy of returned results.  And  
though John is desperate (it seems to me) to portray the concept as  
90% testing and 10% work, had he bothered to read the proposal  
(instead of skimming it) he too might have learned that I have an  
interest in productivity and efficiency that is more rabid than his,  
because I have been consistent in my arguments for less load on the  
clients, less code, and in some cases fewer features that are  
counterproductive.

At any rate, you actually have to read my original proposal and the  
"legislative history" in the discussions of the same to truly know  
what I intended.  The proposal is not "just" about benchmarking,  
though some want to see it that way ... it is about a lot of  
things ... benchmarking, testing our software, testing the suite of  
hardware, getting efficiency metrics, and on and on ...

Sadly, to be intellectually honest you actually have to read what I  
write, not skim it to find a talking point you can litigate  
against ... and yeah ... failure to do that also annoys me ... you  
cannot champion argument "A" in one discussion and assert it is  
irrelevant in all others to suit your convenience ...

If you really want to know and understand, I can and do talk to people  
on the phone, or on Skype ... sometimes you can get much further there  
than with this typing nonsense ...
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
