Paul D. Buck wrote:
> 
[... ad-hominem exasperation removed ...]
> 
> Were we to implement my proposal there would be two classes of work, 
> or more.  All would be "real" work: the test tasks would be just like 
> the real tasks and would take just as long to process, because there 
> would be no difference in the construction of the task.  To put it 
> another way, in the context of SaH, the test task would use the same 
> type of input file; the only difference would be that the data within 
> would be artificially generated.  In other words, a known signal.
> 
[...]
> 
> Lastly, like redundant computing and validation of duplicate or 
> triplicate processing, this is a cost we should be willing to pay for 
> the increased quality of known results.
> 
> As to waste, what could be more wasteful than finding out that large 
> batches of results are potentially unusable because a series of flaws 
> allowed bad data to pass validation?
> 
> How big is the problem? You may be right and there is no problem.  But 
> just as security by obscurity is not a safe answer, neither is 
> pretending that these potential problems don't exist.
[...]

OK, this is where Paul's apparent wishes (which, note, should be 
expressed as "ideas") and my ideas diverge.


Paul is proposing that special "calibration WUs" be passed through the 
Boinc system end-to-end, for the dual purpose of calibrating the 
performance of the client that processed the WU and of acting as a 
validation check of the entire Boinc WU data path.


My proposal is that we do just the minimum necessary to calibrate host 
performance and credits against a known project lab reference computer, 
using the normal pool of live WUs. The only 'wasted' compute time is 
then that needed to characterise the one (or few) reference computer 
systems in the lab; everything else is compared against them. The 
calibration is propagated hierarchically through the participants' 
hosts, in a similar way to NTP strata or NIST standards. At least a 
small level of WU redundancy is required so that the calibration can 
propagate by comparing hosts that have processed the same WU. The 
coordination for this can be done entirely server-side (as part of the 
validator?).
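To make the propagation idea concrete, here is a minimal sketch of how 
a server-side pass might spread a speed factor outward from the lab 
reference machine through hosts that shared a WU. All names, the data 
layout, and the sample runtimes are hypothetical illustrations, not 
actual Boinc validator code:

```python
# Hypothetical sample data: wu_times[wu_id][host_id] = CPU seconds that
# host spent on its copy of the WU. "ref" is the lab reference machine.
wu_times = {
    "wu1": {"ref": 100.0, "hostA": 200.0},
    "wu2": {"hostA": 150.0, "hostB": 50.0},
}

def propagate_calibration(wu_times, reference="ref"):
    """Assign each host a speed factor relative to the reference machine.

    A host that took t seconds on a WU a calibrated host did in t_cal
    seconds gets factor (t_cal / t) scaled by the calibrated host's own
    factor, so calibration spreads transitively, NTP-stratum style,
    through any chain of hosts that processed the same WUs.
    """
    factor = {reference: 1.0}
    changed = True
    while changed:          # repeat until no new host can be calibrated
        changed = False
        for times in wu_times.values():
            calibrated = [h for h in times if h in factor]
            for c in calibrated:
                for h in times:
                    if h not in factor:
                        factor[h] = factor[c] * times[c] / times[h]
                        changed = True
    return factor
```

With the sample data above, hostA took twice as long as the reference on 
wu1, so it gets factor 0.5; hostB, three times faster than hostA on wu2, 
inherits 0.5 * 3 = 1.5. A real validator would average over many shared 
WUs rather than trust a single comparison.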


Sorry Paul, but the end-to-end 'validation' is likely so specific to 
each project that it is up to each project to test and prove the 
correctness of its Boinc-generated results.

For example, in the case of s...@h: inject a test signal into the Arecibo 
data recorder? Or note when Arecibo scans across a distant space probe?

Meanwhile, Boinc includes various checks and tests to overcome the 
problem of using untrusted and unreliable hosts.


Keep with the "ideas"!

Regards,
Martin

-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
