>> Every couple of years, you could set up a new reference machine
>> running parallel with the old reference machine. These two machines
>> could be dialed in so that the credit on some reference tasks was
>> made to be identical. Then the new reference machine can run solo.
>> Since this machine would not have to be an extremely high powered
>> server, it would be easier to get it donated.
>
> If we instead had a calibrated system as I have suggested, there is no
> need for reference machines at all ... the new work would be issued
> and the network would establish the parameters.
You propose to replace the overhead of one machine every few years (and if we calibrate with real tasks, even that is not overhead) with overhead on every machine every week. For what reason? It would bring literally zero increase in our confidence in result validity beyond what we have right now.

Proving the validator - a single program running on one or a few servers and accepting data - can and should be done in a lab; there is absolutely no need to put that workload on participants' PCs. And calibrating the participants' PCs themselves brings NO ADDITIONAL security, simply because random errors are still possible. You cannot replace redundancy with such calibration. So calibration would run alongside redundancy, bringing no new security while being pure overhead. For what reason???

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and (near bottom of page) enter your email address.
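To make the redundancy-versus-calibration point concrete, here is a minimal, hypothetical Python sketch (the names `run_task` and `validate_by_redundancy` and the error model are illustrative, not BOINC code): per-host calibration only rescales credit, so a single calibrated host still returns a wrong answer with some probability and nothing detects it, whereas a quorum of independently computed replicas does detect it, because two independent random errors almost never agree.

```python
import random

def run_task(task, error_rate, rng):
    """Toy model of a volunteer host computing a task; with probability
    `error_rate` a silent random error corrupts the result."""
    correct = task * 2  # the "true" answer for this toy workload
    if rng.random() < error_rate:
        return correct + 1  # undetected random error on this host
    return correct

def validate_by_redundancy(task, n_replicas, error_rate, rng):
    """Quorum validation in the spirit of BOINC's redundant computing:
    accept a result only when at least two replicas agree."""
    results = [run_task(task, error_rate, rng) for _ in range(n_replicas)]
    for r in results:
        if results.count(r) >= 2:
            return r  # canonical result
    return None  # no quorum; the server would issue further replicas

rng = random.Random(1)
# Calibration cannot catch the error_rate > 0 case on a lone host;
# redundancy catches it because a corrupted replica fails to match.
print(validate_by_redundancy(21, 2, 0.0, rng))  # error-free hosts agree: 42
```

The sketch also shows the one assumption redundancy rests on: errors on different hosts must be independent, which is exactly what per-host calibration does not and cannot guarantee.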
