Look at what NTP does to select "good" clocks when it has many to choose from. It does not simply average them.
It looks at the noise in each one and then sees which clocks have overlapping error bars. It assumes that all good clocks have the same time, within the limits of their precision. From those good clocks there is a second-level weeding-out process, and finally it does a weighted average of the remainder, where (I think) those with less jitter get more weight.

It would not be impossible to do this with 10 MHz oscillators. Certainly a simple average is not a good idea, as one broken unit can pull the entire average way off. You'd have to check reasonableness first and eliminate the outliers. Today you might simply digitize the signals and figure out which were best in software.

In short, the output is "ensemble time" (not "average time"), but there is a careful selection of who is allowed to be a member of the ensemble.

I used a joke last week to explain to a class why we don't use averages with no other qualifications. The joke is: "Bill Gates walks into a bar... What's the average net worth of everyone in the bar? Maybe $250 million." My point was that it is hard to describe a population that is not Gaussian distributed. "Stuck" and jumping crystals are not Gaussian. You'd have to detect the misbehaving devices.

> David Allan had this interesting concept to the effect that if
> you had a sufficient number of wristwatches (maybe 1000) and you
> averaged them together you could somehow get a quality clock, or
> at least 31.6 times better. Kind of like the notion of 1000
> monkeys with 1000 typewriters...
>
> --
> Chris Albertson
> Redondo Beach, California

_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
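The select-then-combine idea above can be sketched in a few lines of Python. This is a toy illustration, not NTP's actual intersection/cluster algorithm: a clock is kept only if its error bar overlaps with a majority of the others, and the survivors are combined with inverse-variance weights so the lower-jitter units count for more. All the numbers are made up for illustration.

```python
def overlaps(a, b):
    """True if the error bars [offset +/- err] of two clocks intersect."""
    return a[0] - a[1] <= b[0] + b[1] and b[0] - b[1] <= a[0] + a[1]

def ensemble_time(clocks):
    """clocks: list of (offset_seconds, error_bar_seconds) tuples.

    Returns (ensemble_offset, list_of_surviving_clocks).
    """
    n = len(clocks)
    # Selection: keep a clock only if it agrees (overlapping error
    # bars) with a majority of the ensemble. A stuck or jumping
    # unit fails this test and never touches the average.
    good = [c for c in clocks
            if sum(overlaps(c, other) for other in clocks) > n // 2]
    # Combination: inverse-variance weighted average, so the
    # quieter clocks get more say.
    weights = [1.0 / (err * err) for _, err in good]
    total = sum(weights)
    est = sum(w * off for w, (off, _) in zip(weights, good)) / total
    return est, good

readings = [(12e-6, 5e-6),    # three healthy clocks, ~12 us offsets
            (10e-6, 4e-6),
            (14e-6, 6e-6),
            (0.25, 5e-6)]     # one broken unit, 250 ms off

est, members = ensemble_time(readings)
```

With these numbers a plain average would be dragged to roughly 62 ms by the broken unit, while the ensemble excludes it and lands near the 10-14 microsecond cluster. A real implementation would also track each clock's history so that a unit that jumps gets demoted rather than instantly trusted again.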
