Yes, this gets measured all the time, but not this way. In the usual case the PPS causes the nanosecond clock to be captured and logged. The clock has good enough stability for this that it is not a major error source.
If the system were to try to generate a PPS, it would have to be based on the nanosecond clock or on a hardware countdown register driven by it, and I just cannot see how a countdown-timer interrupt can have less jitter than a DCD-line interrupt. A few people have gotten around this by moving the computer's timing clock off the CPU chip; then you can use hardware latches and the like that have very predictable timing. I think this is the only way to improve the current system. You have to remember that there is a large and active community of researchers who have been working on this for just over 30 years now, and in the latest development version they are talking about nanosecond-level timekeeping.

The answer to the measurement question is "a few microseconds." In fact, if you need to measure a time interval at that level of accuracy, all you need is a standard computer with some serial ports. There is an interesting project to monitor the AC mains line frequency, and the sensor is just an AC transformer connected to the DCD pin of a serial port.

On Sat, Oct 26, 2013 at 2:47 PM, Hal Murray <[email protected]> wrote:

> > You have to have your time-nut hat on before that makes a difference.
>
> Has anybody measured it?  It should be easy to hack the kernel PPS interrupt
> routine to flap a printer port signal and measure the delay between the PPS
> signal and printer port.

--
Chris Albertson
Redondo Beach, California

_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
