Bruce Griffiths said the following on 12/01/2007 03:36 PM:

>>> It takes a 100 second average of each PPS source
>>> sequentially; I end up with six log files, each with a tau of
>>> 600 seconds: CS1-GPS, CS2-GPS, RB1-GPS, CS1-CS2, CS1-RB1, and CS2-RB1.
> This technique creates the issue of how to correct for the effects of
> deadtime.

I've pondered this question, and I don't think there is a deadtime issue. I log one phase comparison every 600 seconds; it just happens that the one comparison is the average of 100 individual readings taken once per second. I'm not sure how that's any different (in terms of deadtime) from simply logging a single PPS-to-PPS comparison every 600 seconds. The actual time between comparisons is equal to the tau; it's not as though I'm getting one reading every two tau (as might be the case if reading a nominal 1 second phase difference every second). Even in generating the average, there's no dead time if the time interval is small in comparison to the repetition rate -- reading a phase of nominally 10 microseconds once per second leaves plenty of time for the counter to catch its breath.

I use the 100 second average for two reasons. First, it yields somewhat better resolution than the 2 ns native resolution of the HP-5334A TIC, which may or may not be meaningful in the ADEV calculation given the PPS noise. Second, it smooths the peak-to-peak noise and makes for smoother phase plots, particularly when one of the sources is GPS. (I realize that 100 seconds isn't long enough to integrate out the GPS noise; when I do my long-term plots, I further reduce the data to tau = 1 hour and even tau = 1 day, which is more meaningful.)

Am I missing something?

John

_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
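[Editor's note: the average-then-log scheme described above can be sketched numerically. This is a minimal illustration only, assuming white phase noise at a made-up level; the `overlapping_adev` helper, the 1 Hz reading rate, and all noise parameters are hypothetical and are not the author's actual counter data or processing.]

```python
import numpy as np

def overlapping_adev(phase, tau0, m):
    """Overlapping Allan deviation from phase samples spaced tau0 apart,
    evaluated at averaging factor m (i.e. at tau = m * tau0)."""
    x = np.asarray(phase, dtype=float)
    tau = m * tau0
    # Second differences of the phase record at stride m.
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d2 ** 2)) / (np.sqrt(2) * tau)

rng = np.random.default_rng(0)

# Hypothetical 1 Hz phase record: white phase noise, ~10 ns RMS, 6 hours long.
raw = rng.normal(0.0, 10e-9, 6 * 3600)

# One logged point every 600 s: the mean of the first 100 readings
# in each 600 s interval, mirroring the scheme described in the post.
logged = np.array([raw[i:i + 100].mean()
                   for i in range(0, len(raw) - 600, 600)])

# ADEV of the logged series at its native tau of 600 s.
print(overlapping_adev(logged, tau0=600, m=1))
```

The point of the sketch is that the logged series has exactly one sample per tau, so the second-difference ADEV estimator applies directly; the 100-point pre-average only reduces the per-sample measurement noise, it does not open a gap between successive comparisons.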
