I compared an Austron 1250A to an FTS 1050A, both 5 MHz quartz frequency standards. I beat each against a common offset oscillator using a Riley DMTD setup to produce a 5 Hz beat note, which provides a 1e6 (5 MHz / 5 Hz) increase in time resolution. There was about a 1.15e-10 frequency difference between the two oscillators (two weeks on, it's about 5.6e-11).
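For anyone who wants to check my arithmetic, here it is as a few lines of Python. The 0.11 s counter reading is just an example value, not a measurement:

F_NOM = 5e6              # Hz, oscillator frequency
F_BEAT = 5.0             # Hz, beat note after mixing
MAG = F_NOM / F_BEAT     # 1e6 time-resolution magnification

# A time interval measured between the two beat-note zero crossings
# corresponds to 1/MAG of that interval at the 5 MHz level:
dt_counter = 0.11        # s, example counter reading
x = dt_counter / MAG     # 110 ns of actual time difference

# Fractional frequency is the drift rate of the reading divided by MAG:
# a beat-note drift of 115 us per second gives y = 115e-6 / 1e6 = 1.15e-10.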

The two oscillators measure about 6e-13 ADEV from tau = 8 to 100 seconds, so long as the time difference at the ZCDs stays between 0.11 and 0.2 s.
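The ADEV figure comes straight out of TimeLab; my understanding is that its overlapping estimate is equivalent to something like this sketch (not TimeLab's actual code):

def overlapping_adev(x, tau0, m):
    # x: phase (time-error) samples in seconds, spaced tau0 apart
    # m: averaging factor, so tau = m * tau0
    n = len(x)
    terms = n - 2 * m
    acc = 0.0
    for i in range(terms):
        d = x[i + 2*m] - 2.0 * x[i + m] + x[i]
        acc += d * d
    avar = acc / (2.0 * (m * tau0) ** 2 * terms)
    return avar ** 0.5

# e.g. overlapping_adev(x, tau0=0.2, m=40) gives tau = 8 s at 5 S/s.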

When the time interval, as measured by the counter, drops below 0.11 s, two distinct slopes become apparent in TimeLab's "Original Phase" window: one from 0.2 s down to 0.11 s and the other from 0.11 s down to 0 s. As the phase wraps, the cycle repeats.

I have thought about this a fair bit, and the only thing that makes much sense is that with small phase differences I get 5 samples per second, but as the phase difference lengthens the TIC can no longer deliver 5 sps: once the measured interval plus the counter's re-arm (dead) time exceeds the 0.2 s beat period, every other start pulse is missed and the rate has to drop to 2.5 samples per second. The 0.11 s break point would then imply roughly 0.09 s of dead time. If I'm not mistaken, I also see a similar, but less pronounced, effect using PICTIC IIs, as shown in Riley's article. TimeLab sets the sampling interval by monitoring the initial input from the TIC, and I assume a change in sampling rate will affect the slope. Does this make any sense, or am I barking up the wrong tree here?
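To sanity-check the idea, I put it into a toy simulation. The 0.09 s re-arm time is just an assumption picked to match the 0.11 s break point, not anything from a data sheet:

BEAT_PERIOD = 0.2   # s, 5 Hz beat note
REARM = 0.09        # s, assumed counter dead/re-arm time (hypothetical)

def effective_rate(interval, n_beats=200):
    # Start pulses arrive every 0.2 s; the counter is busy for the
    # measured interval plus its re-arm time, and any start pulse that
    # lands inside that window is simply missed.
    samples = 0
    busy_until = 0.0
    for k in range(n_beats):
        start = k * BEAT_PERIOD
        if start >= busy_until:           # armed: take a reading
            samples += 1
            busy_until = start + interval + REARM
    return samples / (n_beats * BEAT_PERIOD)  # samples per second

for iv in (0.05, 0.10, 0.12, 0.15, 0.19):
    print("interval %.2f s -> %.2f S/s" % (iv, effective_rate(iv)))

Intervals up to about 0.11 s come out at 5.0 S/s; anything longer drops to 2.5 S/s, which matches where the slope change shows up.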

Anyone using time tagging instead of TICs?  Any serious pitfalls there?
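For what it's worth, my mental model of the time-tagging approach is the sketch below (the names are mine, and the 5 Hz beat is carried over from the setup above). Since every zero crossing gets a timestamp against the counter's own timebase, there is no start/stop re-arm to halve the rate:

BEAT_PERIOD = 0.2   # s, 5 Hz beat note
MAG = 1e6           # DMTD magnification, 5 MHz / 5 Hz

def tags_to_phase(tags):
    # tags: timestamps (s) of successive beat-note zero crossings.
    # Each tag's deviation from its ideal 0.2 s slot, scaled down by
    # the DMTD magnification, is the oscillators' time difference.
    return [(t - k * BEAT_PERIOD) / MAG for k, t in enumerate(tags)]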

Thanks,
Bob Darby