Re: [time-nuts] ADEV measurement question
Tom, thanks for sharing this! This was exactly what I was looking for.

Regards,

Matthias

Am 23.08.2015 um 19:19 schrieb Tom Van Baak:
>> To learn more, I think the best way would be to put the counter into its fast binary mode and acquire 1k time interval samples per second. That would give me loads of data to play with and it would be easy to try out how different averaging schemes affect the result.
>
> Matthias,
>
> See: http://leapsecond.com/pages/adev-avg/ for the results of a similar ADEV averaging experiment. I can send you the raw data if you want.
>
> What helped me understand the issue was to think in terms of frequency *in*stability instead of frequency stability. We often use the words interchangeably. But imagine that your goal is to measure oscillator noise, its instability, not its stability. With this new mental image, the last thing you would do is average. By its very nature, averaging removes highs and lows and smooths things out. If your goal is to measure instability, averaging removes the very thing you're trying to measure. The plots on the web page above show this dramatically. You can make an oscillator look as good as you want if you average enough.
>
> /tvb

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts and follow the instructions there.
Re: [time-nuts] ADEV measurement question
> To learn more, I think the best way would be to put the counter into its fast binary mode and acquire 1k time interval samples per second. That would give me loads of data to play with and it would be easy to try out how different averaging schemes affect the result.

Matthias,

See: http://leapsecond.com/pages/adev-avg/ for the results of a similar ADEV averaging experiment. I can send you the raw data if you want.

What helped me understand the issue was to think in terms of frequency *in*stability instead of frequency stability. We often use the words interchangeably. But imagine that your goal is to measure oscillator noise, its instability, not its stability. With this new mental image, the last thing you would do is average. By its very nature, averaging removes highs and lows and smooths things out. If your goal is to measure instability, averaging removes the very thing you're trying to measure. The plots on the web page above show this dramatically. You can make an oscillator look as good as you want if you average enough.

/tvb
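Tom's point, that averaging removes the very thing ADEV is supposed to measure, is easy to reproduce. Here is a minimal numpy sketch; the 20 ps noise level, 1 kHz sample rate, and 1000-sample average mirror the SR-620 setup discussed in this thread, and the `adev` helper is a plain overlapping estimator written for illustration, not any particular package's API:

```python
import numpy as np

def adev(x, m, tau0):
    """Overlapping Allan deviation of phase data x (seconds) at tau = m * tau0."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences of phase
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

rng = np.random.default_rng(42)
tau0 = 1e-3                                    # 1 kHz time-interval samples
x = 20e-12 * rng.standard_normal(100_000)      # ~20 ps white phase noise floor

# ADEV at tau = 1 s computed from the raw 1 ms samples
raw = adev(x, 1000, tau0)

# Same tau, but pre-average blocks of 1000 samples first (the counter's
# "average mode"): the white-noise contribution shrinks by ~sqrt(1000),
# so the plot looks better without the oscillator being any better.
x_avg = x.reshape(-1, 1000).mean(axis=1)
smoothed = adev(x_avg, 1, 1.0)

print(f"raw: {raw:.2e}  averaged: {smoothed:.2e}")
```

With a real oscillator underneath, the slow noise survives the averaging; only the white measurement noise is suppressed, which is exactly why the averaged curve must not be labelled plain ADEV.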
Re: [time-nuts] ADEV measurement question
Matthias,

On 08/20/2015 09:55 PM, Matthias Jelen wrote:
> Dear John & Magnus,
>
> thank you very much for your detailed explanation. I realize that the averaging topic is much more complex than I thought - it certainly gives me something to think about :-) I never thought in terms of noise bandwidth in this application, thanks for putting me on this track.

You are welcome. Your question was a good opportunity to explain.

> It seems that the simplest and safest way to get meaningful results is to hook up two mixers and a handful of opamps and comparators.

Which isn't that easy. You will reduce the slew rate of your signal in the mix-down process, and that converts white noise into white phase noise, which reduces the benefit the lower frequency achieves. To overcome this you need to gain yourself out of the situation. That works, but it is a bit messier than just tossing in a mixer or two.

> To learn more, I think the best way would be to put the counter into its fast binary mode and acquire 1k time interval samples per second. That would give me loads of data to play with and it would be easy to try out how different averaging schemes affect the result.

This is actually a very good idea. It won't do much for ADEV other than confidence intervals, but it is good for MDEV in lowering the front-end noise to be able to measure things.

> I'll have to read and think some more :-)

Experiment, analyze, test theories etc. is how we all learn. :)

Cheers,
Magnus

> Cheers,
>
> Matthias
>
> Am 19.08.2015 um 21:52 schrieb Magnus Danielson:
> [...]
Re: [time-nuts] ADEV measurement question
Hi

The next layer to this onion is that the low(er) frequency signals out of the mixer have slow(er) edges. There has been a lot of discussion on the list in the past about Dual Mixer Time Difference (DMTD) setups. The fast answer is in a paper by a gentleman by the name of Collins on how to do the limiters in a fashion best optimized to get around the issue. There is some debate on how old the technique actually is, but his approach is pretty good.

Bob

> On Aug 20, 2015, at 3:55 PM, Matthias Jelen wrote:
> [...]
Re: [time-nuts] ADEV measurement question
Dear John & Magnus,

thank you very much for your detailed explanation. I realize that the averaging topic is much more complex than I thought - it certainly gives me something to think about :-) I never thought in terms of noise bandwidth in this application, thanks for putting me on this track.

It seems that the simplest and safest way to get meaningful results is to hook up two mixers and a handful of opamps and comparators.

To learn more, I think the best way would be to put the counter into its fast binary mode and acquire 1k time interval samples per second. That would give me loads of data to play with and it would be easy to try out how different averaging schemes affect the result.

I'll have to read and think some more :-)

Cheers,

Matthias

Am 19.08.2015 um 21:52 schrieb Magnus Danielson:
> [...]
Re: [time-nuts] ADEV measurement question
Dear Matthias,

On 08/19/2015 06:40 PM, Matthias Jelen wrote:
> Hello,
>
> I've got a question concerning ADEV measurements.
> [...]

OK, averaging or filtering of data before ADEV processing is tricky, as it filters the data. Whenever you do that, you actually convert your measurement from an ADEV measure into something else. If you do proper post-processing, this something else can have known properties, and thus we can relate the amplitude of the curve to the amplitude of various noise sources, as the filtering will cause biasing from the ADEV properties.

The reason you get better results is that the ADEV response to white noise depends on the measurement system bandwidth (see the Allan deviation Wikipedia page), and by averaging you do reduce the bandwidth.

Sometimes when you do this, you lose the gain as you increase the tau, since the dominant frequency will lower and move more and more into the pass-band of the fixed-bandwidth filter you created. What you see is that the curve flattens out up to the length of the average before coming down as if there was no filtering, so you have only achieved a gain (in a skewed value) for very short taus, but no gain at all for longer taus - so no real gain.

This was realized in 1980-1981, and in 1981 an article was published in which the authors realized that they could change the bandwidth alongside the change of tau, so that the gain remains. This became the modified Allan deviation (MDEV), and it was inspired by the methods for improving frequency measures for lasers as presented by J.J. Snyder in 1980 and 1981. Snyder was doing what you propose, averaging of blocks, and then extended this in software; this became a direct inspiration for the MDEV development, which does a pre-averaging over tau before processing through ADEV, and this combination is the MDEV.

Doing TIC averaging and then continuing the processing with MDEV processing should produce a proper MDEV curve, unless my tired brain is missing some details. If you then analyze it as MDEV (rather than ADEV), then you use the values properly. MDEV has the benefit that white phase noise drops as tau^-1.5 rather than the ADEV tau^-1, and starting with the SR-620 means you hit actual measurement noise at fairly low taus. The averaging makes this trip from a tau0 of 1 ms in your setup.

So you can go down this route, but you need to be careful to ensure that you have done the processing correctly enough that the results can be interpreted properly. Oh, and as you average, phase-unwrapping becomes "interesting". :)

Cheers,
Magnus
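Magnus's description of MDEV as "pre-averaging over tau, then ADEV" can be sketched directly. The following numpy illustration uses the textbook overlapping forms of both estimators (written here for illustration, not taken from any specific tool) and pure white phase noise to show the tau^-1 versus tau^-1.5 slopes he mentions:

```python
import numpy as np

def adev(x, m, tau0):
    """Overlapping ADEV of phase data x at tau = m * tau0."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

def mdev(x, m, tau0):
    """Modified ADEV: average phase over m samples first, then take the
    same second difference - so the bandwidth scales along with tau."""
    c = np.cumsum(np.concatenate(([0.0], x)))
    a = (c[m:] - c[:-m]) / m                   # moving average of m samples
    d = a[2 * m:] - 2 * a[m:-m] + a[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

rng = np.random.default_rng(1)
tau0 = 1e-3
x = 1e-11 * rng.standard_normal(200_000)       # pure white phase noise

# Going from m=1 to m=10: ADEV drops ~10x (tau^-1),
# MDEV drops ~31.6x (tau^-1.5)
adev_ratio = adev(x, 1, tau0) / adev(x, 10, tau0)
mdev_ratio = mdev(x, 1, tau0) / mdev(x, 10, tau0)
print(f"ADEV ratio {adev_ratio:.1f}, MDEV ratio {mdev_ratio:.1f}")
```

At m=1 the two estimators coincide; the extra half power of tau that MDEV gains on white phase noise comes entirely from the pre-averaging step.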
Re: [time-nuts] ADEV measurement question
I think the simplest way to explain the evils of TI averaging is that white noise doesn't alias in a conventional sense. If a value is perfectly random, then it doesn't matter how you sample it. Your sampling bandwidth -- and nothing else -- determines how much energy you get. You can legitimately change that bandwidth after the fact by resampling or averaging the data. (Another way to say this is that components of the white noise spectrum above the Nyquist frequency don't have a net effect on the observed spectrum.) But this is only true for white noise; with any other variety of signal or noise energy, once it's sampled, it's there for good.

So, postprocessing a sampled stream with averaging has the effect of reducing the white-noise component of the data to a greater extent than the rest. This is why you tend to see 'better' results from an MDEV plot than an ADEV plot. MDEV has some built-in averaging properties, while ADEV does not. Energy in a given part of the spectrum will influence an ADEV plot to a greater extent than it will an MDEV plot. ADEV isn't particularly frequency-selective. If you've ever seen 1-pps or 50/60 Hz interference in an ADEV plot, you've noticed that its influence can corrupt the plot for the next decade or two.

So if you must use your counter's averaging feature to get good ADEV plots, it's important to carry out that averaging at a very small fraction of the tau-zero interval. The more 1/f^n noise is present in your measurement, the more important that becomes. Otherwise you end up with a transfer function that isn't really ADEV, even though that's what the label on the plot says it is. This gets more complicated with counters that dither their sampling clock and/or apply other filter functions in their averaging process. Some appear to give better ADEV fidelity than others.

If nothing else, I would expect that averaging at sub-t0 intervals would introduce a dead-time effect in the portion of t0 over which the data is not contributing to the average. It is always going to be better to avoid TI averaging whenever possible, and to use great care in interpreting the data.

-- john, KE5FX
Miles Design LLC

> -----Original Message-----
> From: time-nuts [mailto:time-nuts-boun...@febo.com] On Behalf Of Matthias Jelen
> Sent: Wednesday, August 19, 2015 9:40 AM
> To: Discussion of precise time and frequency measurement
> Subject: [time-nuts] ADEV measurement question
>
> [...]
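John's warning that ADEV is not frequency-selective, so a single interfering tone can corrupt a decade or more of the plot, can be checked numerically. A small sketch with made-up illustrative numbers (a 1 ns phase spur with a 10 s period on top of a ~20 ps white floor; the `adev` helper is a plain overlapping estimator):

```python
import numpy as np

def adev(x, m, tau0):
    """Overlapping Allan deviation of phase data x at tau = m * tau0."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

tau0 = 1.0
t = np.arange(20_000) * tau0
rng = np.random.default_rng(7)
clean = 2e-11 * rng.standard_normal(t.size)         # counter-like white floor
spur = clean + 1e-9 * np.sin(2 * np.pi * t / 10.0)  # periodic interference

# At tau = 5 s (half the spur period) the tone dominates the plot completely,
# even though it carries no information about the oscillator.
print(adev(clean, 5, tau0), adev(spur, 5, tau0))
```

The second-difference response to a sinusoid decays only slowly with tau, so the spur's bumps keep perturbing the curve well past its own period, which is the "decade or two" effect John describes.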
[time-nuts] ADEV measurement question
Hello,

I've got a question concerning ADEV measurements.

I'm measuring the 15 MHz output of a KS-24361 with my SR-620 with its internal (Wenzel) OCXO, using Timelab. For the first shot I used the counter's frequency mode with 1 s gate time. ADEV at tau=1s turned out to be around 2E-11, which fits the 20 ps single-shot resolution of the SR-620 nicely.

To overcome this limitation without setting up a DMTD system, I used the counter as a TIC, feeding 1 kHz (derived from the counter's reference) to the start channel and the 15 MHz to the stop channel, and put the counter into average mode / 1k samples. This gives me one averaged result per second.

The idea was that this shouldn't change the measurement itself, because as in frequency mode with 1 s gate time I get the averaged value over one second, but I expected trigger noise etc. to be averaged out to a certain extent. I have to watch out for phase wraps, but as the two frequencies are quite equal, this is not a big issue here.

As expected, ADEV at tau=1s got much better; it is now in the 4E-12 area, which sounds reasonable.

What makes me wonder is the fact that results are significantly better now at longer taus (10..100 s) also, despite the fact that in frequency mode these results were also well above the noise floor (2E-12 @ 10 s and so on...).

So, is it a good idea to use this kind of averaging, or am I overlooking something which makes the numbers look better than they really are? I'm pretty sure I am not the first one to try this...

I'm looking forward to your comments.

Best regards,

Matthias
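A note on the phase wraps mentioned above: with 15 MHz on the stop channel, the TIC readings live modulo one 15 MHz period (~66.7 ns), and as the phase drifts they jump by a full period. A hedged sketch of one way to undo the wraps in post-processing, assuming the phase moves by less than half a period between samples (`unwrap_tic` is an illustrative helper, not a Timelab or SR-620 function):

```python
import numpy as np

PERIOD = 1.0 / 15e6          # stop-channel period, ~66.7 ns

def unwrap_tic(readings):
    """Undo modulo-period wraps in a series of time-interval readings."""
    out = np.asarray(readings, dtype=float).copy()
    jumps = np.diff(out)
    # any sample-to-sample jump bigger than half a period is a wrap,
    # not real phase drift, so subtract the nearest whole period
    correction = -PERIOD * np.round(jumps / PERIOD)
    out[1:] += np.cumsum(correction)
    return out

# drifting phase crossing several wrap boundaries
true_phase = np.linspace(0.0, 5 * PERIOD, 1000)
wrapped = true_phase % PERIOD
recovered = unwrap_tic(wrapped)
print(np.allclose(recovered, true_phase))
```

Note that the counter's internal 1k-sample averaging happens before any such correction, so a wrap landing inside an averaging window corrupts that second's mean; this is part of why unwrapping under averaging gets "interesting".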