Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Magnus Danielson
Hi,

On 02/26/2018 08:42 PM, Gerhard Hoffmann wrote:
> Am 26.02.2018 um 20:20 schrieb Tom Van Baak:
>> Fun fact -- there's a wide spur at ~2 Hz on the 5065A phase noise
>> plot. What do you think that is? On a hunch I opened the front panel
>> and reset the blinking amber battery alarm lamp, and voila, that noise
>> went away. Makes sense when you think of the power variations
>> associated with a blinking incandescent lamp.
> 
> There was a Tektronix sampler that had a few ps sampling jitter to the tune
> of a blinking LED on the mainframe  :-)

I was just about to comment on that. The Tek 11803 / CSA803C TDR module
has a blinking LED for the "HOT" TDR pulse. However, it skews the timing, so
a later firmware disabled the blinking.

Sad to say, I don't have that FW on my CSAs.

Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Chris Caudle
On Mon, February 26, 2018 2:29 pm, Tom Van Baak wrote:
> BTW, a trick for blinking LEDs -- use two of them out of phase: one that
> the user sees on the front panel and one that is blacked out or hidden
> inside. A flip-flop (Q and /Q) or even a set of inverters is all you need.
> The current draw thus remains constant in spite of the blinking.

You could also use a long-tail diff-pair, with the LED in the collector
circuit of just one side of the pair.  Only costs one extra transistor and
a few resistors and then the current draw is (close to) constant whether
the LED is on or off.
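The constant-supply-current idea behind both tricks (complementary LEDs, and a current-steering diff pair) can be captured in a toy model; the current values here are arbitrary assumptions, not from any real circuit:

```python
# Toy model of supply-current draw for the blink schemes discussed above:
# (a) a single blinking LED, (b) two complementary LEDs (flip-flop Q and /Q),
# (c) a long-tailed diff pair steering a fixed tail current into one LED.

I_LED = 0.010  # amperes drawn by one lit LED (assumed value)

def single_led(phase):          # phase: True = LED on
    return I_LED if phase else 0.0

def complementary_leds(phase):  # Q drives the visible LED, /Q the hidden one
    return I_LED                # exactly one of the two is always lit

def diff_pair(phase, i_tail=0.010):
    # The tail current source fixes the total; the LED side merely steers
    # it, so the supply sees i_tail whether the LED is on or off.
    return i_tail

blink = [True, False] * 4       # a few blink cycles
for scheme in (single_led, complementary_leds, diff_pair):
    draws = [scheme(p) for p in blink]
    print(scheme.__name__, "ripple (A):", max(draws) - min(draws))
```

Only the single-LED scheme shows supply ripple; the other two draw the same current in both blink phases, which is the whole point of the trick.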

-- 
Chris Caudle




Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Tom Van Baak
> Yet another reason to nuke the battery and the A2 board. 

Yes. Then again, the effect is very minor when you look at the PN and ADEV 
plots. It falls into the category of "look how sensitive a TimePod is" more 
than "look how bad a 5065A is". And remember it's just a warning lamp, with a 
toggle switch to reset the blink.

> a PIC-DIV pull compared to the 1 pps section of one of these old beasts (Cs 
> or Rb)? 

Right, a PIC divider is much lower power, but if you're driving a 50R load 
that's still a lot of current. I suspect this is one reason why high-end 
standards use 10 or 20 us wide pulses and not 50% duty cycle square waves for 
their 1PPS outputs. Same power but 10 us is 50,000x less energy than 0.5 s. 
I've stopped using square waves for 1PPS around here.

BTW, a trick for blinking LEDs -- use two of them out of phase: one that the 
user sees on the front panel and one that is blacked out or hidden inside. A 
flip-flop (Q and /Q) or even a set of inverters is all you need. The current 
draw thus remains constant in spite of the blinking.

/tvb

- Original Message - 
From: "Bob kb8tq" <kb...@n1k.org>
To: "Discussion of precise time and frequency measurement" <time-nuts@febo.com>
Sent: Monday, February 26, 2018 11:51 AM
Subject: Re: [time-nuts] Allan variance by sine-wave fitting


Hi

Yet another reason to nuke the battery and the A2 board. 

It is amazing just how small a signal can mess things up at the levels involved in
a good frequency standard. The old “when in doubt, throw it out” mantra may be
a good one to keep in mind relative to a lot of add-on features…. How much does
a PIC-DIV pull compared to the 1 pps section of one of these old beasts (Cs or 
Rb)? 

Lots to think about. 

Bob

> On Feb 26, 2018, at 2:20 PM, Tom Van Baak <t...@leapsecond.com> wrote:
> 
>> at telling you it was sitting on top of a power transformer. It didn’t 
>> matter a lot
> 
> I did ten runs of various standards that were within a couple of meters of 
> the bench; the TimePod did not move the entire time. Each standard had a 
> different looking PN plot, so I'm pretty sure the 120 Hz spur we see is the 
> 5065A itself, not something in the lab.
> 
> File http://leapsecond.com/tmp/2018b-Ralph-2-pn.png is attached.
> 
> Fun fact -- there's a wide spur at ~2 Hz on the 5065A phase noise plot. What 
> do you think that is? On a hunch I opened the front panel and reset the 
> blinking amber battery alarm lamp, and voila, that noise went away. Makes 
> sense when you think of the power variations associated with a blinking 
> incandescent lamp.
> 
> /tvb
> 
> - Original Message - 
> From: "Bob kb8tq" <kb...@n1k.org>
> To: "Tom Van Baak" <t...@leapsecond.com>; "Discussion of precise time and 
> frequency measurement" <time-nuts@febo.com>
> Sent: Monday, February 26, 2018 7:00 AM
> Subject: Re: [time-nuts] Allan variance by sine-wave fitting
> 
> 
> Hi
> 
> One of the TimePods that I had access to in the past was particularly good
> at telling you it was sitting on top of a power transformer. It didn’t matter a lot
> which instrument the power transformer was in. For some weird reason it
> was a good magnetometer at line frequencies. I never bothered to send it
> back for analysis. Simply moving it onto the bench top (rather than stacked 
> on top of this or that) would take care of the issue.
> 
> As far as I could tell, it was just the one unit that had the issue. None of the
> others in the fleet of TimePods seemed to behave this way. Given that they
> normally are very good at rejecting all sorts of crud and ground loops, it 
> was 
> somewhat odd to see. 
> 
> Bob
> 
>> On Feb 26, 2018, at 7:13 AM, Tom Van Baak <t...@leapsecond.com> wrote:
>> 
>>> BTW: Do you know the cause of the oscillations in the 5065 vs BVA plot?
>> 
>> The ADEV wiggles aren't visible with normal tau 1 s measurements. But since 
>> the TimePod can go down to tau 1 ms, when I first measure a standard I like 
>> to run at that resolution so effects like this show up. Once that's done, 1 
>> ms resolution is overkill.
>> 
>> In this case it appears to be power supply noise. Attached are the ADEV, PN, 
>> and TDEV plots.
>> 
>> The spur at 120 Hz is massive; there's also a bit at 240 Hz; almost nothing 
>> at 60 Hz. When integrated these cause the bumps you see in the ADEV plot. 
>> It's best seen as a bump at ~4 ms in the TDEV plot.
>> 
>> Note the cute little spur at 137 Hz. Not sure what causes the one at ~3630 
>> Hz.
>> 
>> /tvb
>> <5065a-adev.png><5065a-pn.png><5065a-tdev.png>
> 
> <2018b-Ralph-2-pn.png>

Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Bob kb8tq
Hi

Yet another reason to nuke the battery and the A2 board. 

It is amazing just how small a signal can mess things up at the levels involved in
a good frequency standard. The old “when in doubt, throw it out” mantra may be
a good one to keep in mind relative to a lot of add-on features…. How much does
a PIC-DIV pull compared to the 1 pps section of one of these old beasts (Cs or 
Rb)? 

Lots to think about. 

Bob

> On Feb 26, 2018, at 2:20 PM, Tom Van Baak <t...@leapsecond.com> wrote:
> 
>> at telling you it was sitting on top of a power transformer. It didn’t 
>> matter a lot
> 
> I did ten runs of various standards that were within a couple of meters of 
> the bench; the TimePod did not move the entire time. Each standard had a 
> different looking PN plot, so I'm pretty sure the 120 Hz spur we see is the 
> 5065A itself, not something in the lab.
> 
> File http://leapsecond.com/tmp/2018b-Ralph-2-pn.png is attached.
> 
> Fun fact -- there's a wide spur at ~2 Hz on the 5065A phase noise plot. What 
> do you think that is? On a hunch I opened the front panel and reset the 
> blinking amber battery alarm lamp, and voila, that noise went away. Makes 
> sense when you think of the power variations associated with a blinking 
> incandescent lamp.
> 
> /tvb
> 
> - Original Message - 
> From: "Bob kb8tq" <kb...@n1k.org>
> To: "Tom Van Baak" <t...@leapsecond.com>; "Discussion of precise time and 
> frequency measurement" <time-nuts@febo.com>
> Sent: Monday, February 26, 2018 7:00 AM
> Subject: Re: [time-nuts] Allan variance by sine-wave fitting
> 
> 
> Hi
> 
> One of the TimePods that I had access to in the past was particularly good
> at telling you it was sitting on top of a power transformer. It didn’t matter a lot
> which instrument the power transformer was in. For some weird reason it
> was a good magnetometer at line frequencies. I never bothered to send it
> back for analysis. Simply moving it onto the bench top (rather than stacked 
> on top of this or that) would take care of the issue.
> 
> As far as I could tell, it was just the one unit that had the issue. None of the
> others in the fleet of TimePods seemed to behave this way. Given that they
> normally are very good at rejecting all sorts of crud and ground loops, it 
> was 
> somewhat odd to see. 
> 
> Bob
> 
>> On Feb 26, 2018, at 7:13 AM, Tom Van Baak <t...@leapsecond.com> wrote:
>> 
>>> BTW: Do you know the cause of the oscillations in the 5065 vs BVA plot?
>> 
>> The ADEV wiggles aren't visible with normal tau 1 s measurements. But since 
>> the TimePod can go down to tau 1 ms, when I first measure a standard I like 
>> to run at that resolution so effects like this show up. Once that's done, 1 
>> ms resolution is overkill.
>> 
>> In this case it appears to be power supply noise. Attached are the ADEV, PN, 
>> and TDEV plots.
>> 
>> The spur at 120 Hz is massive; there's also a bit at 240 Hz; almost nothing 
>> at 60 Hz. When integrated these cause the bumps you see in the ADEV plot. 
>> It's best seen as a bump at ~4 ms in the TDEV plot.
>> 
>> Note the cute little spur at 137 Hz. Not sure what causes the one at ~3630 
>> Hz.
>> 
>> /tvb
>> <5065a-adev.png><5065a-pn.png><5065a-tdev.png>
> 
> <2018b-Ralph-2-pn.png>



Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Gerhard Hoffmann

Am 26.02.2018 um 20:20 schrieb Tom Van Baak:

Fun fact -- there's a wide spur at ~2 Hz on the 5065A phase noise plot. What do 
you think that is? On a hunch I opened the front panel and reset the blinking 
amber battery alarm lamp, and voila, that noise went away. Makes sense when you 
think of the power variations associated with a blinking incandescent lamp.


There was a Tektronix sampler that had a few ps sampling jitter to the tune
of a blinking LED on the mainframe  :-)

Cheers, Gerhard


Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Tom Van Baak
> at telling you it was sitting on top of a power transformer. It didn’t matter a lot

I did ten runs of various standards that were within a couple of meters of the 
bench; the TimePod did not move the entire time. Each standard had a different 
looking PN plot, so I'm pretty sure the 120 Hz spur we see is the 5065A itself, 
not something in the lab.

File http://leapsecond.com/tmp/2018b-Ralph-2-pn.png is attached.

Fun fact -- there's a wide spur at ~2 Hz on the 5065A phase noise plot. What do 
you think that is? On a hunch I opened the front panel and reset the blinking 
amber battery alarm lamp, and voila, that noise went away. Makes sense when you 
think of the power variations associated with a blinking incandescent lamp.

/tvb

- Original Message - 
From: "Bob kb8tq" <kb...@n1k.org>
To: "Tom Van Baak" <t...@leapsecond.com>; "Discussion of precise time and 
frequency measurement" <time-nuts@febo.com>
Sent: Monday, February 26, 2018 7:00 AM
Subject: Re: [time-nuts] Allan variance by sine-wave fitting


Hi

One of the TimePods that I had access to in the past was particularly good
at telling you it was sitting on top of a power transformer. It didn’t matter a lot
which instrument the power transformer was in. For some weird reason it
was a good magnetometer at line frequencies. I never bothered to send it
back for analysis. Simply moving it onto the bench top (rather than stacked 
on top of this or that) would take care of the issue.

As far as I could tell, it was just the one unit that had the issue. None of the
others in the fleet of TimePods seemed to behave this way. Given that they
normally are very good at rejecting all sorts of crud and ground loops, it was 
somewhat odd to see. 

Bob

> On Feb 26, 2018, at 7:13 AM, Tom Van Baak <t...@leapsecond.com> wrote:
> 
>> BTW: Do you know the cause of the oscillations in the 5065 vs BVA plot?
> 
> The ADEV wiggles aren't visible with normal tau 1 s measurements. But since 
> the TimePod can go down to tau 1 ms, when I first measure a standard I like 
> to run at that resolution so effects like this show up. Once that's done, 1 
> ms resolution is overkill.
> 
> In this case it appears to be power supply noise. Attached are the ADEV, PN, 
> and TDEV plots.
> 
> The spur at 120 Hz is massive; there's also a bit at 240 Hz; almost nothing 
> at 60 Hz. When integrated these cause the bumps you see in the ADEV plot. 
> It's best seen as a bump at ~4 ms in the TDEV plot.
> 
> Note the cute little spur at 137 Hz. Not sure what causes the one at ~3630 Hz.
> 
> /tvb
> <5065a-adev.png><5065a-pn.png><5065a-tdev.png>


Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Bob kb8tq
Hi

One of the TimePods that I had access to in the past was particularly good
at telling you it was sitting on top of a power transformer. It didn’t matter a lot
which instrument the power transformer was in. For some weird reason it
was a good magnetometer at line frequencies. I never bothered to send it
back for analysis. Simply moving it onto the bench top (rather than stacked 
on top of this or that) would take care of the issue.

As far as I could tell, it was just the one unit that had the issue. None of the
others in the fleet of TimePods seemed to behave this way. Given that they
normally are very good at rejecting all sorts of crud and ground loops, it was 
somewhat odd to see. 

Bob

> On Feb 26, 2018, at 7:13 AM, Tom Van Baak  wrote:
> 
>> BTW: Do you know the cause of the oscillations in the 5065 vs BVA plot?
> 
> The ADEV wiggles aren't visible with normal tau 1 s measurements. But since 
> the TimePod can go down to tau 1 ms, when I first measure a standard I like 
> to run at that resolution so effects like this show up. Once that's done, 1 
> ms resolution is overkill.
> 
> In this case it appears to be power supply noise. Attached are the ADEV, PN, 
> and TDEV plots.
> 
> The spur at 120 Hz is massive; there's also a bit at 240 Hz; almost nothing 
> at 60 Hz. When integrated these cause the bumps you see in the ADEV plot. 
> It's best seen as a bump at ~4 ms in the TDEV plot.
> 
> Note the cute little spur at 137 Hz. Not sure what causes the one at ~3630 Hz.
> 
> /tvb
> <5065a-adev.png><5065a-pn.png><5065a-tdev.png>



Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Tom Van Baak
> BTW: Do you know the cause of the oscillations in the 5065 vs BVA plot?

The ADEV wiggles aren't visible with normal tau 1 s measurements. But since the 
TimePod can go down to tau 1 ms, when I first measure a standard I like to run 
at that resolution so effects like this show up. Once that's done, 1 ms 
resolution is overkill.

In this case it appears to be power supply noise. Attached are the ADEV, PN, 
and TDEV plots.

The spur at 120 Hz is massive; there's also a bit at 240 Hz; almost nothing at 
60 Hz. When integrated these cause the bumps you see in the ADEV plot. It's 
best seen as a bump at ~4 ms in the TDEV plot.

Note the cute little spur at 137 Hz. Not sure what causes the one at ~3630 Hz.
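How a discrete spur turns into an ADEV bump at a related tau can be sketched numerically. A minimal numpy example with a synthetic 120 Hz phase spur; the amplitude, sample rate, and record length are illustrative assumptions, not values from the 5065A measurement:

```python
import numpy as np

# Synthetic phase record: a pure 120 Hz phase spur, sampled at tau0 = 0.5 ms.
tau0 = 0.5e-3                               # s, basic sampling interval
t = np.arange(0, 2.0, tau0)                 # 2 s of data
x = 1e-12 * np.sin(2 * np.pi * 120 * t)     # phase (time error) in seconds

def oadev(x, tau0, m):
    """Overlapping Allan deviation at tau = m * tau0."""
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]      # second differences of phase
    return np.sqrt(np.mean(d**2) / (2 * (m * tau0)**2))

ms = np.arange(1, 41)
adev = np.array([oadev(x, tau0, m) for m in ms])
m_peak = ms[np.argmax(adev)]
print("ADEV bump at tau =", m_peak * tau0, "s")
```

For a sinusoidal phase modulation at frequency f, sigma_y(tau) is proportional to sin^2(pi*f*tau)/tau, which peaks a little below 1/(2f) = 4.2 ms for f = 120 Hz, consistent with the bump seen near 4 ms in the plots.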

/tvb

Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Attila Kinali
On Mon, 26 Feb 2018 02:43:04 -0800
"Tom Van Baak"  wrote:

> This note is a follow-up to Ralph Devoe's ADEV posting earlier this month.
> 
> It's a long story but last week I was in the Bay Area with a car full of 
> batteries, BVA and cesium references, hp counters, and a TimePod. I was able 
> to double check Ralph's Digilent-based ADEV device [1] and also to 
> independently measure various frequency standards, including the actual 5065A 
> and 5071A that he used in his experiment.
> 
> For the range of tau where we overlap, his ADEV measurements closely match my 
> ADEV measurements. So that's very good news. His Digilent plot [2] and my 
> TimePod / TimeLab plot are attached.

Nice. Thank you!

BTW: Do you know the cause of the oscillations in the 5065 vs BVA plot?

> I'm sure more will come of his project over time and I hope to make 
> additional measurements using a wider variety of stable / unstable sources, 
> either down there in CA with another clock trip or up here in WA with a clone 
> of his prototype. It would be nice to further validate this wave fitting 
> technique, perhaps uncover and quantify subtle biases that depend on power 
> law noise (or ADC resolution, or sample rate, or sample size, etc.), and also 
> to explore environmental stability of the instrument.

For this, we would need a better understanding of what noise is
mathematically and how it is affected by various components in
the signal path. But our mathematical description is lacking
at best (we can only describe white and 1/f^2 noise properly).


Attila Kinali
-- 
It is upon moral qualities that a society is ultimately founded. All 
the prosperity and technological sophistication in the world is of no 
use without that foundation.
 -- Miss Matheson, The Diamond Age, Neal Stephenson


Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-26 Thread Tom Van Baak
This note is a follow-up to Ralph Devoe's ADEV posting earlier this month.

It's a long story but last week I was in the Bay Area with a car full of 
batteries, BVA and cesium references, hp counters, and a TimePod. I was able to 
double check Ralph's Digilent-based ADEV device [1] and also to independently 
measure various frequency standards, including the actual 5065A and 5071A that 
he used in his experiment.

For the range of tau where we overlap, his ADEV measurements closely match my 
ADEV measurements. So that's very good news. His Digilent plot [2] and my 
TimePod / TimeLab plot are attached.

Note that his Digilent+Python setup isn't currently set up for continuous or 
short-tau measurement intervals -- plus I didn't have my isolation amplifiers 
-- so we didn't try a *concurrent* Digilent and TimePod measurement.

I'm sure more will come of his project over time and I hope to make additional 
measurements using a wider variety of stable / unstable sources, either down 
there in CA with another clock trip or up here in WA with a clone of his 
prototype. It would be nice to further validate this wave fitting technique, 
perhaps uncover and quantify subtle biases that depend on power law noise (or 
ADC resolution, or sample rate, or sample size, etc.), and also to explore 
environmental stability of the instrument.
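The phase-estimation step at the heart of the wave-fitting technique can be sketched as a least-squares fit. This is a simplified three-parameter version with the frequency assumed known; the sample rates, amplitudes, and function names are illustrative assumptions, not Ralph's actual fitter configuration (which is described in the RSI paper):

```python
import numpy as np

def fit_phase(samples, fs, f0):
    """Least-squares fit of a*sin + b*cos + c at known frequency f0.
    Returns the estimated phase (radians) of the record."""
    t = np.arange(len(samples)) / fs
    A = np.column_stack([np.sin(2*np.pi*f0*t),
                         np.cos(2*np.pi*f0*t),
                         np.ones_like(t)])
    a, b, c = np.linalg.lstsq(A, samples, rcond=None)[0]
    # samples ~ R*sin(2*pi*f0*t + phi) + c   with   phi = atan2(b, a)
    return np.arctan2(b, a)

# Demo: a noisy 10 MHz record sampled at 125 MS/s (illustrative rates)
rng = np.random.default_rng(0)
fs, f0, true_phi = 125e6, 10e6, 0.3
t = np.arange(2000) / fs
sig = np.sin(2*np.pi*f0*t + true_phi) + 0.01 * rng.standard_normal(t.size)
phi = fit_phase(sig, fs, f0)
print("estimated phase:", phi)
```

Each record's fitted phase, divided by 2*pi*f0, gives one time-error sample x_i; a series of such samples then feeds the usual ADEV computation.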

I can tell Ralph put a lot of work into this project and I'm pleased he chose 
to share his results with time nuts. I mean, it's not every day that national 
lab or university level projects embrace our little community.

/tvb

[1] http://aip.scitation.org/doi/pdf/10.1063/1.5010140 (PDF)
[2] https://www.febo.com/pipermail/time-nuts/2018-February/108857.html

- Original Message - 
From: "Ralph Devoe" <rgde...@gmail.com>
To: <time-nuts@febo.com>
Sent: Monday, February 12, 2018 1:52 PM
Subject: [time-nuts] Allan variance by sine-wave fitting

> I've been comparing the ADEV produced by the sine-wave fitter (discussed
> last November) with that of a counter-based system. See the enclosed plot.
> We ran the test in Leo Hollberg's lab at Stanford, using a 5071a Cesium
> standard and my own 5065a Rubidium standard. Leo first measured the ADEV
> using a Keysight 53230a counter with a 100 second gate time. We then
> substituted the fitter for the counter and took 10,000 points at 20 second
> intervals. The two systems produce the same ADEV all the way down to the
> 5x10^-14 level where (presumably) temperature and pressure variations make
> the Rb wobble around a bit.
> 
> The revised paper has been published online at Rev. Sci. Instr.


[time-nuts] Allan variance by sine-wave fitting

2018-02-13 Thread Ralph Devoe
Hi Tom,

to reply to your questions:

(1) Yes, I was also surprised at how good the 5065a is. It's a stock unit
that has apparently never been repaired or modified. I'm guessing it was
turned off for a long time (the seller said it was stored in an unheated
container in Alaska). Leo's lab is very quiet; it's in a subbasement of a
new building with good temperature control. The 5065a was installed there
in November and had run a month before the tests. Before that it was on for
more than 6 months in my shop.

(2) The simultaneous test would be much better, but we were worried that
digital noise from the fitter would leak back through the power splitter
into the 53230a and add noise. The ADC chip in the fitter is connected
directly to the input line through a wideband Minicircuits transformer, and
might be noisy. An isolation amplifier would fix this. Still, this is a good
suggestion and we should try it.

The fitter is running continuously and we have about a dozen 200K-second
runs stored. I have one of Corby's barometric chips and a data logger, so
we are trying to correlate the temperature and pressure with the results.

Ralph


Re: [time-nuts] Allan variance by sine-wave fitting

2018-02-12 Thread Tom Van Baak
Hi Ralph,

Nice test. Two comments.

1) Your 5065A is working really well; is it one of Corby's "super" ones? What
sort of environmental controls did you have during the run, if any? How long
had the 5065A been powered up before you ran the test?

2) Is there a reason you didn't or couldn't make simultaneous measurements 
using both counters? That would have allowed an interesting study of the 
residuals. Plots of phase, frequency, spectrum, and ADEV of the residuals 
provide insight into how well the two counters match. This would also probably 
remove the awkward divergence of your two ADEV plots past tau 1 hour.

/tvb

- Original Message - 
From: "Ralph Devoe" <rgde...@gmail.com>
To: <time-nuts@febo.com>
Sent: Monday, February 12, 2018 1:52 PM
Subject: [time-nuts] Allan variance by sine-wave fitting


> I've been comparing the ADEV produced by the sine-wave fitter (discussed
> last November) with that of a counter-based system. See the enclosed plot.
> We ran the test in Leo Hollberg's lab at Stanford, using a 5071a Cesium
> standard and my own 5065a Rubidium standard. Leo first measured the ADEV
> using a Keysight 53230a counter with a 100 second gate time. We then
> substituted the fitter for the counter and took 10,000 points at 20 second
> intervals. The two systems produce the same ADEV all the way down to the
> 5x10^-14 level where (presumably) temperature and pressure variations make
> the Rb wobble around a bit.
> 
> The revised paper has been published online at Rev. Sci. Instr.




[time-nuts] Allan variance by sine-wave fitting

2018-02-12 Thread Ralph Devoe
I've been comparing the ADEV produced by the sine-wave fitter (discussed
last November) with that of a counter-based system. See the enclosed plot.
We ran the test in Leo Hollberg's lab at Stanford, using a 5071a Cesium
standard and my own 5065a Rubidium standard. Leo first measured the ADEV
using a Keysight 53230a counter with a 100 second gate time. We then
substituted the fitter for the counter and took 10,000 points at 20 second
intervals. The two systems produce the same ADEV all the way down to the
5x10^-14 level where (presumably) temperature and pressure variations make
the Rb wobble around a bit.

The revised paper has been published online at Rev. Sci. Instr.


comparison_dec_jan.pdf
Description: Adobe PDF document

Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-30 Thread Mattia Rizzi
Hi,

@All
>True that the models depend on the noise statistics to be iid, that is
ergodic. That's the first assumption, and, while making the math tractable,
is the worst assumption.

I am not talking about intractable math. I'm talking about experimental
hypothesis vs theory.
I said that if you follow the theory strictly, you cannot claim anything about
anything you measure that may contain a flicker process.
Therefore, every experimentalist assumes ergodicity in his measurements.

@Attila
>I do not see how ergocidity has anything to do with a spectrum analyzer.
You are measuring one single instance. Not multiple [...] And about
statistical significance: yes, you will have zero statistical significance
about the behaviour of the population of random variables, but you will
have a statistically significant number of samples of *one* realization of
the random variable. And that's what you work with.

Let me emphasize your sentence: "you will have a statistically significant
number of samples of *one* realization of the random variable."
This sentence is the definition of an ergodic process [
https://en.wikipedia.org/wiki/Ergodic_process].
If it's ergodic, you can characterize the stochastic process using only one
realization.
If it's not, your measurement is worthless, because there's no guarantee
that it contains all the statistical information.


>A flat signal cannot be the realization of a random variable with
a PSD ~ 1/f. At least not for a statistically significant number
of time-samples

Without ergodicity you cannot claim that. You have to assume ergodicity.

>And no, you do not need stationarity either. The spectrum analyzer has
a lower cut of frequency, which is given by its update rate and the
inner workings of the SA.

You need stationarity. Your SA takes several snapshots of the realization,
with an assumption: the characteristics of the stochastic process are not
changing over time. If the stochastic process is stationary, the
autocorrelation function doesn't depend on time. So you are authorized to
take several snapshots, compensate for the observation time (low cut-off
frequency) (*), and be sure that the estimated PSD will converge to
something meaningful.
If it's not stationary, it can change over time, therefore you are not
authorized to use an SA. It's like measuring the transfer function of a
time-varying (LTV) filter: the estimate doesn't converge.

cheers,
Mattia


(*) You can compensate the measured PSD to recover the stochastic-process
PSD, because the SA is an LTI system.
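The role of the stationarity assumption in averaged-periodogram PSD estimation can be illustrated for the simplest stationary case, white noise, where the snapshot average does converge to the known flat PSD. This is a numpy sketch with illustrative parameters; for flicker noise, whether such averaging is meaningful is exactly the point under debate here:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, nseg, seglen = 1.0, 200, 256
sigma2 = 1.0    # white-noise variance; true two-sided PSD = sigma2/fs = 1.0

# Average the periodograms of many "snapshots" of one realization.
# For a stationary process this estimate converges as nseg grows.
psd = np.zeros(seglen)
for _ in range(nseg):
    seg = rng.standard_normal(seglen) * np.sqrt(sigma2)
    psd += np.abs(np.fft.fft(seg))**2 / (seglen * fs)
psd /= nseg

print("mean estimated PSD:", psd.mean())   # close to 1.0
```

For a nonstationary process the per-snapshot statistics drift, and no amount of averaging makes the estimate settle, which is the objection being raised above.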

2017-11-28 20:23 GMT+01:00 djl :

> True that the models depend on the noise statistics to be iid, that is
> ergodic. That's the first assumption, and, while making the math tractable,
> is the worst assumption.
> Don
>
>
> On 2017-11-28 01:52, Mattia Rizzi wrote:
>
>> Hi
>>
>>> This is true. But then the Fourier transformation integrates time from
>>> minus infinity to plus infinity. Which isn't exactly realistic either.
>>
>> That's the theory. I am not arguing that it's realistic.
>>
>>> Ergodicity breaks because the noise process is not stationary.
>>
>> I know but see the following.
>>
>>> Well, any measurement is an estimate.
>>
>> It's not so simple. If you don't assume ergodicity, your spectrum analyzer
>> does not work, because:
>> 1) The spectrum analyzer takes several snapshots of your realization to
>> estimate the PSD. If it's not stationary, the estimate does not converge.
>> 2) It's just a single realization, therefore also a flat signal can be a
>> realization of 1/f flicker noise. Your measurement has *zero* statistical
>> significance.
>>
>>
>>
>> 2017-11-27 23:50 GMT+01:00 Attila Kinali :
>>
>> Hoi Mattia,
>>>
>>> On Mon, 27 Nov 2017 23:04:56 +0100
>>> Mattia Rizzi  wrote:
>>>
>>> > >To make the point a bit more clear. The above means that noise with
>>> > > a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
>>> > > and flicker frequency noise), the noise (aka random variable) is:
>>> > > 1) Not independently distributed
>>> > > 2) Not stationary
>>> > > 3) Not ergodic
>>> >
>>> > I think you got too much in theory. If you follow strictly the
>>> > statistics theory, you get nowhere.
>>> > You can't even talk about 1/f PSD, because Fourier doesn't converge
>>> > over infinite power signals.
>>>
>>> This is true. But then the Fourier transformation integrates time from
>>> minus infinity to plus infinity. Which isn't exactly realistic either.
>>> The power in 1/f noise is actually limited by the age of the universe.
>>> And quite strictly so. The power you have in 1/f is the same for every
>>> decade in frequency (or time) you go. The age of the universe is about
>>> 1e10 years, that's roughly 3e17 seconds, ie 17 decades of possible noise.
>>> If we assume something like a 1k carbon resistor you get something around
>>> of 1e-17W/decade of noise power (guesstimate, not an exact calculation).
>>> That means that resistor, had it been around ever since the universe was
>>> created, then it would have converted 17*1e-17 = 2e-16W of heat into
>>> electrical energy, on average, over the whole lifetime of the universe.
>>> That's not much :-)

Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-28 Thread djl
True that the models depend on the noise statistics to be iid, that is 
ergodic. That's the first assumption, and, while making the math 
tractable, is the worst assumption.

Don

On 2017-11-28 01:52, Mattia Rizzi wrote:

Hi


This is true. But then the Fourier transformation integrates time from

minus infinity to plus infinity. Which isn't exactly realistic either.

That's the theory. I am not arguing that it's realistic.


Ergodicity breaks because the noise process is not stationary.


I know but see the following.


Well, any measurement is an estimate.


It's not so simple. If you don't assume ergodicity, your spectrum 
analyzer

does not work, because:
1) The spectrum analyzer takes several snapshots of your realization to
estimate the PSD. If it's not stationary, the estimate does not 
converge.
2) It's just a single realization, therefore also a flat signal can be 
a
realization of 1/f flicker noise. Your measurement has *zero* 
statistical

significance.



2017-11-27 23:50 GMT+01:00 Attila Kinali :


Hoi Mattia,

On Mon, 27 Nov 2017 23:04:56 +0100
Mattia Rizzi  wrote:

> >To make the point a bit more clear. The above means that noise with
> > a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
> > and flicker frequency noise), the noise (aka random variable) is:
> > 1) Not independently distributed
> > 2) Not stationary
> > 3) Not ergodic
>
> I think you got too much into theory. If you follow strictly the statistics
> theory, you get nowhere.
> You can't even talk about 1/f PSD, because Fourier doesn't converge over
> infinite power signals.

This is true. But then the Fourier transformation integrates time from
minus infinity to plus infinity. Which isn't exactly realistic either.
The power in 1/f noise is actually limited by the age of the universe.
And quite strictly so. The power you have in 1/f is the same for every
decade in frequency (or time) you go. The age of the universe is about
1e10 years, that's roughly 3e17 seconds, ie 17 decades of possible noise.
If we assume something like a 1k carbon resistor you get something around
1e-17 W/decade of noise power (guesstimate, not an exact calculation).
That means that resistor, had it been around ever since the universe was
created, would have converted 17*1e-17 ≈ 2e-16 W of heat into
electrical power, on average, over the whole lifetime of the universe.
That's not much :-)

> In fact, you are not allowed to take a realization, make several fft and
> claim that that's the PSD of the process. But that's what the spectrum
> analyzer does, because it's not a multiverse instrument.

Well, any measurement is an estimate.

> Every experimentalist suppose ergodicity on this kind of noise, otherwise
> you get nowhere.

Err.. no. Even if you assume that the spectrum tops off at some very
low frequency and does not increase anymore, ie that there is a finite
limit to noise power, even then ergodicity is not given.
Ergodicity breaks because the noise process is not stationary.
And assuming so for any kind of 1/f noise would be wrong.


Attila Kinali
--
The bad part of Zurich is where the degenerates
throw DARK chocolate at you.
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.




--
Dr. Don Latham
PO Box 404, Frenchtown, MT, 59834
VOX: 406-626-4304



Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-28 Thread Bob kb8tq
Hi

> On Nov 28, 2017, at 8:12 AM, Attila Kinali  wrote:
> 
> On Tue, 28 Nov 2017 09:52:37 +0100
> Mattia Rizzi  wrote:
> 
>>> Well, any measurement is an estimate.
>> 
>> It's not so simple. If you don't assume ergodicity, your spectrum analyzer
>> does not work, because:
>> 1) The spectrum analyzer takes several snapshots of your realization to
>> estimate the PSD. If it's not stationary, the estimate does not converge.
> 
> I do not see how ergodicity has anything to do with a spectrum analyzer.
> You are measuring one single instance. Not multiple.
> And no, you do not need stationarity either. The spectrum analyzer has
> a lower cutoff frequency, which is given by its update rate and the
> inner workings of the SA. 

Coming back to ADEV: just as an SA has an upper frequency, a lower frequency,
a couple of bandwidths, and persistence, other devices have associated specs.
As these specs change, so do the readings you get. A noise floor measured with
a 100 kHz bandwidth is obviously different from one measured with a 1 kHz
bandwidth. That’s easy to spot looking at noise. If you are looking at a sine
wave tone, it may not be so obvious. Finding the “right” signal to show this
or that with ADEV is not always easy… With some of these techniques, people
have been digging into this and that for decades. 
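
Bob's bandwidth point can be put in numbers: for a white noise floor, the measured power scales with the measurement bandwidth, so the same floor reads 20 dB higher in a 100 kHz bandwidth than in a 1 kHz bandwidth. A minimal sketch (the bandwidth values are just the ones from the text):

```python
import math

# White noise power in a measurement bandwidth B is N0*B, so changing
# the bandwidth shifts the measured floor by 10*log10(B1/B2).
b1, b2 = 100e3, 1e3          # the 100 kHz and 1 kHz bandwidths from the text
delta_db = 10 * math.log10(b1 / b2)
print(delta_db)              # 20 dB higher floor in the wider bandwidth
```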

Bob


> 
>> 2) It's just a single realization, therefore also a flat signal can be a
>> realization of 1/f flicker noise. Your measurement has *zero* statistical
>> significance.
> 
> A flat signal cannot be the realization of a random variable with
> a PSD ~ 1/f. At least not for a statistically significant number
> of time-samples. If it were, then the random variable would not
> have a PSD of 1/f. If you go back to the definition of the PSD of
> a random variable X(ω,t), you will see it is independent of ω. 
> 
> And about statistical significance: yes, you will have zero statistical
> significance about the behaviour of the population of random variables,
> but you will have a statistically significant number of samples of *one*
> realization of the random variable. And that's what you work with.
> 
> 
>   Attila Kinali
> -- 
> It is upon moral qualities that a society is ultimately founded. All 
> the prosperity and technological sophistication in the world is of no 
> use without that foundation.
> -- Miss Matheson, The Diamond Age, Neal Stephenson



Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-28 Thread Attila Kinali
On Tue, 28 Nov 2017 09:52:37 +0100
Mattia Rizzi  wrote:

> >Well, any measurement is an estimate.
> 
> It's not so simple. If you don't assume ergodicity, your spectrum analyzer
> does not work, because:
> 1) The spectrum analyzer takes several snapshots of your realization to
> estimate the PSD. If it's not stationary, the estimate does not converge.

I do not see how ergodicity has anything to do with a spectrum analyzer.
You are measuring one single instance. Not multiple.
And no, you do not need stationarity either. The spectrum analyzer has
a lower cutoff frequency, which is given by its update rate and the
inner workings of the SA. 

> 2) It's just a single realization, therefore also a flat signal can be a
> realization of 1/f flicker noise. Your measurement has *zero* statistical
> significance.

A flat signal cannot be the realization of a random variable with
a PSD ~ 1/f. At least not for a statistically significant number
of time-samples. If it were, then the random variable would not
have a PSD of 1/f. If you go back to the definition of the PSD of
a random variable X(ω,t), you will see it is independent of ω. 

And about statistical significance: yes, you will have zero statistical
significance about the behaviour of the population of random variables,
but you will have a statistically significant number of samples of *one*
realization of the random variable. And that's what you work with.


Attila Kinali
-- 
It is upon moral qualities that a society is ultimately founded. All 
the prosperity and technological sophistication in the world is of no 
use without that foundation.
 -- Miss Matheson, The Diamond Age, Neal Stephenson


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-28 Thread Mattia Rizzi
Hi

>This is true. But then the Fourier transformation integrates time from
minus infinity to plus infinity. Which isn't exactly realistic either.

That's the theory. I am not arguing that it's realistic.

>Ergodicity breaks because the noise process is not stationary.

I know but see the following.

>Well, any measurement is an estimate.

It's not so simple. If you don't assume ergodicity, your spectrum analyzer
does not work, because:
1) The spectrum analyzer takes several snapshots of your realization to
estimate the PSD. If it's not stationary, the estimate does not converge.
2) It's just a single realization, therefore also a flat signal can be a
realization of 1/f flicker noise. Your measurement has *zero* statistical
significance.
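
As a concrete picture of point 1, here is a sketch of what the instrument effectively does: average many periodograms of successive snapshots. For a stationary white input the average converges, with the spread shrinking roughly as 1/sqrt(number of snapshots) — and that convergence is exactly the step that leans on stationarity. All numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
nseg, seglen = 256, 1024

# Averaged periodogram over many snapshots (what a spectrum analyzer
# effectively does when it averages traces):
segs = rng.standard_normal((nseg, seglen))
per = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seglen
avg = per.mean(axis=0)

# For stationary white noise the estimate converges: the relative spread
# across bins shrinks roughly as 1/sqrt(nseg).
rel_spread = avg[1:-1].std() / avg[1:-1].mean()
print(rel_spread)   # ~1/sqrt(256) ≈ 0.06
```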



2017-11-27 23:50 GMT+01:00 Attila Kinali :

> Hoi Mattia,
>
> On Mon, 27 Nov 2017 23:04:56 +0100
> Mattia Rizzi  wrote:
>
> > >To make the point a bit more clear. The above means that noise with
> > > a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
> > > and flicker frequency noise), the noise (aka random variable) is:
> > > 1) Not independently distributed
> > > 2) Not stationary
> > > 3) Not ergodic
> >
> > I think you got too much into theory. If you follow strictly the statistics
> > theory, you get nowhere.
> > You can't even talk about 1/f PSD, because Fourier doesn't converge over
> > infinite power signals.
>
> This is true. But then the Fourier transformation integrates time from
> minus infinity to plus infinity. Which isn't exactly realistic either.
> The power in 1/f noise is actually limited by the age of the universe.
> And quite strictly so. The power you have in 1/f is the same for every
> decade in frequency (or time) you go. The age of the universe is about
> 1e10 years, that's roughly 3e17 seconds, ie 17 decades of possible noise.
> If we assume something like a 1k carbon resistor you get something around
> 1e-17 W/decade of noise power (guesstimate, not an exact calculation).
> That means that resistor, had it been around ever since the universe was
> created, would have converted 17*1e-17 ≈ 2e-16 W of heat into
> electrical power, on average, over the whole lifetime of the universe.
> That's not much :-)
>
> > In fact, you are not allowed to take a realization, make several fft and
> > claim that that's the PSD of the process. But that's what the spectrum
> > analyzer does, because it's not a multiverse instrument.
>
> Well, any measurement is an estimate.
>
> > Every experimentalist suppose ergodicity on this kind of noise, otherwise
> > you get nowhere.
>
> Err.. no. Even if you assume that the spectrum tops off at some very
> low frequency and does not increase anymore, ie that there is a finite
> limit to noise power, even then ergodicity is not given.
> Ergodicity breaks because the noise process is not stationary.
> And assuming so for any kind of 1/f noise would be wrong.
>
>
> Attila Kinali
> --
> The bad part of Zurich is where the degenerates
> throw DARK chocolate at you.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-28 Thread Attila Kinali
On Mon, 27 Nov 2017 23:50:22 +0100
Attila Kinali  wrote:

> > Every experimentalist suppose ergodicity on this kind of noise, otherwise
> > you get nowhere.
> 
> Err.. no. Even if you assume that the spectrum tops off at some very
> low frequency and does not increase anymore, ie that there is a finite
> limit to noise power, even then ergodicity is not given.
> Ergodicity breaks because the noise process is not stationary.
> And assuming so for any kind of 1/f noise would be wrong.

Addendum: the reason this is wrong is that assuming the noise
is ergodic implies it is stationary. But the reason we have to
treat 1/f noise specially is exactly that it is not stationary.
I.e. we would lose the one property in our model that we need to make
the model realistic.
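
A small numerical illustration of the point, using a random walk (PSD ~ 1/f^2) as the clearly nonstationary case — flicker (a=1) sits right at the boundary and is harder to synthesize. For an ergodic process, the time average of a single realization agrees with the ensemble average; for the random walk the time averages scatter wildly across realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_t = 200, 4096

white = rng.standard_normal((n_real, n_t))   # stationary, ergodic
walk = np.cumsum(white, axis=1)              # integrated white noise: nonstationary

# Time average of each individual realization:
ta_white = white.mean(axis=1)
ta_walk = walk.mean(axis=1)

# For white noise the time averages cluster tightly around the ensemble
# mean (0); for the random walk they do not converge to it at all.
print(ta_white.std(), ta_walk.std())   # roughly ~0.016 vs ~37 with these numbers
```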

Attila Kinali

-- 
The bad part of Zurich is where the degenerates
throw DARK chocolate at you.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Magnus Danielson

Hi Jim,

On 11/28/2017 12:03 AM, jimlux wrote:

On 11/27/17 2:45 PM, Magnus Danielson wrote:



There is nothing wrong about attempting new approaches, or even just 
testing an idea to see how it pans out. You should then compare it to a 
number of other approaches, and as you test things, you should analyze 
the same data with different methods. Prototyping that in Python is 
fine, but in order to analyze it, you need to be careful about the 
details.


I could see one paper just doing the measurements and then trying 
different post-processings to see how those vary.
Another paper could then take up on that and attempt an analysis that matches 
the numbers from actual measurements.


So, we might provide tough love, but there is a bit of experience 
behind it, so it should be listened to carefully.





It is tough to come up with good artificial test data - the literature 
on generating "noise samples" is significantly thinner than the 
literature on measuring the noise.


Agree completely. It's really the 1/f flicker noise which is hard.
The white phase and frequency noise forms are trivial in comparison, but 
they also need care in the details.
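
Since generating good flicker test data is the hard part, here is a minimal sketch of one standard approach: shaping white noise with the fractional-integration filter (1 - z^-1)^(-alpha/2), in the style of Kasdin and Walter. The function name and parameters are mine, and this sketch glosses over the subtleties (sequence length, startup transient) that the thread is warning about:

```python
import numpy as np

def flicker_noise(n, alpha=1.0, seed=0):
    """1/f^alpha noise: white Gaussian noise shaped by the impulse
    response of (1 - z^-1)^(-alpha/2) (Kasdin-Walter style)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        # Recursion for the binomial expansion of (1 - z^-1)^(-alpha/2)
        h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
    # Linear convolution via FFT; keep the first n output samples
    m = 2 * n
    y = np.fft.irfft(np.fft.rfft(w, m) * np.fft.rfft(h, m), m)[:n]
    return y

y = flicker_noise(4096)
print(y.shape)   # (4096,)
```

Averaging periodograms over many seeds shows a spectral slope near -1 for alpha=1, which is a useful sanity check for any generator before feeding it to an ADEV estimator.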


Getting noise that is Gaussian enough is sometimes harder than expected. I 
always try to consider it a possible limitation.

Getting noise that is random enough is another issue. What is the length of 
the noise source, and what are its characteristics?


When it comes to measuring actual signals with actual ADCs, there's also 
a number of traps - you can design a nice approach, using the SNR/ENOB 
data from the data sheet, and get seemingly good data.


The challenge is really in coming up with good *tests* of your 
measurement technique that show that it really is giving you what you 
think it is.


A trivial example is this (not a noise measuring problem, per se) -

You need to measure the power of a received signal - if the signal is 
narrow band, and high SNR, then the bandwidth of the measuring system 
(be it a FFT or conventional spectrum analyzer) doesn't make a lot of 
difference - the precise filter shape is non-critical.  The noise power 
that winds up in the measurement bandwidth is small, for instance.


But now, let's say that the signal is a bit wider band or lower SNR or 
you're uncertain of its exact frequency, then the shape of the filter 
starts to make a big difference.


Now, let’s look at a system where there’s some decimation involved - any 
decimation raises the prospect of “out of band signals” aliasing into 
the post decimation passband.  Now, all of a sudden, the filtering 
before the decimator starts to become more important. And the number of 
bits you have to carry starts being more important.


There is a risk of wasting bits too early when decimating. The trouble 
comes when the actual signal is way below the noise and you want to 
bring it out in post-processing; the limited dynamic range will haunt you.

This has been shown many times before.

Also, noise and quantization have an interesting interaction.

It actually took a fair amount of work to *prove* that a system I was 
working on

a) accurately measured the signal (in the presence of other large signals)
b) that there weren’t numerical issues causing the strong signal to show 
up in the low level signal filter bins

c) that the measured noise floor matched the expectation


It's tricky business indeed. The cross-correlation technique can 
potentially measure below its own noise floor. Turns out it was very 
very VERY hard to do that safely. It remains a research topic. At best 
we just barely got to work around the issue. That is indeed a high 
dynamic-range setup.


Cheers,
Magnus


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Jim Lux



-Original Message-
>From: Magnus Danielson <mag...@rubidium.dyndns.org>
>Sent: Nov 27, 2017 2:45 PM
>To: time-nuts@febo.com
>Cc: mag...@rubidium.se
>Subject: Re: [time-nuts] Allan variance by sine-wave fitting
>
>Hi,



>
>There is nothing wrong about attempting new approaches, or even just 
>testing an idea to see how it pans out. You should then compare it to a 
>number of other approaches, and as you test things, you should analyze 
>the same data with different methods. Prototyping that in Python is 
>fine, but in order to analyze it, you need to be careful about the details.
>
>I would consider one just doing the measurements and then try different 
>post-processings and see how those vary.
>Another paper then takes up on that and attempts analysis that matches 
>the numbers from actual measurements.
>
>So, we might provide tough love, but there is a bit of experience behind 
>it, so it should be listened to carefully.
>



It is tough to come up with good artificial test data - the literature on 
generating "noise samples" is significantly thinner than the literature on 
measuring the noise.

When it comes to measuring actual signals with actual ADCs, there's also a 
number of traps - you can design a nice approach, using the SNR/ENOB data from 
the data sheet, and get seemingly good data. 

The challenge is really in coming up with good *tests* of your measurement 
technique that show that it really is giving you what you think it is.

A trivial example is this (not a noise measuring problem, per se) - 

You need to measure the power of a received signal - if the signal is narrow 
band, and high SNR, then the bandwidth of the measuring system (be it a FFT or 
conventional spectrum analyzer) doesn't make a lot of difference - the precise 
filter shape is non-critical.  The noise power that winds up in the measurement 
bandwidth is small, for instance.

But now, let's say that the signal is a bit wider band or lower SNR or you're 
uncertain of its exact frequency, then the shape of the filter starts to make a 
big difference.  

Now, let’s look at a system where there’s some decimation involved - any 
decimation raises the prospect of “out of band signals” (such as the noise) 
aliasing into the post decimation passband.  Now, all of a sudden, the 
filtering before the decimator starts to become more important. And the number 
of bits you have to carry starts being more important.  And some assumptions 
about noise being random and uncorrelated start to fall apart.
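
The decimation hazard described above is easy to demonstrate. In this sketch (all rates and frequencies are made-up illustration values), a 180 Hz tone sampled at 1 kHz is naively decimated by 4 with no anti-alias filter; the new Nyquist is 125 Hz, so the tone folds to |250 - 180| = 70 Hz:

```python
import numpy as np

fs = 1000.0                          # original sample rate, Hz (made-up)
n = 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 180.0 * t)    # a 180 Hz tone

d = 4                                # decimate by 4 -> fs = 250 Hz, Nyquist 125 Hz
y = x[::d]                           # naive decimation, no anti-alias filter

# The out-of-band tone folds into the new passband at 250 - 180 = 70 Hz:
spec = np.abs(np.fft.rfft(y))
f = np.fft.rfftfreq(len(y), d / fs)
peak = f[np.argmax(spec)]
print(peak)                          # 70.0 (the alias of the 180 Hz tone)
```

The same folding applies to wideband noise above the new Nyquist, which is why the filtering ahead of the decimator matters so much.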


It actually took a fair amount of work to *prove* that a recent system I was 
working on
a) accurately measured the signal (in the presence of other large signals)
b) that there weren’t numerical issues causing the strong signal to show up in 
the low level signal filter bins
c) that the measured noise floor matched the expectation


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread jimlux

On 11/27/17 2:45 PM, Magnus Danielson wrote:



There is nothing wrong about attempting new approaches, or even just 
testing an idea to see how it pans out. You should then compare it to a 
number of other approaches, and as you test things, you should analyze 
the same data with different methods. Prototyping that in Python is 
fine, but in order to analyze it, you need to be careful about the details.


I would consider one just doing the measurements and then try different 
post-processings and see how those vary.
Another paper then takes up on that and attempts analysis that matches 
the numbers from actual measurements.


So, we might provide tough love, but there is a bit of experience behind 
it, so it should be listened to carefully.





It is tough to come up with good artificial test data - the literature 
on generating "noise samples" is significantly thinner than the 
literature on measuring the noise.


When it comes to measuring actual signals with actual ADCs, there's also 
a number of traps - you can design a nice approach, using the SNR/ENOB 
data from the data sheet, and get seemingly good data.


The challenge is really in coming up with good *tests* of your 
measurement technique that show that it really is giving you what you 
think it is.


A trivial example is this (not a noise measuring problem, per se) -

You need to measure the power of a received signal - if the signal is 
narrow band, and high SNR, then the bandwidth of the measuring system 
(be it a FFT or conventional spectrum analyzer) doesn't make a lot of 
difference - the precise filter shape is non-critical.  The noise power 
that winds up in the measurement bandwidth is small, for instance.


But now, let's say that the signal is a bit wider band or lower SNR or 
you're uncertain of its exact frequency, then the shape of the filter 
starts to make a big difference.


Now, let’s look at a system where there’s some decimation involved - any 
decimation raises the prospect of “out of band signals” aliasing into 
the post decimation passband.  Now, all of a sudden, the filtering 
before the decimator starts to become more important. And the number of 
bits you have to carry starts being more important.


It actually took a fair amount of work to *prove* that a system I was 
working on

a) accurately measured the signal (in the presence of other large signals)
b) that there weren’t numerical issues causing the strong signal to show 
up in the low level signal filter bins

c) that the measured noise floor matched the expectation


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Attila Kinali
Hoi Mattia,

On Mon, 27 Nov 2017 23:04:56 +0100
Mattia Rizzi  wrote:

> >To make the point a bit more clear. The above means that noise with
> > a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
> > and flicker frequency noise), the noise (aka random variable) is:
> > 1) Not independently distributed
> > 2) Not stationary
> > 3) Not ergodic
> 
> I think you got too much into theory. If you follow strictly the statistics
> theory, you get nowhere.
> You can't even talk about 1/f PSD, because Fourier doesn't converge over
> infinite power signals.

This is true. But then the Fourier transformation integrates time from
minus infinity to plus infinity. Which isn't exactly realistic either.
The power in 1/f noise is actually limited by the age of the universe.
And quite strictly so. The power you have in 1/f is the same for every
decade in frequency (or time) you go. The age of the universe is about
1e10 years, that's roughly 3e17 seconds, ie 17 decades of possible noise.
If we assume something like a 1k carbon resistor you get something around
1e-17 W/decade of noise power (guesstimate, not an exact calculation).
That means that resistor, had it been around ever since the universe was
created, would have converted 17*1e-17 ≈ 2e-16 W of heat into
electrical power, on average, over the whole lifetime of the universe.
That's not much :-)
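
The arithmetic above holds up as an order-of-magnitude estimate (a sketch; the 1e-17 W/decade figure is the guesstimate from the text, and 17*1e-17 is more precisely about 1.7e-16 W):

```python
import math

age_universe_s = 1e10 * 3.156e7       # ~1e10 years in seconds
decades = math.log10(age_universe_s)  # decades between 1/age and ~1 Hz
power_per_decade_w = 1e-17            # guesstimate from the text
total_power_w = decades * power_per_decade_w
print(decades, total_power_w)         # ~17.5 decades, ~1.7e-16 W
```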

> In fact, you are not allowed to take a realization, make several fft and
> claim that that's the PSD of the process. But that's what the spectrum
> analyzer does, because it's not a multiverse instrument.

Well, any measurement is an estimate.

> Every experimentalist suppose ergodicity on this kind of noise, otherwise
> you get nowhere.

Err.. no. Even if you assume that the spectrum tops off at some very
low frequency and does not increase anymore, ie that there is a finite
limit to noise power, even then ergodicity is not given.
Ergodicity breaks because the noise process is not stationary.
And assuming so for any kind of 1/f noise would be wrong.


Attila Kinali
-- 
The bad part of Zurich is where the degenerates
throw DARK chocolate at you.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Magnus Danielson

Hi,

On 11/27/2017 07:37 PM, Attila Kinali wrote:

Moin Ralph,

On Sun, 26 Nov 2017 21:33:03 -0800
Ralph Devoe  wrote:


The issue I intended to raise, but which I'm not sure I stated clearly
enough, is a conjecture: Is least-square fitting as efficient as any of the
other direct-digital or SDR techniques?


You stated that, yes, but it's well hidden in the paper.


Least-square fitting done right is very efficient.

A good comparison would illustrate that, but it is also expected. What 
does differ is how well adapted the different approaches are.
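
For concreteness, here is a minimal sketch of the kind of least-squares fit under discussion: a three-parameter linear fit (A·cos + B·sin + offset) at a known frequency, from which the phase follows directly. The names and numbers are mine, not from the paper:

```python
import numpy as np

def fit_phase(samples, f0, fs):
    """Linear least-squares fit of A*cos + B*sin + C at known frequency f0;
    returns the phase in radians (three-parameter sine fit)."""
    t = np.arange(len(samples)) / fs
    M = np.column_stack([np.cos(2 * np.pi * f0 * t),
                         np.sin(2 * np.pi * f0 * t),
                         np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(M, samples, rcond=None)
    return np.arctan2(b, a)

# Hypothetical demo: a 1 kHz tone sampled at 1 MHz with a little white noise
rng = np.random.default_rng(1)
fs, f0, true_phase = 1.0e6, 1.0e3, 0.31
t = np.arange(10000) / fs
x = np.cos(2 * np.pi * f0 * t - true_phase) + 0.01 * rng.standard_normal(t.size)
print(fit_phase(x, f0, fs))   # close to 0.31
```

Because the model is linear in A, B, C, this is a plain linear least-squares problem, which is what makes the estimator efficient when done right.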



If the conjecture is true then the SDR
technique must be viewed as one several equivalent algorithms for
estimating phase. Note that the time deviation for a single ADC channel in
the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
This suggests that the conjecture is true.


Yes, you get to similar values, if you extrapolate from the TDEV
data in S Fig3c down to 40µs that you used. BUT: while S see
a decrease of the TDEV consistent with white phase noise until they
hit the flicker phase noise floor at about a tau of 1ms, your data
does not show such a decrease (or at least I didn't see it).


There is a number of ways to do this.
There is even a number of ways that least square processing can be applied.

The trouble with least square estimators is that you do not maintain the 
improvement for longer taus, and the paper's PDEV estimator does not 
either. That motivated me to develop a decimator method for phase, 
frequency and PDEV that extends into post-processing, which I presented 
last year.



Other criticisms seem off the mark:

Several people raised the question of the filter factor of the least-square
fit.  First, if there is a filtering bias due to the fit, it would be the
same for signal and reference channels and should cancel. Second, even if
there is a bias, it would have to fluctuate from second to second to cause
a frequency error.


Bob answered that already, and I am pretty sure that Magnus will comment
on it as well. Both are better suited than me to go into the details of this.


Yes, see my comment.

Least square estimators for phase and frequency apply a linear-ramp 
weighting on phase samples or a parabolic weighting on frequency 
samples. These filter, and the bandwidth of the filter depends on the 
sample count and the time between samples. As the sample count 
increases, the bandwidth goes way down.
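
The filtering action can be made concrete. This sketch (notation mine) builds the linear-ramp weights of the least-squares frequency (slope) estimator on equally spaced phase samples: the weights reject a constant offset, are unbiased for a linear ramp, and their noise gain sum(h^2) — which sets the equivalent bandwidth for white phase noise — falls roughly as 1/N^3 as the sample count N grows:

```python
import numpy as np

def slope_weights(n, tau0=1.0):
    """Weights of the least-squares slope (frequency) estimator
    for n equally spaced phase samples with spacing tau0."""
    t = np.arange(n) * tau0
    h = t - t.mean()
    return h / np.sum(h * h)

h1, h2 = slope_weights(100), slope_weights(200)
print(np.sum(h1), np.sum(h1 * np.arange(100)))  # ~0 (offset rejected), ~1 (unit slope gain)
print(np.sum(h1 * h1) / np.sum(h2 * h2))        # ~8: doubling N cuts the noise gain by ~2^3
```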



Third, the Monte Carlo results show no bias. The output
of the Monte Carlo system is the difference between the fit result and the
known MC input. Any fitting bias would show up in the difference, but there
is none.


Sorry, but this is simply not the case. If I understood your simulations
correctly (you give very little information about them), you used additive
Gaussian i.i.d noise on top of the signal. Of course, if you add Gaussian
i.i.d noise with zero mean, you will get zero bias in a linear least squares
fit. But, as Magnus and I have tried to tell you, noises we see in this area
are not necessarily Gauss i.i.d. Only white phase noise is Gauss i.i.d.
Most of the techniques we use in statistics implicitly assume Gauss i.i.d.


Go back to the IEEE special issue on time and frequency from February 
1966 and you find a nice set of articles. In there is, among others, David 
Allan's article on the 2-sample variance that later became Allan's variance 
and now the Allan variance. Another article is the short but classic 
write-up of another youngster, David Leeson, which summarizes a model for 
phase noise generation that we today refer to as the Leeson model. To 
deeper appreciate the Leeson model, check out the phase-noise book of 
Enrico Rubiola, which gives you some insight. If you want to make designs, 
there is more to it, so several other papers need to be read, but here 
you just need to understand that you get 3 or 4 types of noise out of 
an oscillator, and the trouble with them is that the noise does not 
converge the way your normal textbook on statistics would make you assume. 
The RMS estimator of your frequency estimate does not converge; in fact it 
goes astray and varies with the number of samples. This was already a 
known problem, but the solution came with Dave Allan's paper. It in fact 
includes a function we would later refer to as a bias function, which 
depends on the number of samples taken. This motivates the conversion from 
an M sample variance to a 2-sample variance and an N sample variance to a 
2-sample variance such that they can be compared. The bias function 
varies with the number of samples and the dominant noise form.


The noise forms are strange, and their action on statistics is strange.
You need to understand how they interact with your measurement tool, and 
understand it well; in the end you need to test all the noise forms.
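
Since the discussion keeps coming back to how these noise forms behave under the Allan variance, here is a minimal sketch of the overlapping Allan deviation computed from phase data via second differences, with two sanity checks: a pure frequency offset gives zero ADEV, and simulated white FM gives ADEV falling as 1/sqrt(tau). Names and numbers are mine:

```python
import numpy as np

def oadev(x, tau0, m):
    """Overlapping Allan deviation of phase data x at tau = m*tau0."""
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences at lag m
    return np.sqrt(np.mean(d * d) / (2.0 * (m * tau0) ** 2))

tau0, n = 1.0, 100000
rng = np.random.default_rng(2)

# A constant frequency offset (linear phase ramp) contributes nothing:
ramp = 1e-9 * np.arange(n) * tau0
print(oadev(ramp, tau0, 10))                   # 0 (up to rounding)

# White FM: integrate white frequency noise into phase; ADEV ~ sigma_y/sqrt(m)
x = np.cumsum(1e-11 * rng.standard_normal(n)) * tau0
print(oadev(x, tau0, 1), oadev(x, tau0, 100))  # ~1e-11, then ~1e-12 at m=100
```

Running the same estimator on flicker and random-walk test data is exactly the "test all the noise forms" exercise recommended above.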



Attila says that I exaggerate the difficulty of programming an FPGA. Not
so. At work we give experts 1-6 months for a new FPGA design. We recently
ported some code from a Spartan 3 to a 

Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Mattia Rizzi
I'm talking about flicker noise processes

2017-11-27 23:04 GMT+01:00 Mattia Rizzi :

> Hi,
>
> >To make the point a bit more clear. The above means that noise with
> a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
> and flicker frequency noise), the noise (aka random variable) is:
> 1) Not independently distributed
> 2) Not stationary
> 3) Not ergodic
>
> I think you got too much into theory. If you follow strictly the statistics
> theory, you get nowhere.
> You can't even talk about 1/f PSD, because Fourier doesn't converge over
> infinite power signals.
> In fact, you are not allowed to take a realization, make several fft and
> claim that that's the PSD of the process. But that's what the spectrum
> analyzer does, because it's not a multiverse instrument.
> Every experimentalist suppose ergodicity on this kind of noise, otherwise
> you get nowhere.
>
> cheers,
> Mattia
>
> 2017-11-27 22:50 GMT+01:00 Attila Kinali :
>
>> On Mon, 27 Nov 2017 19:37:11 +0100
>> Attila Kinali  wrote:
>>
>> > X(t): Random variable, Gauss distributed, zero mean, i.i.d (ie PSD =
>> const)
>> > Y(t): Random variable, Gauss distributed, zero mean, PSD ~ 1/f
>> > Two time points: t_0 and t, where t > t_0
>> >
>> > Then:
>> >
>> > E[X(t) | X(t_0)] = 0
>> > E[Y(t) | Y(t_0)] = Y(t_0)
>> >
>> > Ie. the expectation of X will be zero, no matter whether you know any
>> sample
>> > of the random variable. But for Y, the expectation is biased to the last
>> > sample you have seen, ie it is NOT zero for anything where t>0.
>> > A consequence of this is, that if you take a number of samples, the
>> average
>> > will not approach zero for the limit of the number of samples going to
>> infinity.
>> > (For details see the theory of fractional Brownian motion, especially
>> > the papers by Mandelbrot and his colleagues)
>>
>> To make the point a bit more clear. The above means that noise with
>> a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
>> and flicker frequency noise), the noise (aka random variable) is:
>>
>> 1) Not independently distributed
>> 2) Not stationary
>> 3) Not ergodic
>>
>>
>> Where 1) means there is a correlation between samples, ie if you know a
>> sample, you can predict what the next one will be. 2) means that the
>> properties of the random variable change over time. Note this is a
>> stronger non-stationary than the cyclostationarity that people in
>> signal theory and communication systems often assume, when they go
>> for non-stationary system characteristics. And 3) means that
>> if you take lots of samples from one random process, you will get a
>> different distribution than when you take lots of random processes
>> and take one sample each. Ergodicity is often implicitly assumed
>> in a lot of analysis, without people being aware of it. It is one
>> of the things that a lot of random processes in nature adhere to
>> and thus is ingrained in our understanding of the world. But noise
>> process in electronics, atomic clocks, fluid dynamics etc are not
>> ergodic in general.
>>
>> As sidenote:
>>
>> 1) holds true for a > 0 (ie anything but white noise).
>> I am not yet sure when stationarity or ergodicity break, but my guess
>> would
>> be, that both break with a=1 (ie flicker noise). But that's only an
>> assumption
>> I have come to. I cannot prove or disprove this.
>>
>> For 1 <= a < 3 (between flicker phase and flicker frequency, including
>> flicker
>> phase, not including flicker frequency), the increments (ie the difference
>> between X(t) and X(t+1)) are stationary.
>>
>> Attila Kinali
>>
>>
>> --
>> May the bluebird of happiness twiddle your bits.
>>
>> ___
>> time-nuts mailing list -- time-nuts@febo.com
>> To unsubscribe, go to https://www.febo.com/cgi-bin/m
>> ailman/listinfo/time-nuts
>> and follow the instructions there.
>>
>
>


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Mattia Rizzi
Hi,

>To make the point a bit more clear. The above means that noise with
a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
and flicker frequency noise), the noise (aka random variable) is:
1) Not independently distributed
2) Not stationary
3) Not ergodic

I think you are getting too deep into theory. If you follow statistical
theory strictly, you get nowhere.
You can't even talk about a 1/f PSD, because the Fourier transform doesn't
converge for infinite-power signals.
In fact, you are not allowed to take one realization, compute several FFTs,
and claim that the result is the PSD of the process. But that's what a
spectrum analyzer does, because it's not a multiverse instrument.
Every experimentalist assumes ergodicity for this kind of noise; otherwise
you get nowhere.

cheers,
Mattia

2017-11-27 22:50 GMT+01:00 Attila Kinali :

> On Mon, 27 Nov 2017 19:37:11 +0100
> Attila Kinali  wrote:
>
> > X(t): Random variable, Gauss distributed, zero mean, i.i.d (ie PSD =
> const)
> > Y(t): Random variable, Gauss distributed, zero mean, PSD ~ 1/f
> > Two time points: t_0 and t, where t > t_0
> >
> > Then:
> >
> > E[X(t) | X(t_0)] = 0
> > E[Y(t) | Y(t_0)] = Y(t_0)
> >
> > Ie. the expectation of X will be zero, no matter whether you know any
> sample
> > of the random variable. But for Y, the expectation is biased to the last
> > sample you have seen, ie it is NOT zero for anything where t>0.
> > A consequence of this is, that if you take a number of samples, the
> average
> > will not approach zero for the limit of the number of samples going to
> infinity.
> > (For details see the theory of fractional Brownian motion, especially
> > the papers by Mandelbrot and his colleagues)
>
> To make the point a bit more clear. The above means that noise with
> a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
> and flicker frequency noise), the noise (aka random variable) is:
>
> 1) Not independently distributed
> 2) Not stationary
> 3) Not ergodic
>
>
> Where 1) means there is a correlation between samples, ie if you know a
> sample, you can predict what the next one will be. 2) means that the
> properties of the random variable change over time. Note this is a
> stronger non-stationary than the cyclostationarity that people in
> signal theory and communication systems often assume, when they go
> for non-stationary system characteristics. And 3) means that
> if you take lots of samples from one random process, you will get a
> different distribution than when you take lots of random processes
> and take one sample each. Ergodicity is often implicitly assumed
> in a lot of analysis, without people being aware of it. It is one
> of the things that a lot of random processes in nature adhere to
> and thus is ingrained in our understanding of the world. But noise
> process in electronics, atomic clocks, fluid dynamics etc are not
> ergodic in general.
>
> As sidenote:
>
> 1) holds true for a > 0 (ie anything but white noise).
> I am not yet sure when stationarity or ergodicity break, but my guess would
> be, that both break with a=1 (ie flicker noise). But that's only an
> assumption
> I have come to. I cannot prove or disprove this.
>
> For 1 <= a < 3 (between flicker phase and flicker frequency, including
> flicker
> phase, not including flicker frequency), the increments (ie the difference
> between X(t) and X(t+1)) are stationary.
>
> Attila Kinali
>
>
> --
> May the bluebird of happiness twiddle your bits.
>


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Attila Kinali
On Mon, 27 Nov 2017 19:37:11 +0100
Attila Kinali  wrote:

> X(t): Random variable, Gauss distributed, zero mean, i.i.d (ie PSD = const)
> Y(t): Random variable, Gauss distributed, zero mean, PSD ~ 1/f
> Two time points: t_0 and t, where t > t_0
> 
> Then:
> 
> E[X(t) | X(t_0)] = 0
> E[Y(t) | Y(t_0)] = Y(t_0)
> 
> Ie. the expectation of X will be zero, no matter whether you know any sample
> of the random variable. But for Y, the expectation is biased to the last
> sample you have seen, ie it is NOT zero for anything where t>0.
> A consequence of this is, that if you take a number of samples, the average
> will not approach zero for the limit of the number of samples going to 
> infinity.
> (For details see the theory of fractional Brownian motion, especially
> the papers by Mandelbrot and his colleagues)

To make the point a bit more clear. The above means that noise with
a PSD of the form 1/f^a for a>=1 (ie flicker phase, white frequency
and flicker frequency noise), the noise (aka random variable) is:

1) Not independently distributed
2) Not stationary
3) Not ergodic 


Where 1) means there is a correlation between samples, ie if you know a
sample, you can predict what the next one will be. 2) means that the
properties of the random variable change over time. Note this is a
stronger form of non-stationarity than the cyclostationarity that people in
signal theory and communication systems often assume when they allow
for non-stationary system characteristics. And 3) means that
if you take lots of samples from one random process, you will get a
different distribution than when you take lots of random processes
and take one sample each. Ergodicity is often implicitly assumed
in a lot of analysis, without people being aware of it. It is one
of the things that a lot of random processes in nature adhere to
and thus is ingrained in our understanding of the world. But noise
processes in electronics, atomic clocks, fluid dynamics etc. are not
ergodic in general.

As sidenote:

1) holds true for a > 0 (ie anything but white noise).
I am not yet sure when stationarity or ergodicity break, but my guess would
be, that both break with a=1 (ie flicker noise). But that's only an assumption
I have come to. I cannot prove or disprove this.

For 1 <= a < 3 (between flicker phase and flicker frequency, including flicker
phase, not including flicker frequency), the increments (ie the difference
between X(t) and X(t+1)) are stationary.

Attila Kinali


-- 
May the bluebird of happiness twiddle your bits.



Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Attila Kinali
Moin Ralph,

On Sun, 26 Nov 2017 21:33:03 -0800
Ralph Devoe  wrote:

> The issue I intended to raise, but which I'm not sure I stated clearly
> enough, is a conjecture: Is least-square fitting as efficient as any of the
> other direct-digital or SDR techniques? 

You stated that, yes, but it's well hidden in the paper.

> Is the resolution of any
> direct-digital system limited by (a) the effective number of bits of the
> ADC and (b) the number of samples averaged? Thanks to Attila for reminding
> me of the Sherman and Joerdens paper, which I have not read carefully
> before. In their appendix Eq. A6 they derive a result which may or may not
> be related to Eq. 6 in my paper. 

They are related, but only accidentally. S&J derive a lower bound for the
Allan variance from the SNR. You try to derive the lower bound
for the Allan variance from the quantization noise. That you end up
with similar-looking formulas comes from the fact that both methods
scale as 1/sqrt(X), where X is the number of samples taken,
though S&J use the number of phase estimates, while you use the
number of ADC samples. While related, they are not the same.
And you both have a scaling of 1/(2*pi*f) to get from phase to time.
You will notice that your formula contains a 2^N term, with N being
the number of bits, which you derive from the SNR (ENOB).
It's easy to show that the SNR due to quantization noise
is proportional to the size of an LSB, ie. SNR ~ 2^N. If we now put in
all the variables and substitute 2^N by SNR, we see:

S&J:   sigma >= 1/(2*pi*f) * sqrt(2/(SNR*N_sample))   (note the inequality!)
Yours: sigma ~= 1/(2*pi*f) * 1/SNR * sqrt(1/M)        (up to a constant)

Note three differences:
1) S&J scales with 1/sqrt(SNR) while yours scales with 1/SNR
2) S&J have a tau dependence implicit in the formula due to N_sample, you do not.
3) S&J's is a lower bound, yours an approximation (or claims to be).
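
Plugging illustrative numbers into the two expressions above makes the
scaling difference concrete (the carrier frequency, ENOB, and sample counts
below are assumptions chosen for the example, not values from either paper):

```python
import math

f = 10e6          # carrier frequency, Hz (illustrative)
snr = 2 ** 10     # SNR ~ 2^N for an assumed ENOB of 10 bits
n_sample = 4096   # phase estimates (S&J); taken equal to the ADC samples M
m = 4096

# S&J lower bound: sigma >= 1/(2*pi*f) * sqrt(2/(SNR*N_sample))
sigma_sj = 1.0 / (2 * math.pi * f) * math.sqrt(2.0 / (snr * n_sample))

# Devoe's approximation: sigma ~= 1/(2*pi*f) * 1/SNR * sqrt(1/M)
sigma_devoe = 1.0 / (2 * math.pi * f) * (1.0 / snr) * math.sqrt(1.0 / m)

print(sigma_sj, sigma_devoe)  # both in seconds
```

With M = N_sample the two differ by exactly sqrt(2*SNR), about a factor of 45
here, driven by the 1/sqrt(SNR) vs 1/SNR scaling of difference 1).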

> If the conjecture is true then the SDR
> technique must be viewed as one several equivalent algorithms for
> estimating phase. Note that the time deviation for a single ADC channel in
> the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
> This suggests that the conjecture is true.

Yes, you get to similar values, if you extrapolate from the TDEV
data in S&J Fig. 3c down to the 40 µs that you used. BUT: while S&J see
a decrease of the TDEV consistent with white phase noise until they
hit the flicker phase noise floor at about a tau of 1 ms, your data
does not show such a decrease (or at least I didn't see it).

> Other criticisms seem off the mark:
> 
> Several people raised the question of the filter factor of the least-square
> fit.  First, if there is a filtering bias due to the fit, it would be the
> same for signal and reference channels and should cancel. Second, even if
> there is a bias, it would have to fluctuate from second to second to cause
> a frequency error. 

Bob answered that already, and I am pretty sure that Magnus will comment
on it as well. Both are better suited than me to go into the details of this.

> Third, the Monte Carlo results show no bias. The output
> of the Monte Carlo system is the difference between the fit result and the
> known MC input. Any fitting bias would show up in the difference, but there
> is none.

Sorry, but this is simply not the case. If I understood your simulations
correctly (you give very little information about them), you used additive
Gaussian i.i.d noise on top of the signal. Of course, if you add Gaussian
i.i.d noise with zero mean, you will get zero bias in a linear least squares
fit. But, as Magnus and I have tried to tell you, noises we see in this area
are not necessarily Gauss i.i.d. Only white phase noise is Gauss i.i.d.
Most of the techniques we use in statistics implicitly assume Gauss i.i.d.
To show you that things fail in quite interesting way assume this:

X(t): Random variable, Gauss distributed, zero mean, i.i.d (ie PSD = const)
Y(t): Random variable, Gauss distributed, zero mean, PSD ~ 1/f
Two time points: t_0 and t, where t > t_0

Then:

E[X(t) | X(t_0)] = 0
E[Y(t) | Y(t_0)] = Y(t_0)

Ie. the expectation of X will be zero, no matter whether you know any sample
of the random variable. But for Y, the expectation is biased to the last
sample you have seen, ie it is NOT zero for anything where t>0.
A consequence of this is, that if you take a number of samples, the average
will not approach zero for the limit of the number of samples going to infinity.
(For details see the theory of fractional Brownian motion, especially
the papers by Mandelbrot and his colleagues)

A PSD ~ 1/f is flicker phase noise, which usually starts to be relevant
in our systems for sampling times between 1µs (for high frequency stuff)
and 1-100s (high stability oscillators and atomic clocks). Unfortunately,
the Allan deviation does not discern between white phase noise and
flicker phase noise, so it's not possible to see in your plots where
flicker noise becomes relevant 

Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Magnus Danielson

Hi,

On 11/27/2017 04:05 PM, Bob kb8tq wrote:

Hi


On Nov 27, 2017, at 12:33 AM, Ralph Devoe  wrote:

Here's a short reply to the comments of Bob, Attila, Magnus, and others.
Thanks for reading the paper carefully. I appreciate it. Some of the
comments are quite interesting, other seem off the mark. Let's start with
an interesting one:

The issue I intended to raise, but which I'm not sure I stated clearly
enough, is a conjecture: Is least-square fitting as efficient as any of the
other direct-digital or SDR techniques? Is the resolution of any
direct-digital system limited by (a) the effective number of bits of the
ADC and (b) the number of samples averaged? Thanks to Attila for reminding
me of the Sherman and Joerdens paper, which I have not read carefully
before. In their appendix Eq. A6 they derive a result which may or may not
be related to Eq. 6 in my paper. If the conjecture is true then the SDR
technique must be viewed as one of several equivalent algorithms for
estimating phase. Note that the time deviation for a single ADC channel in
the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
This suggests that the conjecture is true.

Other criticisms seem off the mark:

Several people raised the question of the filter factor of the least-square
fit.  First, if there is a filtering bias due to the fit, it would be the
same for signal and reference channels and should cancel.


Errr … no.

There are earlier posts about this on the list. The *objective* of ADEV is to 
capture
noise. Any filtering process rejects noise. That is true in DMTD and all the 
other approaches.
Presentations made in papers since the 1970’s demonstrate that it very much does
not cancel out or drop out. It impacts the number you get for ADEV. You have 
thrown away
part of what you set out to measure.


It's obvious already in David Allan's 1966 paper.
It's been verified and "re-discovered" a number of times.

You should re-read what I wrote, as it gives you the basic hints you 
should be listening to.



Yes, ADEV is a bit fussy in this regard. Many of the other “DEV” measurements 
are also
fussy. This is at the heart of why many counters (when they estimate frequency) 
can not
be used directly for ADEV. Any technique that is proposed for ADEV needs to be 
analyzed.


For me it's not fuzzy; or rather, the things I know about these and
their coloring are one thing, and the stuff I find fuzzy is what
I haven't published articles on yet.



The point here is not that filtering makes the measurement invalid. The point 
is that the
filter’s impact needs to be evaluated and stated. That is the key part of the 
proposed technique
that is missing at this point.


The traditional analysis is that the bandwidth derives from the Nyquist
frequency of the sampling, as expressed in David's own words when I discussed
it with him last year: "We had to, since that was the counters we had".


Staffan Johansson of Philips/Fluke/Pendulum wrote a paper on using
linear regression, which is just another name for least-squares fitting,
for frequency estimation and its use in ADEV measurements.


Now, Prof. Enrico Rubiola realized that something was fishy, and it
indeed is: the pre-filtering with fixed tau that linear regression /
least squares performs colors the low-tau measurements, but not the
high-tau measurements. This is because the frequency sensitivity of
high-tau ADEV lies so completely within the passband of the pre-filter
that it does not care, but for low tau the pre-filtering dominates and
produces lower values than it should, a biasing effect.
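
That low-tau bias is easy to reproduce numerically. A sketch (my own
illustration: a fixed-length boxcar average stands in for the fixed-tau
least-squares pre-filter, and white frequency noise is assumed):

```python
import numpy as np

def oadev(x, tau0, m):
    """Overlapping Allan deviation of phase data x at tau = m*tau0."""
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d * d) / 2.0) / (m * tau0)

rng = np.random.default_rng(0)
# White frequency noise => random-walk phase (seconds).
x = np.cumsum(1e-12 * rng.standard_normal(200_000))
# Fixed-length pre-filter standing in for the fixed-tau fit.
xf = np.convolve(x, np.ones(8) / 8.0, mode="valid")

for m in (1, 10, 100):
    print(m, oadev(x, 1.0, m), oadev(xf, 1.0, m))
```

At m=1 the filtered ADEV comes out roughly 8x low, while at m=100, far above
the filter length, raw and filtered agree to a couple of percent: bias at low
tau, none at high tau, exactly the pattern described above.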


He also realized that the dynamic filter of MDEV, where the filter
changes with tau, would be interesting, and that is how he came to
propose the parabolic deviation, PDEV.


Now, the old wisdom is that you need to publish the bandwidth of the 
pre-filtering of the channel, or else the noise estimation will not be 
proper.


Look at the Allan Deviation Wikipedia article for a first discussion on 
bias functions, they are all aspects of biasing of various forms of 
processing.


The lesson to be learned here is that there are a number of different
ways that you can bias your measurements such that your ADEV values will
no longer be "valid" compared to correctly performed ADEV, and thus the
ability to compare them to judge levels of noise and goodness-values is lost.


I know it is a bit much to take in at first, but trust me that this is
important stuff. So be careful about wielding "off the mark": this is the
stuff that you need to be careful about, that we kindly try to advise you
on, and you should take the lesson while it's free.


Cheers,
Magnus



Bob



Second, even if
there is a bias, it would have to fluctuate from second to second to cause
a frequency error. Third, the Monte Carlo results show no bias. The output
of the Monte Carlo system is the difference between the fit result and the
known MC input. Any fitting bias would show up in the 

Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-27 Thread Bob kb8tq
Hi

> On Nov 27, 2017, at 12:33 AM, Ralph Devoe  wrote:
> 
> Here's a short reply to the comments of Bob, Attila, Magnus, and others.
> Thanks for reading the paper carefully. I appreciate it. Some of the
> comments are quite interesting, other seem off the mark. Let's start with
> an interesting one:
> 
> The issue I intended to raise, but which I'm not sure I stated clearly
> enough, is a conjecture: Is least-square fitting as efficient as any of the
> other direct-digital or SDR techniques? Is the resolution of any
> direct-digital system limited by (a) the effective number of bits of the
> ADC and (b) the number of samples averaged? Thanks to Attila for reminding
> me of the Sherman and Joerdens paper, which I have not read carefully
> before. In their appendix Eq. A6 they derive a result which may or may not
> be related to Eq. 6 in my paper. If the conjecture is true then the SDR
> technique must be viewed as one of several equivalent algorithms for
> estimating phase. Note that the time deviation for a single ADC channel in
> the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
> This suggests that the conjecture is true.
> 
> Other criticisms seem off the mark:
> 
> Several people raised the question of the filter factor of the least-square
> fit.  First, if there is a filtering bias due to the fit, it would be the
> same for signal and reference channels and should cancel.

Errr … no.

There are earlier posts about this on the list. The *objective* of ADEV is to 
capture
noise. Any filtering process rejects noise. That is true in DMTD and all the 
other approaches.
Presentations made in papers since the 1970’s demonstrate that it very much 
does 
not cancel out or drop out. It impacts the number you get for ADEV. You have 
thrown away
part of what you set out to measure. 

Yes, ADEV is a bit fussy in this regard. Many of the other “DEV” measurements 
are also 
fussy. This is at the heart of why many counters (when they estimate frequency) 
can not 
be used directly for ADEV. Any technique that is proposed for ADEV needs to be 
analyzed. 

The point here is not that filtering makes the measurement invalid. The point 
is that the
filter’s impact needs to be evaluated and stated. That is the key part of the 
proposed technique 
that is missing at this point. 

Bob


> Second, even if
> there is a bias, it would have to fluctuate from second to second to cause
> a frequency error. Third, the Monte Carlo results show no bias. The output
> of the Monte Carlo system is the difference between the fit result and the
> known MC input. Any fitting bias would show up in the difference, but there
> is none.
> 
> Attila says that I exaggerate the difficulty of programming an FPGA. Not
> so. At work we give experts 1-6 months for a new FPGA design. We recently
> ported some code from a Spartan 3 to a Spartan 6. Months of debugging
> followed. FPGA's will always be faster and more computationally efficient
> than Python, but Python is fast enough. The motivation for this experiment
> was to use a high-level language (Python) and preexisting firmware and
> software (Digilent) so that the device could be set up and reconfigured
> easily, leaving more time to think about the important issues.
> 
> Attila has about a dozen criticisms of the theory section, mostly that it
> is not rigorous enough and there are many assumptions. But it is not
> intended to be rigorous. This is primarily an experimental paper and the
> purpose of the theory is to give a simple physical picture of the
> surprisingly good results. It does that, and the experimental results
> support the conjecture above.
> The limitations of the theory are discussed in detail on p. 6 where it is
> called "... a convenient approximation.." Despite this the theory agrees
> with the Monte Carlo over most of parameter space, and where it does not is
> discussed in the text.
> 
> About units: I'm a physicist and normally use c.g.s units for
> electromagnetic calculations. The paper was submitted to Rev. Sci. Instr.
> which is an APS journal. The APS has no restrictions on units at all.
> Obviously for clarity I should put them in SI units when possible.
> 
> Ralph
> KM6IYN


[time-nuts] Allan variance by sine-wave fitting

2017-11-26 Thread Ralph Devoe
Here's a short reply to the comments of Bob, Attila, Magnus, and others.
Thanks for reading the paper carefully. I appreciate it. Some of the
comments are quite interesting, other seem off the mark. Let's start with
an interesting one:

The issue I intended to raise, but which I'm not sure I stated clearly
enough, is a conjecture: Is least-square fitting as efficient as any of the
other direct-digital or SDR techniques? Is the resolution of any
direct-digital system limited by (a) the effective number of bits of the
ADC and (b) the number of samples averaged? Thanks to Attila for reminding
me of the Sherman and Joerdens paper, which I have not read carefully
before. In their appendix Eq. A6 they derive a result which may or may not
be related to Eq. 6 in my paper. If the conjecture is true then the SDR
technique must be viewed as one of several equivalent algorithms for
estimating phase. Note that the time deviation for a single ADC channel in
the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
This suggests that the conjecture is true.

Other criticisms seem off the mark:

Several people raised the question of the filter factor of the least-square
fit.  First, if there is a filtering bias due to the fit, it would be the
same for signal and reference channels and should cancel. Second, even if
there is a bias, it would have to fluctuate from second to second to cause
a frequency error. Third, the Monte Carlo results show no bias. The output
of the Monte Carlo system is the difference between the fit result and the
known MC input. Any fitting bias would show up in the difference, but there
is none.

Attila says that I exaggerate the difficulty of programming an FPGA. Not
so. At work we give experts 1-6 months for a new FPGA design. We recently
ported some code from a Spartan 3 to a Spartan 6. Months of debugging
followed. FPGA's will always be faster and more computationally efficient
than Python, but Python is fast enough. The motivation for this experiment
was to use a high-level language (Python) and preexisting firmware and
software (Digilent) so that the device could be set up and reconfigured
easily, leaving more time to think about the important issues.

Attila has about a dozen criticisms of the theory section, mostly that it
is not rigorous enough and there are many assumptions. But it is not
intended to be rigorous. This is primarily an experimental paper and the
purpose of the theory is to give a simple physical picture of the
surprisingly good results. It does that, and the experimental results
support the conjecture above.
The limitations of the theory are discussed in detail on p. 6 where it is
called "... a convenient approximation.." Despite this the theory agrees
with the Monte Carlo over most of parameter space, and where it does not is
discussed in the text.

About units: I'm a physicist and normally use c.g.s units for
electromagnetic calculations. The paper was submitted to Rev. Sci. Instr.
which is an APS journal. The APS has no restrictions on units at all.
Obviously for clarity I should put them in SI units when possible.

Ralph
KM6IYN


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-23 Thread d . schuecker
The harmonics limit the performance if you want to find the frequency of a
given sampled sine wave with linear methods. That's at least my finding from
when I built a device to measure grid frequency fast and with high accuracy. I
had to use high-Q digital filters to isolate the fundamental. Their slow
transient response limited the speed of new frequency measurements.

Cheers
Detlef
DD4WV

"time-nuts" <time-nuts-boun...@febo.com> schrieb am 23.11.2017 16:34:39:

> Von: Tim Shoppa <tsho...@gmail.com>
> An: Discussion of precise time and frequency measurement
<time-nuts@febo.com>
> Datum: 23.11.2017 16:35
> Betreff: Re: [time-nuts] Allan variance by sine-wave fitting
> Gesendet von: "time-nuts" <time-nuts-boun...@febo.com>
>
> I wonder how much a fitting approach is affected by distortion
(especially
> harmonic content) in the waveform.
>
> Of course we can always filter the waveform to make it more sinusoidal
but
> then we are adding L's and C's and their tempcos to the measurement for
> sure destroying any femtosecond claims.
>
> Tim N3QE
>
> On Wed, Nov 22, 2017 at 5:57 PM, Ralph Devoe <rgde...@gmail.com> wrote:
>
> > Hi,
> >The fitting routine only takes up 40 uS of the 1 sec interval
> > between measurements, as shown in Fig. 1 of the paper. This is less
than
> > 10(-4) of the measurement interval. It just determines the phase
difference
> > at the start of every second. I don't think the filtering effect is
very
> > large in this case.
> > The interesting thing is that good results are achievable with
such
> > a short fitting interval. One way to think of it is to treat the
fitting
> > routine as a statistically optimized averaging process. Fitting 40 uS,
that
> > is 4096 points at 10 ns/point,  should reduce the noise by a factor of
64
> > (roughly). The single shot timing resolution of the ADC is about 10 pS
(see
> > Fig. 4), so dividing this by 64 brings you down into the 100's of fs
range,
> > which is what you see.
> >
> > Ralph


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-23 Thread Tim Shoppa
I wonder how much a fitting approach is affected by distortion (especially
harmonic content) in the waveform.

Of course we can always filter the waveform to make it more sinusoidal but
then we are adding L's and C's and their tempcos to the measurement for
sure destroying any femtosecond claims.

Tim N3QE

On Wed, Nov 22, 2017 at 5:57 PM, Ralph Devoe  wrote:

> Hi,
>The fitting routine only takes up 40 uS of the 1 sec interval
> between measurements, as shown in Fig. 1 of the paper. This is less than
> 10(-4) of the measurement interval. It just determines the phase difference
> at the start of every second. I don't think the filtering effect is very
> large in this case.
> The interesting thing is that good results are achievable with such
> a short fitting interval. One way to think of it is to treat the fitting
> routine as a statistically optimized averaging process. Fitting 40 uS, that
> is 4096 points at 10 ns/point,  should reduce the noise by a factor of 64
> (roughly). The single shot timing resolution of the ADC is about 10 pS (see
> Fig. 4), so dividing this by 64 brings you down into the 100's of fs range,
> which is what you see.
>
> Ralph
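
The back-of-the-envelope averaging in the quoted message checks out
numerically (a sketch; the 10 ps single-shot figure and 4096-point count
are taken from the quoted text):

```python
import math

single_shot = 10e-12   # single-shot ADC timing resolution, ~10 ps (Fig. 4)
n_points = 4096        # samples in the 40 us fit window at 10 ns/point

# sqrt(N) averaging: 10 ps / sqrt(4096) = 10 ps / 64
averaged = single_shot / math.sqrt(n_points)
print(averaged)  # 1.5625e-13 s, i.e. ~156 fs
```

That is how "dividing this by 64 brings you down into the 100's of fs range"
comes about; whether the fit truly averages like sqrt(N) for all noise types
is exactly what the filtering discussion elsewhere in the thread questions.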


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-23 Thread Magnus Danielson

Hi,

There are trivial ways to estimate phase and amplitude of a sine using
linear methods. I saw, however, none of these properly referenced or
described. It would have been good to see those approaches attempted in
parallel on the same data and to compare their performance with the
proposed approach. It seemed "fuzzy" how it worked, and that is never a
good sign in a scientific article, especially as it is at the heart of
the method described in the paper. The actual method should be named,
referenced, and then also referenced with "as implemented by...", but we
only got the last part.
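
One of the "trivial" linear methods alluded to above: when the frequency is
known, y = a*sin(wt) + b*cos(wt) + c is linear in (a, b, c), so phase and
amplitude drop out of an ordinary least-squares solve. A sketch with assumed
illustrative numbers:

```python
import numpy as np

fs, f0, n = 1.0e8, 1.0e7, 4096           # sample rate, known frequency, points
t = np.arange(n) / fs
rng = np.random.default_rng(1)
y = (0.9 * np.sin(2 * np.pi * f0 * t + 0.7) + 0.05
     + 0.01 * rng.standard_normal(n))    # amplitude 0.9, phase 0.7, offset 0.05

# Design matrix: the model is linear in its three unknowns.
M = np.column_stack([np.sin(2 * np.pi * f0 * t),
                     np.cos(2 * np.pi * f0 * t),
                     np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)

amplitude = np.hypot(a, b)               # y ~ amplitude*sin(w t + phase) + c
phase = np.arctan2(b, a)
print(amplitude, phase, c)
```

This is the classic three-parameter sine fit (as in IEEE Std 1057); the
four-parameter variant also fits the frequency, at the cost of the problem
becoming nonlinear.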


Cheers,
Magnus

On 11/23/2017 01:34 PM, d.schuec...@avm.de wrote:

Hi,

just my two cents on sine wave fitting.

An undamped sine wave is the solution of the difference equation

sig(n+1)=2*cos(w)*sig(n)-sig(n-1)

This is a linear system of equations mapping the sum of the samples n+1 and
n-1 to the sample n. The factor 2*cos(w) is the unknown. The least-squares
solution of the overdetermined system is pure linear algebra, no nonlinear
fitting involved. The trick also works for a damped sine wave. Care must be
taken for high 'oversampling' rates; it works best at 4 samples/sinewave,
ie near Nyquist/2.

Cheers
Detlef
DD4WV





Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-23 Thread Bob kb8tq
Hi

One other side to this: 

There are a number of papers out there on this basic technique (ADC frequency
measurement). There are a number of commercial products that do ADEV
and other measurements this way. It might be a good idea to at least mention 
them. It would be even better to look at the sort of accuracy they achieve. 

Bob

> On Nov 22, 2017, at 6:38 PM, Ralph Devoe  wrote:
> 
> Hi Time-nuts and Attila,
> Thanks for the very interesting and informative criticisms. That
> is what I was looking for.  I don't agree with most of them, but I need
> some time to work out some detailed answers.
>  To focus on the forest instead of the trees:  The method uses a
> $300 student scope (Digilent Analog discovery- a very fine product), which
> any skilled amateur can modify in a weekend, and produce a device which is
> 10-100 times better than the expensive counters we are used to using.  The
> software contains only 125 lines of Python and  pretty much anyone can
> write their own. In practice this device is much easier to use than my
> 53132a.
> 
> Ralph
> ___
> time-nuts mailing list -- time-nuts@febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-23 Thread d . schuecker
Hi,

just my two cents on sine wave fitting.

An undamped sine wave is the solution of the difference equation

sig(n+1)=2*cos(w)*sig(n)-sig(n-1)

This is a linear system of equations mapping the sum of the samples n+1 and
n-1 to the sample n. The factor 2*cos(w) is the unknown. The least-squares
solution of the overdetermined system is pure linear algebra, no nonlinear
fitting involved. The trick also works for a damped sine wave. Care must be
taken at high 'oversampling' rates; it works best at about 4 samples/sine wave,
i.e. near Nyquist/2.
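The recurrence above can be sketched in a few lines of NumPy. The sample
rate and test frequency here are assumed values chosen for illustration
(near Nyquist/2, as recommended); only a noiseless signal is shown.

```python
# Estimate 2*cos(w) of a sine by linear least squares, using the
# recurrence sig[n+1] = 2*cos(w)*sig[n] - sig[n-1].
import numpy as np

fs = 100e6            # sample rate, Hz (assumed)
f = 22e6              # sine frequency, Hz (assumed, near fs/4)
n = np.arange(1000)
sig = np.sin(2 * np.pi * f / fs * n)

# Overdetermined linear system: sig[n+1] + sig[n-1] = c * sig[n].
A = sig[1:-1].reshape(-1, 1)
b = sig[2:] + sig[:-2]
c, *_ = np.linalg.lstsq(A, b, rcond=None)

w_est = np.arccos(c[0] / 2.0)     # recover w from c = 2*cos(w)
f_est = w_est * fs / (2 * np.pi)  # ~22 MHz
```

No nonlinear fitting is involved: the single unknown c enters linearly,
exactly as described above.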

Cheers
Detlef
DD4WV



___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Bob kb8tq
Hi

To paraphrase a post made a few months back: 

With ADEV, the “signal” is the noise. A lot of the things you would normally 
do to improve the SNR can get in the way with ADEV measurements. That’s
not to say they invalidate the measure, but they can “color” the result and make
comparison between methods a bit difficult. 

Again, this conversation has been going on since the days of the original papers
on ADEV being presented. 

Bob

> On Nov 22, 2017, at 8:00 PM, Magnus Danielson  
> wrote:
> 
> Hi,
> 
> On 11/22/2017 11:57 PM, Ralph Devoe wrote:
>> Hi,
>>The fitting routine only takes up 40 uS of the 1 sec interval
>> between measurements, as shown in Fig. 1 of the paper. This is less than
>> 10(-4) of the measurement interval. It just determines the phase difference
>> at the start of every second. I don't think the filtering effect is very
>> large in this case.
> 
> Ok, this is where you have to learn one of the basic lessons on ADEV.
> For white and flicker phase noise, you must always indicate the bandwidth of 
> the channel. The filtering is there and you need to care.
> It's not necessarily wrong to filter, quite the opposite, but the bandwidth 
> needs to be shown.
> 
> The reason is that you need the bandwidth to relate it to the noise level of 
> that noise, which is the point of ADEV to begin with.
> 
>> The interesting thing is that good results are achievable with such
>> a short fitting interval. One way to think of it is to treat the fitting
>> routine as a statistically optimized averaging process. Fitting 40 uS, that
>> is 4096 points at 10 ns/point,  should reduce the noise by a factor of 64
>> (roughly). The single shot timing resolution of the ADC is about 10 pS (see
>> Fig. 4), so dividing this by 64 brings you down into the 100's of fs range,
>> which is what you see.
> 
> Clean up your units. You neither mean to say 40 microsiemens nor 40 
> microsamples, which are the two ways to properly interpret "40 uS", assuming u 
> is shorthand for micro. Papers need to follow SI units, so follow the SI 
> brochure from the BIPM; it's available for free download, so there is no 
> excuse. Attila's comments on units are relevant tough love.
> 
> Secondly, be extremely careful about such assumptions on what a "statistically 
> optimized averaging process" does. We have noises in this field which render 
> most assumptions of traditional textbooks completely useless. Once the noise 
> is no longer white phase noise, convergence rules no longer apply; we even 
> talk about non-convergent noises. This is why the RMS estimator had to be 
> replaced with the Allan deviation in the first place. I made sure to provide 
> plenty of references and explanations in the Allan Deviation Wikipedia 
> article.
> 
> Cheers,
> Magnus
> ___
> time-nuts mailing list -- time-nuts@febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Magnus Danielson

Hi Ralph,

On 11/23/2017 12:38 AM, Ralph Devoe wrote:

Hi Time-nuts and Attila,
  Thanks for the very interesting and informative criticisms. That
is what I was looking for.  I don't agree with most of them, but I need
some time to work out some detailed answers.


There is some tough love here, but it was learned the hard way many times.
Some of it will take time to grasp and accept, but so far all I've seen 
has been constructive criticism.



   To focus on the forest instead of the trees:  The method uses a
$300 student scope (Digilent Analog discovery- a very fine product), which
any skilled amateur can modify in a weekend, and produce a device which is
10-100 times better than the expensive counters we are used to using.  The
software contains only 125 lines of Python and  pretty much anyone can
write their own. In practice this device is much easier to use than my
53132a.


There exist several works that have used SDR approaches to measure 
phase; it is not new or unknown. The Microsemi phase-noise and 
stability test sets are good examples.


This is not to say that there is no value in looking at what can be 
achieved.


What you do is just a variant of heterodyne receiver, and essentially a 
DMTD using ADCs and phase estimation using least square filtering. It 
just takes a few re-drawing stages to show that.


You would need to show how the phase estimation is actually done on that 
signal; the given explanation is not very helpful, as it sketches the 
processing and hands off to some library. This crucial point should be 
handled with more care and detail. Using such a least-square fit, I want 
to understand how the phase is estimated in such a non-linear function.


Also, you should compare to I/Q demodulation with arctan, and to a truly 
linear least squares.
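The I/Q-demod-and-arctan alternative mentioned above can be sketched in a
few lines. All signal parameters below are assumed for illustration, and
the averages run over whole carrier cycles so they act as the low-pass.

```python
# I/Q demodulation: multiply by quadrature references, low-pass by
# averaging over whole cycles, then take atan2 to recover the phase.
import numpy as np

fs = 100e6                      # sample rate, Hz (assumed)
f0 = 10e6                       # carrier, Hz (assumed): 10 samples/cycle
phi_true = 0.3                  # phase to recover, radians (test value)
t = np.arange(4000) / fs        # 400 whole carrier cycles
sig = np.sin(2 * np.pi * f0 * t + phi_true)

i = np.mean(sig * np.sin(2 * np.pi * f0 * t))   # in-phase,   ~cos(phi)/2
q = np.mean(sig * np.cos(2 * np.pi * f0 * t))   # quadrature, ~sin(phi)/2
phi_est = np.arctan2(q, i)                      # ~phi_true
```

Because amplitude scales i and q together, the arctan rejects AM to first
order, which is the usual argument for preferring it over a direct fit.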


Already the title is interesting, since it is really not Allan variance 
measurement as such but phase measurement, which can then be used to 
produce Allan deviation measures.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Magnus Danielson

Hi,

On 11/22/2017 11:57 PM, Ralph Devoe wrote:

Hi,
The fitting routine only takes up 40 uS of the 1 sec interval
between measurements, as shown in Fig. 1 of the paper. This is less than
10(-4) of the measurement interval. It just determines the phase difference
at the start of every second. I don't think the filtering effect is very
large in this case.


Ok, this is where you have to learn one of the basic lessons on ADEV.
For white and flicker phase noise, you must always indicate the 
bandwidth of the channel. The filtering is there and you need to care.
It's not necessarily wrong to filter, quite the opposite, but the 
bandwidth needs to be shown.


The reason is that you need the bandwidth to relate it to the noise 
level of that noise, which is the point of ADEV to begin with.



 The interesting thing is that good results are achievable with such
a short fitting interval. One way to think of it is to treat the fitting
routine as a statistically optimized averaging process. Fitting 40 uS, that
is 4096 points at 10 ns/point,  should reduce the noise by a factor of 64
(roughly). The single shot timing resolution of the ADC is about 10 pS (see
Fig. 4), so dividing this by 64 brings you down into the 100's of fs range,
which is what you see.


Clean up your units. You neither mean to say 40 microsiemens nor 40 
microsamples, which are the two ways to properly interpret "40 uS", 
assuming u is shorthand for micro. Papers need to follow SI units, so 
follow the SI brochure from the BIPM; it's available for free download, 
so there is no excuse. Attila's comments on units are relevant tough love.


Secondly, be extremely careful about such assumptions on what a 
"statistically optimized averaging process" does. We have noises in this 
field which render most assumptions of traditional textbooks completely 
useless. Once the noise is no longer white phase noise, convergence 
rules no longer apply; we even talk about non-convergent noises. This is 
why the RMS estimator had to be replaced with the Allan deviation in the 
first place. I made sure to provide plenty of references and explanations in 
the Allan Deviation Wikipedia article.
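For reference, the overlapping Allan deviation that such phase data feeds
can be computed from time-error samples x[i] in a few lines. This is a
minimal sketch, not tied to any particular instrument or estimator filter.

```python
# Overlapping Allan deviation from phase (time-error) data x, in seconds,
# sampled at interval tau0; evaluated at averaging time tau = m * tau0.
import numpy as np

def oadev(x, tau0, m):
    """Overlapping ADEV at tau = m * tau0 from phase samples x."""
    x = np.asarray(x, dtype=float)
    # Overlapping second differences of the phase record.
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d ** 2) / (2 * (m * tau0) ** 2))
```

A quick sanity check: a pure frequency offset makes x grow linearly, its
second differences vanish, and the sketch returns zero, as it should.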


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Attila Kinali
On Wed, 22 Nov 2017 15:38:33 -0800
Ralph Devoe  wrote:

> To focus on the forest instead of the trees:  The method uses a
> $300 student scope (Digilent Analog discovery- a very fine product), which
> any skilled amateur can modify in a weekend, and produce a device which is
> 10-100 times better than the expensive counters we are used to using.  The
> software contains only 125 lines of Python and  pretty much anyone can
> write their own. In practice this device is much easier to use than my
> 53132a.

That's true. Such a system is incredibly easy to use.
But you can as well go for something like the redpitaya.
Because its architecture is meant for continuous sampling
you can do all the fancy stuff that Sherman and Jördens
did. And thanks to GnuRadio support, you don't even have
to write Python for it, but can just click it together using
the graphical interface. How is that for simplicity? :-)
And I know someone from NIST is actually working on making
a full-featured phase noise/stability measurement setup
out of a redpitaya.

Oh.. and what I forgot to mention: I know how difficult it is to
do a proper uncertainty and noise analysis of least-squares of sines.
I tried it last year and failed. There is lots of math involved that
I haven't mastered yet.


Attila Kinali


-- 
You know, the very powerful and the very stupid have one thing in common.
They don't alter their views to fit the facts, they alter the facts to
fit the views, which can be uncomfortable if you happen to be one of the
facts that needs altering.  -- The Doctor
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Jerry Hancock
Ralph, did I miss something or didn’t you use a pair of the Discovery boards?

> On Nov 22, 2017, at 3:38 PM, Ralph Devoe  wrote:
> 
> Hi Time-nuts and Attila,
> Thanks for the very interesting and informative criticisms. That
> is what I was looking for.  I don't agree with most of them, but I need
> some time to work out some detailed answers.
>  To focus on the forest instead of the trees:  The method uses a
> $300 student scope (Digilent Analog discovery- a very fine product), which
> any skilled amateur can modify in a weekend, and produce a device which is
> 10-100 times better than the expensive counters we are used to using.  The
> software contains only 125 lines of Python and  pretty much anyone can
> write their own. In practice this device is much easier to use than my
> 53132a.
> 
> Ralph
> ___
> time-nuts mailing list -- time-nuts@febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


[time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Ralph Devoe
Hi Time-nuts and Attila,
 Thanks for the very interesting and informative criticisms. That
is what I was looking for.  I don't agree with most of them, but I need
some time to work out some detailed answers.
  To focus on the forest instead of the trees:  The method uses a
$300 student scope (Digilent Analog discovery- a very fine product), which
any skilled amateur can modify in a weekend, and produce a device which is
10-100 times better than the expensive counters we are used to using.  The
software contains only 125 lines of Python and  pretty much anyone can
write their own. In practice this device is much easier to use than my
53132a.

Ralph
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


[time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Ralph Devoe
Hi,
   The fitting routine only takes up 40 uS of the 1 sec interval
between measurements, as shown in Fig. 1 of the paper. This is less than
10(-4) of the measurement interval. It just determines the phase difference
at the start of every second. I don't think the filtering effect is very
large in this case.
The interesting thing is that good results are achievable with such
a short fitting interval. One way to think of it is to treat the fitting
routine as a statistically optimized averaging process. Fitting 40 uS, that
is 4096 points at 10 ns/point,  should reduce the noise by a factor of 64
(roughly). The single shot timing resolution of the ADC is about 10 pS (see
Fig. 4), so dividing this by 64 brings you down into the 100's of fs range,
which is what you see.
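The sqrt(N) argument above is easy to check numerically, assuming the
single-shot jitter is white and Gaussian (which, as others note in this
thread, is exactly the assumption that needs verifying). The jitter value
below is an assumed round number, not a measured one.

```python
# Averaging N = 4096 white Gaussian samples should shrink the standard
# deviation by sqrt(4096) = 64; estimate that ratio over many trials.
import numpy as np

rng = np.random.default_rng(1)
sigma = 10e-12                   # ~10 ps single-shot jitter (assumed)
N, trials = 4096, 2000
means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)
ratio = sigma / means.std()      # ~sqrt(4096) = 64 for white noise
```

For flicker or random-walk noise this ratio would not follow sqrt(N),
which is the caveat raised elsewhere in the thread.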

Ralph
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Attila Kinali
Hi,

On Wed, 22 Nov 2017 07:58:19 -0800
Ralph Devoe  wrote:

>   I've been working on a simple, low-cost, direct-digital method for
> measuring the Allan variance of frequency standards. It's based on a
> Digilent oscilloscope (Analog Discovery, <$300) and uses a short Python
> routine to get a resolution of 3 x 10(-13) in one second. This corresponds
> to a noise level of 300 fs, one or two orders of magnitude better than a
> typical counter. The details are in a paper submitted to the Review of
> Scientific Instruments and posted at arXiv:1711.07917 .

I guess, you are asking for comments, so here are some:

> The DD method has minimal dependence on analog circuit issues, but
> requires substantial hardware and firmware to perform the I/Q
> demodulation, synthesize the local oscillators, and process the
> data through the digital logic.

This is not true. The logic needed for the mixing process is minimal
and easily done [1]. It is actually so fast that you can do it on
your PC in realtime as well. Ie you could just pipe those 100Msps
your system gets into your PC, and do the DD method there. Though
that would be a huge waste of resources. It's much more efficient
to have the tiny bit of logic in an FPGA (which you need anyways
to interface the ADC to the PC) and do the down-conversion and
decimation there.

Additionally, your method has to synthesize a local signal as well
and not just once, but many times until a fit is found.

> The limiting ADEV of that method is comparable to that of DMTD systems,
> with commercial devices specifying ADEV of 10^-13 to 10^-15 for t = 1 sec[10].

An order of magnitude is not comparable. It's a huge difference in this
area. Also, I am surprised you haven't referenced [2], which has been
discussed a few times here on the time-nuts mailing list. The SDR method
Sherman and Jördens used is, as far as I am aware, the current record
holder in terms of phase-difference measurement precision.


> The resolution derives from the averaging inherent in a least-square fit:
> a fit of O(10^4) points will enhance the resolution for stationary random
> noise processes by a factor O(10^2). 

This is a very cumbersome way of writing that if you average over
a Gaussian i.i.d random variable, the standard deviation will go
down with square root of the number of samples. 

Please note here that you have the inherent assumption that your
random variable is Gaussian i.i.d. This assumption is _not_ true
in general, and especially not in high-precision measurements.
You need to state explicitly why you think your random variable (noise)
is i.i.d. and need to verify it later.

Also, writing O(10^4) is not how O-notation works. See [3] for details.
People I work with would consider this not just sloppy notation,
but plain wrong. So please be careful about it.


> Since the fit phase depends on the values of the signal over many cycles,
> it is insensitive to artifacts near the zero-crossing and to dc offsets.

This is not true. The quality of the least-squares fit of a sinusoid
depends to quite a big extent on the correct estimation of the DC offset.
Hence low-frequency noise on the signal will deteriorate the quality of
the fit.


> Data analysis was performed by a Python routine which read each file and
> performed a least-squares fit of the unknown and reference signals to a
> function of the form
>   S(t) = A sin (2*pi*f0 t + phi) + epsilon 
> where A is the amplitude, phi the phase, f0 the frequency, and epsilon the
> DC offset. The fitting routine used was the curve fit method of the
> scipy.optimize library. The fit finds the values of these parameters which
> minimize the residual R defined by
>  R = (1/N) * sum_{i=0}^{N} |D(i) - S(i*t_c)|^2
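The quoted procedure can be sketched with scipy.optimize.curve_fit on
synthetic data. The sample rate matches the paper's 10 ns/point, but the
carrier frequency, amplitude, and noise level are assumed values, and f0
is held fixed so only A, phi, and the offset are fitted.

```python
# Least-squares fit of S(t) = A*sin(2*pi*f0*t + phi) + eps to a noisy
# synthetic record of 4096 points at 10 ns/point (~41 us).
import numpy as np
from scipy.optimize import curve_fit

fs = 100e6                       # 10 ns/point, as in the paper
f0 = 10e6                        # nominal carrier, Hz (assumed)
t = np.arange(4096) / fs

rng = np.random.default_rng(0)
phi_true = 0.7                   # phase to recover (test value)
data = np.sin(2 * np.pi * f0 * t + phi_true) \
       + 0.01 * rng.standard_normal(t.size)

def model(t, A, phi, eps):
    # f0 is treated as known; A, phi, eps are the fit parameters.
    return A * np.sin(2 * np.pi * f0 * t + phi) + eps

p, _ = curve_fit(model, t, data, p0=[1.0, 0.0, 0.0])
A_fit, phi_fit, eps_fit = p
```

With N = 4096 points the fitted phase lands well inside the sqrt(N)
scatter expected for this (white, Gaussian) noise model.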


You are doing a linear least squares fit over a non-linear function.
Yes, one can do that, but it is not going to be good. Your objective
function is definitely not well chosen for the task at hand. Due to the
way how LS works, the estimate quality for the amplitude will have an
effect on the estimate quality of your phase. Or in other words,
amplitude noise will result in estimation error of your phase, additional
to the phase noise you already have. I.e. you have an inherent AM noise
to PM noise conversion. You do want to change your objective function such,
that it rejects amplitude noise as much as possible. The DD and DMTD
approaches do this by the use of a phase detector. 


> The inset shows a histogram of the data which follows a Gaussian
> distribution with a standard deviation sigma of 220 fs. This is somewhat
> greater than the timing jitter specified for this ADC which is about
> ~= 140 fs. Note that during this run, which lasted for 1000 sec, the
> phase of each channel varied over the full 100 nS period of
> the inputs, while their difference had a sigma = 220 fs, a ratio of
> 3e-6


Note that the aperture jitter for the ADC is an _absolute_ 
value, which is larger than the relative aperture 

Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Magnus Danielson

Hi,

Sure, fitting is a filtering process. The least square estimation is 
really a filtering with a ramp-like response in phase and parabolic in 
frequency.


The critical aspect here is over which length the filtering occurs and 
thus the resulting bandwidth. If you only filter down white phase 
noise, this is good and the bandwidth of the filter should classically 
be mentioned.


Few people know the resulting noise bandwidth of their estimator filter.

The estimator should never overlap another sample, or it becomes, Hrm, 
problematic.


I've not had time to even download Ralph's paper, so I will have to come 
back to it after reviewing it.


Cheers,
Magnus

On 11/22/2017 05:19 PM, Bob kb8tq wrote:

Hi

The “risk” with any fitting process is that it can act as a filter. Fitting a 
single
sine wave “edge” to find a zero is not going to be much of a filter. It will not
impact 1 second ADEV much at all. Fitting every “edge” for the entire second
*will* act as a lowpass filter with a fairly low cutoff frequency. That *will* 
impact
the ADEV.

Obviously there is a compromise that gets made in a practical measurement.
As the number of samples goes up, your fit gets better. At 80us you appear
to have a pretty good dataset. Working out just what the “filtering” impact
is at shorter tau is not a simple task.

Indeed this conversation has been going on for as long as anybody has been
presenting ADEV papers. I first ran into it in the early 1970’s. It is at the 
heart
of recent work recommending a specific filtering process be used.

Bob


On Nov 22, 2017, at 10:58 AM, Ralph Devoe  wrote:

Hi time nuts,
  I've been working on a simple, low-cost, direct-digital method for
measuring the Allan variance of frequency standards. It's based on a
Digilent oscilloscope (Analog Discovery, <$300) and uses a short Python
routine to get a resolution of 3 x 10(-13) in one second. This corresponds
to a noise level of 300 fs, one or two orders of magnitude better than a
typical counter. The details are in a paper submitted to the Review of
Scientific Instruments and posted at arXiv:1711.07917 .
  The method uses least-squares fitting of a sine wave to determine the
relative phase of the signal and reference. There is no zero-crossing
detector. It only works for sine waves and doesn't compute the phase noise
spectral density. I've enclosed a screen-shot of the Python output,
recording the frequency difference of two FTS-1050a standards at 1 second
intervals. The second column gives the difference in milliHertz and one can
see that all the measurements are within about +/- 20 microHertz, or 2 x
10(-12) of each other, with a sigma much less than this.
It would be interesting to compare this approach to other direct-digital
devices.

Ralph DeVoe
KM6IYN
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread djl
Forgot to add: I think this can be proved mathematically. It's been a 
long time

Don

On 2017-11-22 12:52, djl wrote:

You have it right, Bob. Fitting is essentially a narrow-band filter
process.  Fitting thus has essentially the same errors.
Don

On 2017-11-22 09:19, Bob kb8tq wrote:

Hi

The “risk” with any fitting process is that it can act as a filter.
Fitting a single
sine wave “edge” to find a zero is not going to be much of a filter. 
It will not
impact 1 second ADEV much at all. Fitting every “edge” for the entire 
second

*will* act as a lowpass filter with a fairly low cutoff frequency.
That *will* impact
the ADEV.

Obviously there is a compromise that gets made in a practical 
measurement.
As the number of samples goes up, your fit gets better. At 80us you 
appear
to have a pretty good dataset. Working out just what the “filtering” 
impact

is at shorter tau is not a simple task.

Indeed this conversation has been going on for as long as anybody has 
been

presenting ADEV papers. I first ran into it in the early 1970’s. It is
at the heart
of recent work recommending a specific filtering process be used.

Bob


On Nov 22, 2017, at 10:58 AM, Ralph Devoe  wrote:

Hi time nuts,
 I've been working on a simple, low-cost, direct-digital method 
for

measuring the Allan variance of frequency standards. It's based on a
Digilent oscilloscope (Analog Discovery, <$300) and uses a short 
Python
routine to get a resolution of 3 x 10(-13) in one second. This 
corresponds
to a noise level of 300 fs, one or two orders of magnitude better 
than a
typical counter. The details are in a paper submitted to the Review 
of

Scientific Instruments and posted at arXiv:1711.07917 .
 The method uses least-squares fitting of a sine wave to 
determine the

relative phase of the signal and reference. There is no zero-crossing
detector. It only works for sine waves and doesn't compute the phase 
noise

spectral density. I've enclosed a screen-shot of the Python output,
recording the frequency difference of two FTS-1050a standards at 1 
second
intervals. The second column gives the difference in milliHertz and 
one can
see that all the measurements are within about +/- 20 microHertz, or 
2 x

10(-12) of each other, with a sigma much less than this.
 It would be interesting to compare this approach to other 
direct-digital

devices.

Ralph DeVoe
KM6IYN
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to 
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts

and follow the instructions there.


___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to 
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts

and follow the instructions there.


--
Dr. Don Latham
PO Box 404, Frenchtown, MT, 59834
VOX: 406-626-4304

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread djl
You have it right, Bob. Fitting is essentially a narrow-band filter 
process.  Fitting thus has essentially the same errors.

Don

On 2017-11-22 09:19, Bob kb8tq wrote:

Hi

The “risk” with any fitting process is that it can act as a filter.
Fitting a single
sine wave “edge” to find a zero is not going to be much of a filter. It 
will not
impact 1 second ADEV much at all. Fitting every “edge” for the entire 
second

*will* act as a lowpass filter with a fairly low cutoff frequency.
That *will* impact
the ADEV.

Obviously there is a compromise that gets made in a practical 
measurement.
As the number of samples goes up, your fit gets better. At 80us you 
appear
to have a pretty good dataset. Working out just what the “filtering” 
impact

is at shorter tau is not a simple task.

Indeed this conversation has been going on for as long as anybody has 
been

presenting ADEV papers. I first ran into it in the early 1970’s. It is
at the heart
of recent work recommending a specific filtering process be used.

Bob


On Nov 22, 2017, at 10:58 AM, Ralph Devoe  wrote:

Hi time nuts,
 I've been working on a simple, low-cost, direct-digital method 
for

measuring the Allan variance of frequency standards. It's based on a
Digilent oscilloscope (Analog Discovery, <$300) and uses a short 
Python
routine to get a resolution of 3 x 10(-13) in one second. This 
corresponds
to a noise level of 300 fs, one or two orders of magnitude better than 
a

typical counter. The details are in a paper submitted to the Review of
Scientific Instruments and posted at arXiv:1711.07917 .
 The method uses least-squares fitting of a sine wave to determine 
the

relative phase of the signal and reference. There is no zero-crossing
detector. It only works for sine waves and doesn't compute the phase 
noise

spectral density. I've enclosed a screen-shot of the Python output,
recording the frequency difference of two FTS-1050a standards at 1 
second
intervals. The second column gives the difference in milliHertz and 
one can
see that all the measurements are within about +/- 20 microHertz, or 2 
x

10(-12) of each other, with a sigma much less than this.
 It would be interesting to compare this approach to other 
direct-digital

devices.

Ralph DeVoe
KM6IYN
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to 
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts

and follow the instructions there.


___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to 
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts

and follow the instructions there.


--
Dr. Don Latham
PO Box 404, Frenchtown, MT, 59834
VOX: 406-626-4304

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Allan variance by sine-wave fitting

2017-11-22 Thread Bob kb8tq
Hi

The “risk” with any fitting process is that it can act as a filter. Fitting a 
single
sine wave “edge” to find a zero is not going to be much of a filter. It will not
impact 1 second ADEV much at all. Fitting every “edge” for the entire second 
*will* act as a lowpass filter with a fairly low cutoff frequency. That *will* 
impact
the ADEV.

Obviously there is a compromise that gets made in a practical measurement.
As the number of samples goes up, your fit gets better. At 80us you appear
to have a pretty good dataset. Working out just what the “filtering” impact 
is at shorter tau is not a simple task. 

Indeed this conversation has been going on for as long as anybody has been 
presenting ADEV papers. I first ran into it in the early 1970’s. It is at the 
heart 
of recent work recommending a specific filtering process be used. 

Bob

> On Nov 22, 2017, at 10:58 AM, Ralph Devoe  wrote:
> 
> Hi time nuts,
>  I've been working on a simple, low-cost, direct-digital method for
> measuring the Allan variance of frequency standards. It's based on a
> Digilent oscilloscope (Analog Discovery, <$300) and uses a short Python
> routine to get a resolution of 3 x 10(-13) in one second. This corresponds
> to a noise level of 300 fs, one or two orders of magnitude better than a
> typical counter. The details are in a paper submitted to the Review of
> Scientific Instruments and posted at arXiv:1711.07917 .
>  The method uses least-squares fitting of a sine wave to determine the
> relative phase of the signal and reference. There is no zero-crossing
> detector. It only works for sine waves and doesn't compute the phase noise
> spectral density. I've enclosed a screen-shot of the Python output,
> recording the frequency difference of two FTS-1050a standards at 1 second
> intervals. The second column gives the difference in milliHertz and one can
> see that all the measurements are within about +/- 20 microHertz, or 2 x
> 10(-12) of each other, with a sigma much less than this.
>  It would be interesting to compare this approach to other direct-digital
> devices.
> 
> Ralph DeVoe
> KM6IYN
> ___
> time-nuts mailing list -- time-nuts@febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.