Folks,
 
I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia:
 
An Allan deviation of 1.3×10^-9 at observation time 1 s (i.e. τ = 1 s) should be 
interpreted as there being an instability in frequency between two observations 
a second apart with a relative root mean square (RMS) value of 1.3×10^-9.
 
Does this mean the observations made were at the very beginning and the very end 
of the 1 second time?
If so, what about all the values in between?  What happens if the 
oscillator deviated far worse than this during the interim?
 
 
Or does the measurement consist of making measurements every cycle during that 
1 second and then entering all those values into a formula that accounts for 
them all?
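 
From what I can make of the Wikipedia article, the formula in question seems to 
be the Allan variance, which averages the squared differences between 
*consecutive* frequency averages (each average taken over one interval τ). Here 
is my own rough sketch of that reading, not an authoritative implementation 
(the function name is mine):

```python
import math

def allan_deviation(freq_averages):
    """Allan deviation from consecutive fractional-frequency averages,
    each taken over the same interval tau (e.g. tau = 1 s).

    Allan variance: (1 / (2 * (M - 1))) * sum of (y[k+1] - y[k])^2,
    where the y[k] are the M frequency averages."""
    diffs = [(b - a) ** 2 for a, b in zip(freq_averages, freq_averages[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# A perfectly stable oscillator gives zero deviation:
print(allan_deviation([1.0, 1.0, 1.0, 1.0]))  # -> 0.0
```

So, if I read it right, each input value is already an average over one τ, and 
the statistic only looks at how those averages differ from one interval to the 
next.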
 
Maybe a very basic tutorial on this topic would help, but I can't find one.

 
Signed, very confused.
Thank You
 
PauLC
W1VLF
_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
