[time-nuts] Re: 32.768 kHz Crystal Trimming

2022-05-04 Thread Dan Kemppainen

Rick,

Very delayed response...

So, your thought/comment about tempco stuck in the back of my head. This 
little board has progressed to the point where it can measure its own 
RTCC oscillator against an external 10 MHz reference.


Out of curiosity, a very simple oven was slapped together (leftover 
packaging Styrofoam, a power resistor, an aluminum block, masking tape, 
etc.) and the board was cycled up and down in temperature a few times. 
Temp was measured with an onboard temp sensor.


Anyway, assuming the sign convention is correct, here's a quick plot of 
ppm error vs. temperature. It swings about two ppm over the temperature 
range, with a turnover just under 30C. A second order polynomial fit 
appears to do the job nicely (red line).
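The second-order fit Dan describes can be reproduced along these lines (a hypothetical Python/numpy sketch; the temperature/ppm values are made-up placeholders, not Dan's measurements):

```python
import numpy as np

# Illustrative placeholder data, NOT the measured data from the board:
# ppm error vs. temperature for a tuning-fork crystal near its turnover.
temp_c = np.array([20.0, 24.0, 28.0, 32.0, 36.0, 40.0])
ppm_err = np.array([-1.2, -0.4, 0.0, -0.1, -0.7, -1.8])

coeffs = np.polyfit(temp_c, ppm_err, 2)      # fit a*T^2 + b*T + c
turnover = -coeffs[1] / (2.0 * coeffs[0])    # vertex: d(ppm)/dT = 0
print("turnover temperature: %.1f C" % turnover)
```

With a concave-down parabola (negative leading coefficient), the vertex gives the turnover temperature directly.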


Nothing of particular note in this data, and it's probably old hat to many 
here, but still a fun little side experiment.


Dan



On 4/2/2022 3:27 AM, time-nuts-requ...@lists.febo.com wrote:

Subject:
[time-nuts] Re: 32.768 kHz Crystal Trimming
From:
"Richard (Rick) Karlquist" 
Date:
4/1/2022, 12:34 PM


No one mentioned tempco, so I will.  Ideally you should do your
calibration at a temperature corresponding to the long term
average in your workshop.  If the crystal is in a piece of
equipment with a temperature rise, it should be accounted for,
and then going forward you have to leave the equipment powered
up 24/7.  The crystal is probably a tuning fork, meaning it
won't be AT cut.  It may have a substantial tempco around
room temp.  In which case that old time-nuts insult may apply:

"congratulations, nice thermometer."

Rick N6RK

BTW, I go back 48 years with crystals.
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-04 Thread Bob kb8tq
Hi

The most basic is the “phase pop” that is not modeled by any of the 
normal noise formulas. The further you dig in, the more you find things
that the models really don’t cover. 

Bob

> On May 4, 2022, at 11:50 AM, Attila Kinali  wrote:
> 
> Hoi Bob,
> 
> On Tue, 3 May 2022 16:23:27 -0500
> Bob kb8tq  wrote:
> 
>> The gotcha is that there are a number of very normal OCXO “behaviors” that 
>> are not
>> covered by any of the standard statistical models. 
> 
> Could you elaborate a bit on what these "normal behaviours" are?
> 
>   Attila Kinali
> 
> -- 
> The driving force behind research is the question: "Why?"
> There are things we don't understand and things we always 
> wonder about. And that's why we do research.
>   -- Kobayashi Makoto

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-04 Thread Hal Murray


att...@kinali.ch said:
> FFT based systems take a white, normal distributed noise source, Fourier
> transform it, filter it in frequency domain and transform it back. Runtime is
> dominated by the FFT and thus O(n*log(n)). There was a nice paper by either
> Barnes or Greenhall (or both?) on this, which I seem currently unable to
> find. This is also the method employed by the bruiteur tool from sigma-theta.

> Biggest disadvantage of this method is, that it operates on the whole sample
> length multiple times. I.e it becomes slow very quickly, especially when the
> whole sample length is larger than main memory. But they deliver exact
> results with exactly the spectrum / time-correlation you want. 

What sort of times and memory are interesting?

You can rent a cloud server with a few hundred gigabytes of memory for a few 
$/hour.




-- 
These are my opinions.  I hate spam.




[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-04 Thread Matthias Welwarsky
Magnus, Attila, Bob,

thanks again for the inspirational posts, truly appreciated.

However, I'm looking for something reasonably simple just for the purpose of 
GPSDO simulation. Here, most of the finer details of noise are not very 
relevant. I don't really care for PSD, for example. What I'm looking for is a 
tool that can produce a phase vector that just resembles what a real 
oscillator is doing, looking from afar, with a little squinting. For example:

synth_osc(N, -1e-8, 2.5e-11, 2e-11, 0, 0);

This gives me a vector that, as far as Allan deviation is concerned, looks 
remarkably like an LPRO-101. With some other parameters I can produce a 
credible resemblance to a PRS10. Add a bit of temperature wiggle and it's 
enough to run it through the simulator and tune parameters. The finer details 
are anyway completely lost on a GPSDO. Reactions to transients, especially from 
GPS, are much more interesting, which is why the logical next step is to 
produce a GPS (or GNSS) phase vector that can be parametrized and spiked with 
some oddities to see how different control loop parameters influence the 
output. But for that I don't have an immediate need, the GPS data files on 
Leapsecond.com are enough for now.
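A rough Python/numpy equivalent of the synth_osc() idea described above (a sketch, not Matthias's actual code; the hand-rolled one-pole low-pass is a crude stand-in for the Butterworth filter in the Octave original, and all parameter values are illustrative):

```python
import numpy as np

def lowpass1(x, alpha=0.25):
    """First-order IIR low-pass, a crude 1/f noise shaper stand-in."""
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

def synth_osc(samples, da, wpn, wfn, fpn, ffn, seed=0):
    """Phase vector = quadratic aging + white PM + white FM
    + approximate flicker PM + approximate flicker FM."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, samples + 1)
    phase = ((t / 86400.0) ** 2) * da                       # aging
    phase += rng.standard_normal(samples) * wpn             # white phase noise
    phase += np.cumsum(rng.standard_normal(samples)) * wfn  # white freq noise
    phase += lowpass1(rng.standard_normal(samples)) * fpn   # ~1/f phase noise
    phase += np.cumsum(lowpass1(rng.standard_normal(samples))) * ffn  # ~1/f freq
    return phase

osc = synth_osc(40000, -1e-8, 2.5e-11, 2e-11, 0.0, 0.0)
```

Feeding such a vector into an ADEV tool is the squint-from-afar comparison Matthias describes.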

Regards,
Matthias

On Dienstag, 3. Mai 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:
> Dear Matthias,
> 
> On 2022-05-03 10:57, Matthias Welwarsky wrote:
> > Dear all,
> > 
> > thanks for your kind comments, corrections and suggestions. Please forgive
> > if I don't reply to all of your comments individually. Summary response
> > follows:
> > 
> > Attila - yes, I realize temperature dependence is one key parameter. I
> > model this meanwhile as a frequency shift over time.
> > 
> > Bob - I agree in principle, real world data is a good reality check for
> > any
> > model, but there are only so few datasets available and most of the time
> > they don't contain associated environmental data. You get a mix of
> > effects without any chance to isolate them.
> 
> Environmental effects tend to be recognizable by their periodic
> behavior, i.e. the period of the day and the period of the heating/AC. Real
> oscillator data tends to be quite relevant as you can simulate what it
> would mean to lock that oscillator up. TvB made a simulator on those
> grounds. Good exercise.
> 
> > Magnus, Jim - thanks a lot. Your post encouraged me to look especially
> > into
> > flicker noise and how to generate it in the time domain. I now use randn()
> > and a low-pass filter. Also, I think I understood now how to create phase
> > vs frequency noise.
> 
> Happy to get you up to speed on that.
> 
> One particular name to check out articles for is Charles "Chuck"
> Greenhall, JPL.
> 
> For early work, also look att James "Jim" Barnes, NBS (later named NIST).
> 
> Both of these fine gentlemen are worth reading on almost anything they
> write on the topic.
> 
> > I've some Timelab screenshots attached, ADEV and frequency plot of a data
> > set I generated with the following matlab function, plus some temperature
> > response modeled outside of this function.
> > 
> > function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
> > 
> > # low-pass butterworth filter for 1/f noise generator
> > [b,a] = butter(1, 0.1);
> 
> Notice that 1/f is power-spectrum density, straight filter will give you
> 1/f^2 in power-spectrum, just as an integration slope.
> 
> One approach to a flicker filter is an IIR filter with a weighting of
> 1/sqrt(n+1) where n is tap index, and feed it normal noise. You need to
> "flush out" state before you use it so you have a long history to help
> shaping. For a 1024 sample series, I do 2048 samples and only use the
> last 1024. Efficient? No. Quick-and-dirty? Yes.
> 
> The pole/zero type of filters of Barnes let you synthesize a 1/f slope
> by balancing the properties. How dense and thus how small ripples you
> get, you decide. Greenhall made the point of recording the state, and
> provides BASIC code that calculate the state rather than run an infinite
> sequence to let the initial state converge to the 1/f state.
> 
> Greenhall published an article illustrating a whole range of methods to
> do it. He wrote the simulation code to be used in JPL for their clock
> development.
> 
> Flicker noise is indeed picky.
> 
> Cheers,
> Magnus
> 
> > # aging
> > phase = (((1:samples)/86400).^2)*da;
> > # white phase noise
> > phase += (randn(1, samples))*wpn;
> > # white frequency noise
> > phase += cumsum(randn(1, samples))*wfn;
> > # 1/f phase noise
> > phase += filter(b,a,randn(1,samples))*fpn;
> > # 1/f frequency noise
> > phase += cumsum(filter(b,a,randn(1,samples))*ffn);
> > 
> > end
> > 
> > osc = synth_osc(40, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);
> > 
> > Thanks.
> > 
> > On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:
> >> Dear all,
> >> 
> >> I'm trying to come up with a reasonably simple model for an OCXO that I
> >> can
> >> 

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-04 Thread Attila Kinali
Hoi Bob,

On Tue, 3 May 2022 16:23:27 -0500
Bob kb8tq  wrote:

> The gotcha is that there are a number of very normal OCXO “behaviors” that 
> are not
> covered by any of the standard statistical models. 

Could you elaborate a bit on what these "normal behaviours" are?

Attila Kinali

-- 
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always 
wonder about. And that's why we do research.
-- Kobayashi Makoto

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-04 Thread Attila Kinali
On Tue, 3 May 2022 08:06:22 -0700
"Lux, Jim"  wrote:

> There's some papers out there (mentioned on the list in the past) about 
> synthesizing colored noise. Taking "White" noise and running it through 
> a filter is one approach. Another is doing an inverse FFT, but that has 
> the issue of needing to know how many samples you need. Although I 
> suppose one could do some sort of continuous overlapping window scheme 
> (which, when it comes right down to it, is just another way of filtering).
> 
> https://dl.acm.org/doi/abs/10.1145/368273.368574


This one does not work with the noises relevant to oscillators.
First of all, 1/f^a noise is not stationary: the conditional
expectation E[X(t)|X(t_0)], knowing the value X(t_0) at time t_0,
is E[X(t)|X(t_0)] = X(t_0). I.e., it is neither the mean of X(t)
nor its value X(0) at some start time. Also, the ensemble
variance grows over time (IIRC with t^a).
Yes, this also means that 1/f^a noise is not ergodic and thus
most of what you learned in statistics classes does not apply!
You have been warned! ;-)

Second, 1/f^a noise has singularities in its auto-covariance
for a= 2*n + 1 (n=0,1,2,...). Most notably for a=1 and a=3.
See Mandelbrot and Van Ness' work [1] on this.
 
Over the years, I've tried a few methods of generating noise
for oscillator simulation, but only two had any practical
value (the others had flaws that made them sub-optimal):
1) FFT based systems 
2) Filter based systems

FFT based systems take a white, normal distributed noise source,
Fourier transform it, filter it in frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I seem currently unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.

Biggest disadvantage of this method is that it operates on the
whole sample length multiple times. I.e., it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But it delivers exact results with exactly
the spectrum / time-correlation you want.
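The FFT-based method described above can be sketched as follows (a minimal Python/numpy illustration, not the bruiteur implementation; the white spectrum is shaped by f^(-a/2) in amplitude so the power spectrum goes as 1/f^a):

```python
import numpy as np

def fft_colored_noise(n, alpha, seed=0):
    """Generate 1/f^alpha noise by shaping white Gaussian noise
    in the frequency domain and transforming back."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid division by zero at DC
    spectrum *= f ** (-alpha / 2.0)  # amplitude ~ f^(-a/2) -> PSD ~ f^(-a)
    return np.fft.irfft(spectrum, n)

y = fft_colored_noise(4096, 1.0)     # flicker (1/f) noise
```

As noted above, the whole sample vector is touched several times, so memory becomes the limit for long runs.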

Filter based approaches use an approximation with linear
filters to get the same effect. They can achieve O(n*log(n)) as
well, with O(log(n)) memory consumption if done right [2]
(based on [3], which explains the filter in simpler terms).
Be aware that it's only an approximation and that the spectrum
will have some wiggles. Though this shouldn't be a problem
in practice. Speed is, in my experience, slightly better
than FFT based approaches, as less memory is touched. But
it uses quite a bit more randomness and thus the random
number generator becomes the bottleneck. But I also have to
admit, the implementation I had for comparison was far from
well coded, so take this with a grain of salt.

There is also the way of doing fractional integration as
has been proposed by Barnes and Allan in [4]. Unfortunately,
the naive implementation of this approach leads to an O(n^2)
runtime and O(n) memory consumption. There are faster
algorithms to do fractional integration (e.g. [5]) and
I have a few ideas of my own, but I haven't had time to try
any of them yet.

And while we are at it, another warning: rand(3), and by extension
all random number generators based on it, has a rather small
number of states. IIRC the one implemented in glibc has only 31
bits of state. While two billion states sounds like a lot, this
isn't that much for simulation of noise in oscillators. Even
algorithms that are efficient in terms of randomness consume
tens of bytes of random numbers per sample produced. If a
normal distribution is approximated by averaging samples, that
goes quickly into the hundreds of bytes. And suddenly, the
random number generator wraps around after just a few million
samples. Even fewer in a filter based approach.

Thus, I would highly recommend using a PRNG with a large state,
like xoroshiro1024* [6]. That the algorithm
does not pass all randomness tests with a perfect score isn't as
much of an issue in this application, as long as the random
numbers are sufficiently uncorrelated (which they are).
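As an illustration of both points (large PRNG state and an efficient Gaussian sampler), a hypothetical Python/numpy sketch; numpy's default Generator is backed by the PCG64 bit generator (period 2^128) and its standard_normal() uses a ziggurat-style method rather than averaging uniforms:

```python
import numpy as np

# PCG64 has a 2^128 period, vastly more state than glibc rand()'s
# 31 bits, so a long noise simulation will not wrap around the
# generator's sequence; standard_normal() draws Gaussians directly
# (ziggurat-style) instead of averaging uniform samples.
rng = np.random.default_rng(12345)
samples = rng.standard_normal(1_000_000)   # one million Gaussians
print("mean=%.4f std=%.4f" % (samples.mean(), samples.std()))
```

A million-sample run like this would already exhaust a 31-bit-state generator several times over if each Gaussian consumed tens of bytes of randomness.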

I also recommend using an efficient Gaussian random number
generator instead of averaging. Not only does averaging
create a maximum and minimum value based on the number of
samples averaged over, it is also very slow because it uses
a lot of random numbers. A better approach is to use the
Ziggurat algorithm [7], which uses only about 72-80 bits
of entropy per generated sample.

And before you ask, yes sigma-theta/bruiteur uses xoroshiro1024*
and the Ziggurat algorithm ;-)

Attila Kinali

[1] Fractional Brownian Motions, Fractional Noises and Applications,
by Mandelbrot and Van Ness, 1968
http://users.math.yale.edu/~bbm3/web_pdfs/052fractionalBrownianMotions.pdf

[2] Efficient Generation of 1/f^a Noise Sequences for Pulsed Radar
Simulation, by Brooker and Inggs, 2010