Tobias,

On 2022-04-09 18:13, Pluess, Tobias via time-nuts wrote:
Hi all,

My reply to this topic is a bit late because I have been busy with other
topics in the meantime.

To Erik Kaashoek,
You mentioned my prefilter. You are absolutely right; I looked again at my
prefilter code and decided it was garbage, so I have removed the prefilter.
Thanks to your hint, I also found another mistake in my PI controller code,
which led to wrong integration times being used. I corrected that and the
controller is now even more stable. I ran some tests and the DAC output
changes less than before, so I guess the stability is even better!

To all others.
I discussed the topic of improving my GPSDO control loop with a colleague
off-list, and he pointed out that there was a post about Kalman filters on
this list a while ago.
I had totally forgotten about this; I looked at them at university a couple
of years ago, but never used them and therefore forgot most of it. But I
think it would be interesting to implement the Kalman filter and compare
its performance with the PI controller I currently have.

I guess the first step, before thinking about any Kalman filters, is to find
the state-space model of the system. I am familiar with state space, but one
topic I always struggled with a bit is the question of which variables I
should take into account for my state. In the case of the GPSDO, all I can
observe is the phase difference between the locally generated 1PPS and the
1PPS that comes from the GPS module. On the other hand, I am not only
interested in the phase but also in the frequency, or, probably, in the
frequency error, so I think my state vector needs to use the phase
difference (in seconds) and the frequency error (?). So my attempt at a
GPSDO state-space model is:

phi[k+1] = phi[k] - T * Delta_f[k]
Delta_f[k+1] = Delta_f[k] + K_VCO * u[k]

y[k] = phi[k]

I see you already got some help, but hopefully this adds some different angles.

A trivial state-space Kalman filter would have phase and frequency as its states. Assuming you can estimate the phase and frequency noise of both the incoming signal and the steered oscillator, it's a trivial exercise. Familiarizing yourself with Kalman filters is a worthwhile student exercise; they can do fairly well in various applications.
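To make that two-state case concrete, here is a minimal sketch in Python/NumPy. The update interval and the covariances Q and R are illustrative assumptions, not measured noise figures, and I write phi[k+1] = phi[k] + T*Delta_f[k]; flip signs to match your own convention.

```python
import numpy as np

# Two-state Kalman sketch: x = [phase error (s), frequency error (s/s)].
# T, Q, R below are illustrative assumptions, not measured noise figures.
T = 1.0                          # one PPS comparison per second
F = np.array([[1.0, T],
              [0.0, 1.0]])       # phi[k+1] = phi[k] + T*Delta_f[k]
H = np.array([[1.0, 0.0]])       # we only observe the phase difference
Q = np.diag([1e-18, 1e-22])      # assumed white PM / white FM process noise
R = np.array([[1e-16]])          # variance of the PPS time-interval reading

def kalman_step(x, P, z):
    """One predict/update cycle for one phase-difference reading z (seconds)."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R          # innovation variance
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding it one phase-difference reading per PPS gives both a smoothed phase and a frequency-error estimate; the Q/R ratio is what sets how aggressively it tracks.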

This simple system ends up being a PI loop that self-steers its bandwidth. Care needs to be taken to ensure that the damping constant is right. The longer one measures, the narrower the filter's bandwidth becomes as it weighs in the next sample, and the next.
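One way to see this narrowing is to iterate only the covariance/gain recursion of a two-state phase/frequency filter; the phase gain decays toward its steady-state value as samples accumulate. All matrices here are illustrative assumptions.

```python
import numpy as np

# Iterate only the covariance/gain recursion of a two-state phase/frequency
# Kalman filter (no measurements needed). Matrices are illustrative
# assumptions. The phase gain starts near 1 (wide bandwidth) and shrinks
# toward its steady-state value: the longer the filter runs, the narrower
# the effective bandwidth.
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-18, 1e-22])
R = np.array([[1e-16]])

P = np.eye(2) * 1e-12            # wide prior: trust the first samples a lot
gains = []
for _ in range(500):
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T / S[0, 0]
    P = (np.eye(2) - K @ H) @ P
    gains.append(K[0, 0])        # phase gain: ~1 at start, small once converged
```

The process noise Q is what eventually floors the gain; with Q = 0 the gain would keep shrinking like a growing-memory average.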

However, we know from other sources that we not only have white phase and white frequency noise in the sources, both the reference and the steered oscillator; we also have flicker noise. That won't fit very well into the model world of a normal Kalman filter. If we know the tau of a filter, we can approximate the values from TDEV and ADEV plots, but the Kalman filter will self-tune its time constant to optimize its knowledge of the state, thus sweeping over the tau plots. We can make approximations to where we expect it to end up and accept some suffering along a somewhat suboptimal path to that balance point.

So, you can go the Kalman path, or choose pseudo-Kalman and have separate heuristics that do coarser tuning of the bandwidth of a straight PI loop. Pick and choose what fits you best; it can vary from application to application.
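As a sketch of the pseudo-Kalman route: a plain PI loop whose time constant is widened by a coarse heuristic once the loop looks locked. All thresholds and constants here are illustrative assumptions, not tuned values.

```python
# Pseudo-Kalman sketch: a straight PI loop plus a coarse heuristic that
# widens the loop time constant once lock is detected. Thresholds and
# constants are illustrative assumptions, not tuned values.

def pi_gains(tau, damping=3.0, T=1.0):
    """PI gains for time constant tau (s); damping kept high to avoid an ADEV bump."""
    wn = 1.0 / tau
    return 2.0 * damping * wn * T, (wn * T) ** 2   # kp, ki

class PseudoKalmanPI:
    def __init__(self):
        self.tau = 30.0          # start fast for acquisition
        self.integ = 0.0         # integrator state (fractional frequency)
        self.locked = 0

    def step(self, phase_err):
        # heuristic lock detector: 100 consecutive small errors -> slow down
        self.locked = self.locked + 1 if abs(phase_err) < 50e-9 else 0
        if self.locked >= 100 and self.tau < 1000.0:
            self.tau *= 2.0      # narrow the bandwidth, keep damping fixed
            self.locked = 0
        kp, ki = pi_gains(self.tau)
        self.integ += ki * phase_err
        return kp * phase_err + self.integ   # frequency correction to apply
```

Because the integrator carries the learned frequency offset, retuning the gains while the phase error is small causes only a small transient.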

Keep damping high or else the Q will cause an ADEV bump.

Increasing the degree to include linear drift has the benefit of tracking frequency drift. Regardless of whether this is done as a PII^2 controller or a 3-state Kalman filter, care should be taken to ensure stability, as it is not guaranteed that the root loci stay on the stable path. Worth doing, though.
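A quick way to check that for given gains: form the closed-loop state matrix of a PII^2 controller around a simple phase-integrator plant and verify that all eigenvalues lie inside the unit circle. The plant model and the gain values below are illustrative assumptions.

```python
import numpy as np

def pii2_closedloop(kp, ki, kii, T=1.0):
    """Closed-loop matrix for the plant phi[k+1] = phi[k] - T*u[k] with
    u = kp*phi + ki*sum(phi) + kii*sum(sum(phi)) (a PII^2 controller)."""
    return np.array([
        [1.0 - T * kp, -T * ki, -T * kii],  # phase
        [1.0,           1.0,     0.0],      # running sum of phase error
        [0.0,           1.0,     1.0],      # double sum (drift integrator)
    ])

def is_stable(A):
    """Discrete-time stability: all eigenvalues strictly inside the unit circle."""
    return np.max(np.abs(np.linalg.eigvals(A))) < 1.0
```

With, e.g., kp = 0.5, ki = 0.01, kii = 0.0001 the loop is stable, while raising kii to 0.05 pushes a complex pole pair outside the unit circle, which is exactly the root-locus excursion to watch for.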

I seem to recall a few papers in the NIST T&F archive as well as in PTTI archive on Kalman filter models for oscillator locking. The HP "SmartClock" paper should also be a good read.

A downside of the Kalman filter is numerical precision: as core parameters are rescaled, there is a certain amount of precision loss. I see the same problem occurring in more or less all least-squares and linear-regression approaches I've seen. I have been working on a different approach to least-squares estimation, and it turns out to be beneficial on the numerical-precision side too: the accumulation part can be made lossless (by throwing bits at it) and only the estimation phase has issues, but since the estimation equations do not alter the core accumulation, they do not pollute that state. It turns out there are multiple ways of doing the same thing, and the classical schoolbook approach is not always the wisest.
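To illustrate that accumulation/estimation split (this is just the general idea, not necessarily the exact method alluded to above): keep the regression sums as exact integers, and only divide in the estimation step, which never writes back into the accumulated state.

```python
from fractions import Fraction

# Lossless-accumulation sketch for a linear fit. Timestamps/readings are
# assumed to be integer ticks (e.g. nanoseconds), so Python's arbitrary-
# precision ints make the accumulation exact; rounding can only happen in
# the estimation step, which leaves the sums untouched.

class ExactLinearFit:
    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0  # exact integer sums

    def add(self, x, y):
        # accumulation: exact, no precision loss regardless of magnitude
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def estimate(self):
        # only here do we divide; the accumulated state is not modified
        d = Fraction(self.n * self.sxx - self.sx * self.sx)
        slope = Fraction(self.n * self.sxy - self.sx * self.sy) / d
        intercept = (Fraction(self.sy) - slope * self.sx) / self.n
        return slope, intercept
```

In a fixed-point embedded implementation the same trick amounts to making the accumulators wide enough that they cannot overflow or round.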

Some time back there was a reference to papers about verifying performance by monitoring the noise out of the phase detector. It is applicable to a number of locked PLLs/FLLs; I found it an interesting read. I've also watched that state stabilize over many hours in the lab. Be aware of oscillations there. Another aspect is resolution: the quantization noise introduced as rounding is done tends to form an oscillator of noise, depending on the loop's parameters. This was a hot topic in the '70s and there are neat graphs in the classical books. Keep an eye out for it, and for digital state, throw lots of extra bits at it, as bits are cheap today. As long as it is way below the noise floor, it is not the dominant noise source in the system.
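A toy illustration of the rounding effect, with illustrative numbers: naively rounding a constant control word to a coarse DAC LSB gets the mean wrong (and inside a loop can sustain a limit cycle), while carrying the rounding error in extra state bits (error feedback) preserves the mean.

```python
# Quantization sketch. quantize_plain rounds each control word to the DAC
# LSB and discards the remainder; quantize_error_feedback carries the
# remainder forward ("extra bits of state"), so the long-run mean of the
# DAC output matches the requested value.

def quantize_plain(samples, lsb):
    return [round(s / lsb) * lsb for s in samples]

def quantize_error_feedback(samples, lsb):
    out, err = [], 0.0
    for s in samples:
        v = round((s + err) / lsb) * lsb  # round the value plus carried error
        err += s - v                      # keep what the DAC could not express
        out.append(v)
    return out
```

Requesting a constant 0.3 LSB with plain rounding yields a mean of 0, while the error-feedback version dithers between codes so its mean converges on 0.3 LSB; the residual appears as high-frequency noise instead of a DC error or a slow tone.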

Cheers,
Magnus
_______________________________________________
time-nuts mailing list -- [email protected] -- To unsubscribe send an 
email to [email protected]