Hi Tobias,
Using the excellent material found here: https://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb I went ahead and implemented the first-order filter as described in chapter 8. The simulation was changed to use a real recorded PPS as measurement noise and a real recorded TIC of a TCXO as the data to be estimated, with a Vtune input steering an added frequency.
The STDEV of the PPS was 5.6e-9
The Kalman filter predicts the phase and frequency fairly well:
STDERR of phase is 4e-9
STDERR of freq is 1.75e-9
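For the record, the filter core is essentially the textbook two-state (phase, frequency) model from that chapter. A minimal sketch along those lines, using filterpy from the same book (the readings array, Q and initial P values are placeholders, not my actual data or tuning):

import numpy as np
from filterpy.kalman import KalmanFilter
from filterpy.common import Q_discrete_white_noise

# Two-state (phase, frequency) filter, 1 s update rate.
kf = KalmanFilter(dim_x=2, dim_z=1)
kf.x = np.array([[0.0], [0.0]])            # initial phase and frequency
kf.F = np.array([[1.0, 1.0],               # phase += frequency * 1 s
                 [0.0, 1.0]])
kf.H = np.array([[1.0, 0.0]])              # only the phase is measured
kf.R *= (5.6e-9) ** 2                      # PPS measurement variance (stdev 5.6e-9)
kf.Q = Q_discrete_white_noise(dim=2, dt=1.0, var=1e-20)   # placeholder tuning
kf.P *= (1e-8) ** 2                        # loose initial uncertainty

tic_readings = 5.6e-9 * np.random.randn(1000)   # stand-in for the recorded TIC + PPS data
estimates = []
for z in tic_readings:
    kf.predict()
    kf.update(z)
    estimates.append(kf.x.copy())          # [phase, frequency] estimate per second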

The loop was then closed with a PI controller tasked to minimize the phase error.
Tuning was a bit of a problem, but the current performance is:
STDEV phase error 1.25e-8
STDEV frequency error 1.2e-9
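For reference, the loop closure is nothing more exotic than this kind of structure (simplified sketch; the gains are illustrative, not my actual tuning):

class PI:
    """Minimal discrete PI controller, 1 s update interval."""
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, error):
        self.integral += error
        return self.kp * error + self.ki * self.integral

pi = PI(kp=0.05, ki=0.005)       # illustrative gains only
# every second:
#   phase_est, freq_est = Kalman filter output for the latest TIC reading
#   vtune -= pi.step(phase_est)  # steer Vtune so the estimated phase error goes to zero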

Unfortunately, the resulting ADEV is still worse than that of a well-tuned PI controller acting directly on the TIC of the TCXO.
More work to do.
Erik.

On 21-4-2022 9:44, Pluess, Tobias via time-nuts wrote:
Hi Erik,

yes I also found this webpage, and the examples are good.
I had the exact same idea as you, except I thought that the model for the
VCO output is either phase only or phase and frequency. Yours is even a bit
more sophisticated, that's quite cool!
And yes, of course it is possible to have a one-dimensional filter,
even if your state-space equations have more than one state. However, in
this case, A, B and C are a matrix and two vectors (sometimes also called
F, G and H). But the output that is measured ("observed") is just scalar,
i.e. 1D.
I am not sure whether your state extrapolation is correct; I don't see it
at the moment, unfortunately. But I will think about it a bit more.

To your question about how to close the loop.
Well, what do you do when you don't have the alpha-beta-gamma filter?
You take your phase error readings and feed them into some sort of
controller, for example a PID controller, don't you?
Now think of the alpha-beta-gamma filter (or the Kalman filter) as
some sort of "prefilter". (Do you remember when you asked me why I
prefilter my 1PPS measurements? :-D At that time, I had a 2-pole IIR filter
with adjustable time constant as prefilter.) So you feed your actual 1PPS
measurements into the alpha-beta-gamma filter, and the filter outputs the
1PPS measurements, but with the noise mostly removed (hopefully!). Then use
this as the error signal for your controller.
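In code the chain is roughly this (a sketch only; all four names are placeholders for your own pieces):

def control_step(read_tic, prefilter, controller, write_dac):
    """One second of the loop: raw 1PPS reading -> prefilter -> controller -> DAC.
    read_tic, prefilter, controller and write_dac stand for your own implementations."""
    raw_phase = read_tic()                     # noisy 1PPS vs. VCO phase measurement
    est_phase = prefilter.update(raw_phase)    # alpha-beta-gamma (or Kalman) output
    correction = controller.step(est_phase)    # PID acts on the smoothed error signal
    write_dac(correction)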

As for how to choose the alpha, beta and gamma coefficients, you need to know
the variance of your input signal, i.e. determine the standard deviation of
the 1PPS readings and square that. I am not sure at the moment how to find
the coefficients from the variance, but I believe there is an equation for it
somewhere in the kalmanfilter.net tutorial, if I remember correctly.
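One practical alternative to the closed-form equation is to iterate the Kalman covariance recursion until the gain settles; the converged gain components are the coefficients. A rough numpy sketch (the process-noise value q is a guess that would need tuning):

import numpy as np

def steady_state_gains(R, q, dt=1.0, iters=1000):
    """Iterate predict/update of the covariance for a phase/freq/drift model
    until the Kalman gain settles; the converged gain gives the coefficients."""
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])            # only phase is measured
    Q = q * np.eye(3)                          # crude process-noise model, illustration only
    P = np.eye(3)
    K = np.zeros((3, 1))
    for _ in range(iters):
        P = F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + R                    # innovation variance (1x1)
        K = P @ H.T / S                        # Kalman gain (3x1)
        P = (np.eye(3) - K @ H) @ P            # update
    return K.ravel()

# with dt = 1: alpha = K[0], beta = K[1]; for Erik's drift update
# (which divides the residual by 0.5) gamma = K[2] / 2
K = steady_state_gains(R=(5.6e-9) ** 2, q=1e-22)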
Also worth mentioning is the fact that some GPS modules also output an
estimate of the timing accuracy. This can also be used as an additional
input to calculate the coefficients and adapt them, as receiving conditions
change.
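Concretely, if your receiver reports its timing accuracy in nanoseconds each second (u-blox modules do, for instance; the field name and units depend on the module), that directly gives the measurement variance R:

def measurement_variance(timing_accuracy_ns: float) -> float:
    """Receiver-reported timing accuracy (in ns) -> measurement variance R (in s^2)."""
    return (timing_accuracy_ns * 1e-9) ** 2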

Does that help a bit?
I am unfortunately not yet as far along as you; I have made some calculations
using Matlab, but I am not sure whether they are correct. I do see some
filtering effect of the Kalman filter, though. (I used the Kalman filter
with the error covariance matrices, not the alpha-beta-gamma filter like
you did, but maybe this is a better idea!) Are you interested in sharing
your Excel sheet?

thanks,
best
Tobias
HB9FSX




On Wed, Apr 20, 2022 at 1:18 AM Erik Kaashoek <e...@kaashoek.com> wrote:

Tobias,
Your Kalman post triggered me to study this "dummy" level introduction
to Kalman filters: https://www.kalmanfilter.net/
and now I'm trying to write the Kalman filter.
An example of a Kalman filter described in the link above uses distance,
speed and acceleration as predictors for a one-dimensional Kalman filter.
So I wondered if it would be possible to do the same for the phase, e.g.
use phase, frequency and drift in the prediction model and still have a
one-dimensional model?
The Kalman filter thus uses the normalized phase as input (the measured
time difference between the 1PPS and the 1-second pulse derived from
the VCO output) to predict the next phase, frequency and drift:
p = phase
f = frequency (d Phase/d T)
d = drift (d Frequency / d T)
m = measured phase
with a 1 second interval between observations the state extrapolation
(or prediction) is
p[k+1]  = p[k] + f[k] + 0.5*d[k]
f[k+1] = f[k] + d[k]
d[k+1] = d[k]
and the state update is
p[k] = p[k-1] + alpha*(m - p[k-1])
f[k] = f[k-1] + beta*(m - p[k-1])
d[k] = d[k-1] + gamma*(m - p[k-1])/0.5
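In Python the same recursion reads as follows (a direct transcription of the equations above with a 1 s interval; the update residual is taken against the extrapolated phase):

def ghk_step(p, f, d, m, alpha, beta, gamma):
    """One cycle of the phase/frequency/drift filter with 1 s between observations.
    (p, f, d) is the previous state estimate, m is the new measured phase."""
    # state extrapolation (prediction)
    p_pred = p + f + 0.5 * d
    f_pred = f + d
    d_pred = d
    # state update
    res = m - p_pred
    p = p_pred + alpha * res
    f = f_pred + beta * res
    d = d_pred + gamma * res / 0.5
    return p, f, d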

In a small simulation (in Excel) I used a VCO that initially is on the
correct frequency; after some seconds Vtune has a jump, and after some
more seconds Vtune starts to drift.
The Vtune of the VCO can have noise (system error) and the measurement m
can have noise (measurement error).
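For reference, a test signal with the same shape (on frequency at first, then a Vtune jump, then drift, plus noise) is easy to generate outside Excel as well; all numbers below are arbitrary illustration values, not the ones from my sheet:

import numpy as np

rng = np.random.default_rng(1)
N = 600                                     # seconds of simulation
freq = np.zeros(N)                          # frequency offset caused by Vtune
freq[200:] += 5e-9                          # sudden Vtune jump after 200 s
freq[400:] += 1e-11 * np.arange(N - 400)    # Vtune drift starting at 400 s
freq += 1e-10 * rng.standard_normal(N)      # system (Vtune) noise
phase = np.cumsum(freq)                     # phase is the integral of frequency
m = phase + 5e-9 * rng.standard_normal(N)   # measured phase with measurement noise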
The p, f and d estimates nicely track the input values even when measurement
noise is added, but after this I'm stuck, as I do not yet understand how to
calculate the alpha, beta and gamma values from the observed measurement
error; the above website does not yet explain how to do this.
And I do not understand how to close the loop to keep either the phase
error or the frequency error as low as possible.
So much to learn.
Erik.

On 16-4-2022 22:44, Pluess, Tobias via time-nuts wrote:
Hello all,

In the meantime I had to refresh my knowledge about state-space
representation and Kalman filters, since it has been quite a while since I
last dealt with this topic.

So I looked at the equations of the Kalman filter. To my understanding, we
can use it like an observer, and instead of using the phase error and
feeding it to the PI controller, we use the output of the Kalman filter as
input to the PI controller. And the Kalman filter gets its input from the
phase error, but we also tell it how much variance this phase error has.
Luckily, the GPS module outputs an estimate of the timing accuracy, so I
believe one could use this (after squaring) as the estimate of the timing
variance, correct?

I believe depending on how we model the VCO, we can get away with a scalar
Kalman filter and circumvent the matrix and vector operations.
I tried to simulate it in Matlab, and it kind of worked, but I noticed
some strange effects.

a) I made a very simple VCO model that simulates the phase error. It is
x[k+1] = x[k] + KVCO*u, with u being the DAC code. If KVCO is chosen
correctly, this perfectly models the phase measurements. I assumed the
process noise is zero and that the 1PPS jitter contributes only to the
measurement noise.

b) From the above model we have a very simple state-space model, if you
want to call it that: A = 1, B = KVCO, C = 1, D = 0.

c) in the "prediction phase" for the Kalman filter, the error covariance
(in this case, the error variance, actually) is P_new=APA' + Q, which
reduces in this case to P_new=P+Q with Q being the process noise
variance,
which I believe is negligible in this case.

d) in the "update phase" of the Kalman filter, we find the Kalman gain as
K=P*C*inv(C'*P*C + R), and this reduces, as everything is scalar and C=1,
to K=P/(P+R), with R being the measurement noise, which, I believe, is
equal to the timing accuracy estimate of the GPS module. Correct? we then
update the model xhat = A*xhat + b*u + K*(y-c*xhat), which simplifies to
xhat=xhat + Kvco*u + K*(y-xhat). Nothing special so far.

e) Now comes my weird observation; I don't know whether this is correct.
The error covariance is now updated according to P = (I - K*C)*P, which breaks
down to P = P - K*P. I now observe that P behaves very oddly: first we set
P = P + Q, and then we set P = P - K*P. It does not really converge in my Matlab
simulation, and I see that the noise is filtered somewhat, but not very
well. It could also be related to my variances not being set correctly, I
am not sure. Or I made some mistakes with the equations.
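For what it is worth, here is the whole scalar recursion a) to e) as a small numpy sketch (KVCO, Q, R and the input data are made-up placeholders). One thing I notice when writing it out: with Q exactly zero the updated P can only shrink, since each step gives P = P*R/(P+R), so the gain dies away and the filter eventually stops following the measurements; a small nonzero Q lets P settle at a steady value instead.

import numpy as np

def scalar_kalman(y, u, kvco, Q, R, x0=0.0, P0=1.0):
    """Scalar Kalman filter for x[k+1] = x[k] + KVCO*u[k], measurement y = x + noise.
    Returns the state estimates and the history of the error variance P."""
    x, P = x0, P0
    xs, Ps = [], []
    for yk, uk in zip(y, u):
        # prediction: A = 1, B = KVCO  ->  xhat = xhat + KVCO*u,  P = P + Q
        x = x + kvco * uk
        P = P + Q
        # update: K = P/(P+R),  xhat += K*(y - xhat),  P = (1-K)*P
        K = P / (P + R)
        x = x + K * (yk - x)
        P = (1.0 - K) * P
        xs.append(x)
        Ps.append(P)
    return np.array(xs), np.array(Ps)

# toy run with made-up numbers: DAC held constant, 1PPS jitter of ~20 ns
rng = np.random.default_rng(0)
true_phase = np.full(300, 50e-9)                         # constant 50 ns offset
y = true_phase + 20e-9 * rng.standard_normal(300)
xs, Ps = scalar_kalman(y, u=np.zeros(300), kvco=1.0, Q=1e-20, R=(20e-9) ** 2)
print(Ps[-1])   # settles to a small steady value instead of collapsing to zero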

Any hints?

As soon as I see it sort of working in Matlab, I want to test it on my
GPSDO. I find it especially amazing that I have an estimate of the timing
error (the GPS module announces this value via a special telegram!), and I
hope I can make use of it.

best
Tobias
HB9FSX



On Tue, Apr 12, 2022 at 11:23 PM glen english LIST <glenl...@cortexrf.com.au> wrote:

For isolating noise (for the purpose of offline analysis), ICA
(Independent Component Analysis), a kind of blind source separation,
can assist in parting out the various noises and help you understand the
system better. There are some Python primers around for it.
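As a starting point, scikit-learn's FastICA makes for a quick experiment along those lines; the two "recordings" below are just synthetic stand-ins for real time series:

import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "recordings", each a different mix of two independent sources.
rng = np.random.default_rng(0)
s1 = rng.laplace(size=5000)                  # stand-in for one noise process
s2 = rng.uniform(-1.0, 1.0, size=5000)       # stand-in for another
X = np.c_[0.7 * s1 + 0.3 * s2,
          0.4 * s1 + 0.6 * s2]               # what is actually "measured"
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)             # unmixed source estimates (up to scale/order)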

fantastic discussion going on here. love it.

glen


On 12/04/2022 6:42 pm, Markus Kleinhenz via time-nuts wrote:
Hi Tobias,

On 11.04.2022 at 13:33, Pluess, Tobias via tim