This looks to work on the same basic phase-detector principle as Brooks
Shera's GPS unit.
It then uses a DDS driven from the 100 MHz clock to generate the PPS,
and an FPGA for the processor logic.
It disciplines the PPS using a standard digital PI controller set for a
short time constant of 10 seconds or so.
This way they get the short-term noise of the cheap clock oscillator and
the long-term stability of the GPS.
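The PI step described above can be sketched in a few lines. This is my own minimal model, not the unit's actual firmware; the gains, the tuning-word scale, and all the names are illustrative assumptions.

```python
# Sketch of a digital PI loop steering a DDS so the local PPS tracks the
# GPS PPS. Once per second the measured phase error (seconds) is fed in,
# and a corrected DDS frequency tuning word comes out.
# Gains and variable names are illustrative, not from the actual unit.

class PpsPiController:
    def __init__(self, kp, ki, nominal_ftw):
        self.kp = kp                    # proportional gain
        self.ki = ki                    # integral gain
        self.integral = 0.0             # accumulated phase error
        self.nominal_ftw = nominal_ftw  # DDS tuning word at nominal frequency

    def update(self, phase_error_s):
        """Run one 1 Hz loop iteration; returns the new DDS tuning word."""
        self.integral += phase_error_s
        correction = self.kp * phase_error_s + self.ki * self.integral
        return self.nominal_ftw + correction

# A short (~10 s) time constant corresponds to relatively high loop gain,
# so the output follows the GPS fairly quickly.
loop = PpsPiController(kp=0.1, ki=0.01, nominal_ftw=1_000_000.0)
ftw = loop.update(2e-9)   # this second's PPS came in 2 ns late
```

The integral term is what removes the steady-state frequency offset of the local oscillator; the proportional term sets how hard the loop chases each second's phase error.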
The reason it works is that any crystal oscillator has less ADEV noise
at one second than the GPS does,
so the PPS jitter over 1 to 10 seconds goes down, but for averaging
times much longer than that it doesn't help, depending on how good their
clock is.
I think you can take them at their word that the type of scope didn't
matter, and they did report what the 1 s ADEV value was.
It is safe to assume it did not help much above about 10 seconds, unless
they were to use a very good system clock oscillator and a longer time
constant for the PI controller.
The basic idea works fine to reduce the 1 s GPS jitter to 1 ns.
I did the same sort of thing using a standard DIP oscillator, but only
needed a flip-flop, which I used as the phase detector, and a couple of
RC filters.
The rest of the circuitry is really not all that necessary.
ws
*******************
----- Original Message -----
From: "Magnus Danielson" <[email protected]>
To: "Discussion of precise time and frequency measurement"
<[email protected]>
Sent: Sunday, April 11, 2010 2:04 PM
Subject: Re: [time-nuts] GPS PPS smoothing article
[email protected] wrote:
I see other problems in the paper as well. For example, why did they
just plot time-domain traces of before and after, which don't really say
much? It "looks" less noisy, but that doesn't mean anything; the peak-to-
peak is very similar.
Instead they should have put that data into a freeware plotter, or
Stable32 etc., and shown ADEV plots that can actually discern between
before and after.
That they used scopes rather than a time-interval counter kind of
indicates that they may not have been sufficiently equipped for this
task.
Also, as Matt mentioned, if they only have two dissimilar scopes, then
they could have swapped the scopes and done an interpolation between the
two datasets.
At least they could have swapped scopes to (hopefully) show that doing
so did not make any major shift in the results. If they did get shifts,
then they would have concluded that the approach was not sufficient,
reported that, and then continued with another approach.
Or they could have used the trigger input on the 500 MHz scope (it has
one for sure) for the reference, and channels 1 and 2 for the
before/after sources.
What if they had used any of a line of suitable time-interval counters?
They should have done their ADEV and TDEV plots for sure. Implementing
them for any of their available sources should not be too hard either.
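Implementing the ADEV estimate really is straightforward. Here is a minimal sketch of the overlapping Allan deviation computed from phase samples; this is my own illustration of the standard estimator, not anything from the paper.

```python
import math

def overlapping_adev(phase, tau0, m):
    """Overlapping Allan deviation from phase samples.

    phase -- list of phase readings in seconds, taken every tau0 seconds
    tau0  -- basic sampling interval in seconds (1 s for PPS data)
    m     -- averaging factor; the result is ADEV at tau = m * tau0
    """
    n = len(phase)
    if n < 2 * m + 1:
        raise ValueError("need at least 2*m + 1 phase samples")
    tau = m * tau0
    total = 0.0
    for i in range(n - 2 * m):
        # second difference of phase over the interval tau
        d = phase[i + 2 * m] - 2.0 * phase[i + m] + phase[i]
        total += d * d
    return math.sqrt(total / (2.0 * tau * tau * (n - 2 * m)))
```

A useful sanity check: a perfectly linear phase ramp (a constant frequency offset) gives ADEV of zero, since the second differences all vanish; only actual instability contributes.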
They do tell that they are using a 100 MHz clock if you look at page 3,
column 2 (with a line-break between 100 and MHz so it is a little hard to
spot).
Cheers,
Magnus
_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.