Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Ulrich Bangert
measure longer times. Within 500
µs the counter may reach a result of at most 12288 (and no longer
12288000), and 1 count equals approx. 1E-4 of the maximal result. Which
effectively means that THIS measurement has only 1/1000 the resolution
of the original 500 ms measurement. Can this be true? Think about it
for a while and you will see it is true. The 10 Kelvin temperature effect
that made a count difference of 123 counts in 500 ms will make a count
difference of 0.123 (!) counts in 500 µs. Which is less than 1 count and
will be VERY difficult to notice, if possible at all. So one or two of the
clues of the Shera design is/are to make the measurement range of the
phase comparator SO small that all environmental dependencies of the
TIC's time base are SMALLER than the RESOLUTION of the time base. Choose
a resolution sufficiently low and all environmental effects of the time
base will be masked by it.
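
(As a quick back-of-envelope check of the numbers above -- a small Python
sketch; the 24.576 MHz time base, 1E-6/Kelvin tempco and 10 Kelvin swing are
the figures used in this thread, everything else is plain arithmetic:)

  # Back-of-envelope check of the resolution-masking argument.
  f_tb   = 24.576e6      # TIC time base frequency, Hz
  tempco = 1e-6          # fractional frequency change per Kelvin
  dT     = 10.0          # temperature swing, K

  for gate in (500e-3, 500e-6):          # full range vs. Shera-style narrow range
      counts      = f_tb * gate          # counts accumulated in the gate time
      delta_count = counts * tempco * dT # count change caused by the swing
      print("gate %8.0f us: %10.0f counts, temperature effect = %7.3f counts"
            % (gate * 1e6, counts, delta_count))

  # Prints ~123 counts for 500 ms but only ~0.123 counts for 500 us:
  # the environmental effect disappears below one LSB of the measurement.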

And that is exactly where your consideration goes wrong: The
limited resolution of the Shera design (as well as the limited phase
measurement range) is NOT a FLAW of the design that could be improved
by your 100 MHz TIC! It is a FEATURE of the design that must not be
touched in order to give the proposed results! And the fact that you are
NOT observing a real improvement although you increased the resolution
by 4 is the proof of it all: Not only did you increase the resolution
by 4, you also increased the count result's tendency to be influenced by
environmental changes by the same factor. You should notice a big
improvement if you throw your 100 MHz oscillator away, to where it
belongs, and feed your counters with a 100 MHz signal that has been
generated by a 10x frequency multiplication of your rubidium or by
phase locking a 100 MHz VCXO to the rubidium.

Best regards
Ulrich Bangert 

P.S.

The reaction of rubidium oscillators to environmental changes, like the
day-to-day temperature changes that happen in a typical flat, has not yet
been discussed in the group in depth. However, my own experience seconds
your own results concerning the loop's time constant. While the overall
temperature coefficient of my rubidiums is an order of magnitude
better than that of my best OCXO, it is not possible to use a higher time
constant with them compared to the OCXO when the day-to-day changes are
expected to be removed by the loop. Over the last years a natural loop
time constant of approx. 1200 s has evolved to be the best compromise for
both the OCXO and the rubidiums. Since my OCXO has MUCH less phase noise
at small observation times I have come to the conclusion that an
OCXO-based GPSDO serves me better than a rubidium-based one.

 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On behalf of Richard H McCorkle
 Sent: Thursday, 19 July 2007 09:09
 To: time-nuts@febo.com
 Subject: [time-nuts] Metastability in a 100 MHz TIC
 
 
 
 In my Brooks Shera style LPRO rubidium controller I am using 
 the same HC4046 input conditioner and divide down counter on 
 the oscillator and HC4046 phase detector interrupting the PIC 
 as used in the original design. The phase detector output 
 feeds the count enable input of a pair of Fairchild 74F163A 
 synchronous binary counters clocked with a 100 MHz XO to 
 increase the TIC sample resolution to 10ns. The counters are 
 read and cleared every second and accumulated in software to 
 minimize glitches from multiple gating into the counter. A 
 23-bit DAC and LM199 reference are used to improve the EFC 
 resolution, applying 0-5v directly to the LPRO EFC input to 
 minimize noise pickup and maximize loop gain. A 16F688 PIC 
 monitors the GPS messages and accumulates sawtooth 
 corrections until read at the update time over a high-speed 
 200kbps 3-wire handshaking serial interface by the 16F873A 
 main controller. The handshaking interface allows the 16F688 
 to transmit the accumulated sawtooth correction for the 
 current sample to the controller and reset its accumulator 
 between UART reads to prevent data loss and before the TRAIM 
 message for the next sample arrives to insure the predictions 
 match the samples.
A 4x larger setpoint and 1/4 the filter gain of the 
 original design are used to adjust for the larger counts 
 returned with a 100 MHz TIC. This keeps the controller gain 
 and limiting threshold approximately the same as the original 
 design to prevent excessive limiting of the input data into 
 the filter at high phase offsets and maintains good initial 
 lock performance. Since the 1-second stability of a rubidium 
 oscillator is relatively poor, and the 100-second stability 
 is much better, the loop update time was increased in the 
 rubidium controller from 30 to 120 seconds. The longer update 
 time results in 1/4 the number of updates to the EFC for 
 improved stability, and 4x

Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Dr Bruce Griffiths

Ulrich Bangert wrote:
 Richard,

 metastability is an effect that happens when the setup times of a
 D flip-flop are not met. This can happen (with a certain statistical
 likelihood) when the sources of the data input and the clock input of a
 D flip-flop are not synchronized. The important thing to know about
 metastability is that the likelihood of its appearance can be directly
 computed from the setup time and the frequencies of the D and
 clock signals as described in

 http://www.xilinx.com/xlnx/xweb/xil_tx_display.jsp?sGlobalNavPick=sSecondaryNavPick=category=iLanguageID=1multPartNum=1sTechX_ID=pa_metastability

 Peter Alfke is THE expert on metastability at Xilinx! I guess if you
 apply the data presented here to your case you will find that the
 probability of metastability in your case may be neglected. Rumors are
 that 99% of all assumed cases of metastability are due to other design
 flaws. 

 However, I would like to draw your attention to a second point. From
 your posting it becomes clear that you are not merely a user of the Shera
 design but have put a lot of brains into the question of how
 to improve it. 

 First, forget about the Shera design for a moment and consider the case
 that you have two 1pps sources and want to compare them by means of a
 REAL TIC such as the HP5370 or the SR620. Question: Since you are comparing
 TWO oscillators by means of a THIRD oscillator (the TIC's time base),
 does the TIC's time base stability influence your measurement results or
 not?

 Clearly so, if you think about it for a while. With this arrangement it
 is not possible to decide whether 1pps a, 1pps b or the TIC's time
 base is responsible if you notice statistical fluctuations in the
 measurement results. The measured results will be a statistical average
 (not a simple arithmetic one but a more complicated one, but basically
 you can imagine it as an average) of ALL sources' fluctuations. The
 situation changes if you have more sources and/or more TICs available,
 because then statistical methods exist to allocate which source
 and which TIC is responsible for what, but in the simple case of only
 three sources these rules cannot be applied. 

 Now that you are aware of the fact that the TIC's time base has an impact
 on measurements made with the TIC, what would you do about it? In the real
 world you would synchronize the TIC's time base to the best reference
 available, for example to the cesium in the backyard or the H2 maser in
 the kitchen. But what if you lack equipment like that and have only this
 one rubidium oscillator and this GPS receiver? Clearly the second best
 choice is to use the rubidium also as the time base for the TIC, EVEN if
 it IS the source that you want to discipline, just because you reduce
 the complexity of the problem back to TWO sources of fluctuations.

 Now let us come back to the Shera circuit. The question that must be put
 forward at this point is: If we have just recognized that the TIC's
 time base has pretty much the same influence on our measurements as the
 DUTs themselves, how can the Shera design work with a time base consisting
 of a garden-variety canned oscillator of the lowest class of stability?
 If the claims explained above are true and the measurement results are
 the statistical average over ALL sources, then in your case this cheesy
 little time base is some orders of magnitude worse in terms of
 stability compared to the rubidium and the GPS, and what we measure
 should in theory be dominated by the bad time base and not by the DUTs.
 So, how can the circuit work at all? 

 At this point we come to one of the big but not commonly well understood
 tricks of the Shera design. The cheap canned time base IS indeed the
 biggest source of fluctuations in the design. However, the design
 includes precautions so that these fluctuations are hindered from
 showing up. Howzat?

 Consider two 1pps signals. They can be as close as 0 s or they can be
 apart by as much as 500 ms. Consider they are 500 ms apart and you have a
 time base of 24.576 MHz to measure how far they are apart. With 500 ms
 your TIC will reach something like 12288000 counts in that time. Among
 other environmental dependencies the temperature coefficient will be
 the most prominent one, with simple xtal oscillators being in the order
 of 1E-6/Kelvin. With 10 Kelvin temperature variation this will give you
 a change of approx. 123 counts in the count result for the SAME 500 ms
 just due to temperature. This is a noticeable effect! Even a tenth
 of it, 12 counts, would be a noticeable effect. But now comes the
 clue: Both effects are noticeable because and ONLY because we made a
 HIGH RESOLUTION MEASUREMENT. With 12288000 counts, 1 count equals less
 than 1E-7 of the result, so we made a measurement with better than 1E-7
 resolution. Now consider the case when we limit the measurement range of
 the phase 

Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Richard H McCorkle

Thanks guys for the input,
I wanted to clarify that the time interval being measured at each
GPS 1PPS is typically 2/5 of a 1.6us window (Rb/16) or 640ns at
setpoint. The filter input value is built by accumulating (not
averaging) 120 of these samples and sawtooth correcting the result
before filtering. This gives a filter input value with 10ns per
sample / 120 samples or 83ps per count resolution per 120-second
update. The original 16-bit filter was extended to 24-bits to
improve the filter resolution with the 23-bit DAC. The disciplined
oscillator stability using this inexpensive design is orders of
magnitude better than the 1e-11 per day specification of the
original Shera design.
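  (The 83 ps figure is simply the 10 ns TIC LSB spread over the 120
accumulated samples; a one-line Python check, nothing design-specific:)

  tic_resolution = 10e-9   # 10 ns per count with a 100 MHz TIC clock
  samples        = 120     # 1-second samples accumulated per update
  # Accumulating (not averaging) 120 samples leaves the LSB at 10 ns of
  # accumulated phase, but expressed per sample it corresponds to:
  print(tic_resolution / samples)   # -> 8.3e-11 s, i.e. ~83 ps per count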
  I am just powering up a similar version with a variable update
rate on an MTI 260 double oven OCXO to see what the stability looks
like with a respectable OCXO. (I finally found one MTI 260 out of
six I tested that didn’t jump in phase every few weeks!) Once I
have this unit up and stable and have some baseline data I will
pull out my Vectron and Bliley 100 MHz OCXOs and try Ulrich’s
suggestion of using one of them for the TIC clock to see if that
improves the disciplined oscillator stability. They both have
around 1e-9 stability and should be enough better than an XO to
see if further improvement in disciplined oscillator stability
is possible using this design.
  The reason I went this route was to see what improvements could
be made to create a high performance $50 controller using readily
available DIP packaged parts that could be assembled on a perf
card over a weekend by the average hobbyist. Age can do bad things
to the eyes and hands that make surface mount components and high
density IC pinouts hard to deal with for some older hobbyists.

Thanks again,
Richard

 Original Message 
-
Subject: Re: [time-nuts] Metastability in a 100 MHz TIC
From: Tom Van Baak [EMAIL PROTECTED]
Date: Fri, July 20, 2007 6:57 am
To:   Discussion of precise time and frequency measurement 
time-nuts@febo.com
---

 Question: Since you are comparing
 TWO oscillators by means of a THIRD oscillator (the TIC's time base),
 does the TIC's time base stability influence your measurement results or
 not?

Partly yes, for tau < 1 second.
Mostly no, for tau > 1 second.

 Clearly so, if you think about it for a while. With this arrangement it
 is not possible to decide whether 1pps a, 1pps b or the TIC's time
 base is responsible if you notice statistical fluctuations in the
 measurement results. The measured results will be a statistical average

Perhaps I misunderstand your setup, but it seems to me the
third timebase is only used for a time interval measurement,
not the time measurement. Thus the requirements on its
stability are much less than the two 1pps sources. Think of
it not as a timebase, but a time interval base.

For example, suppose you want to measure 1pps sources
which are within 10 microseconds in phase, to a resolution
of 100 ps. To make an ADEV plot for tau 1 second to 1 day,
you need to collect days of data, but you only need a
TIC timebase that is accurate and stable to one part in ten to
the 5th, at tau of 10 us. Any cheap XO will do that, no?
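
(The same claim in numbers, as a tiny Python sketch -- the 10 us offset and
100 ps resolution are the figures from the example above:)

  max_interval = 10e-6    # the two 1pps signals are within 10 us of each other
  resolution   = 100e-12  # desired time interval resolution
  # A fractional time base error df/f shifts a 10 us reading by df/f * 10 us,
  # so the time base only needs to be good to:
  print(resolution / max_interval)   # -> 1e-5, easily met by any cheap XO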

You don't need cesium timebases for a 1pps TIC.

/tvb


___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.





Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Richard H McCorkle

Tom,
The GPS 1ns sawtooth corrections are accumulated in a 16F688 during the
120 second sample period with care that they match the same 120 1-second
10ns resolution phase samples collected. The accumulated sawtooth
correction is read at the end of the sample period before the sawtooth
correction for the next sample is sent by the GPS, scaled to match the
accumulated phase count resolution and added to the phase count before
the value is sent to the filter. This simplifies the design and has the
same effect as adding a scaled sawtooth correction matching the counter
resolution from each sample once per second. I do indeed make sure both
numbers cover the same samples and have the same LSB resolution before
adding so the results are valid.
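
A minimal sketch of that bookkeeping (illustrative only; the function name is
mine, the 10 ns and 1 ns LSBs are the figures quoted above, and the sawtooth
sign convention of the particular receiver is glossed over):

  # Combine 120 one-second phase counts (10 ns LSB) with the 120 matching
  # per-second sawtooth corrections (1 ns LSB) at a common resolution
  # before the result goes to the loop filter.
  COUNTER_LSB_NS  = 10.0   # 100 MHz TIC -> 10 ns per count
  SAWTOOTH_LSB_NS = 1.0    # GPS sawtooth correction resolution, ns

  def filter_input_ns(phase_counts, sawtooth_corrections):
      """phase_counts: 120 per-second TIC counts; sawtooth_corrections:
      the 120 matching per-second corrections (same samples, same order)."""
      assert len(phase_counts) == len(sawtooth_corrections)
      phase_ns      = sum(phase_counts) * COUNTER_LSB_NS
      correction_ns = sum(sawtooth_corrections) * SAWTOOTH_LSB_NS
      return phase_ns + correction_ns   # corrected phase for one 120 s update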

Richard


 Richard,

 A 1.6 us window means you have almost no issues with
 the accuracy or stability of your 100 MHz sample clock.
 10 ns out of 1.6 us is 1/2 percent; clock counts won't
 exceed 160; a quartz timebase is overkill.

 Do I understand correctly: you make each raw 1pps
 time interval measurement down to 10 ns resolution,
 then (in software, I presume, one second later) apply
 a negative sawtooth correction with 1 ns resolution, then
 average 120 of those sums, and then expect an 84 ps
 resolution result? Something doesn't sound quite right.

 /tvb

 - Original Message -
 From: Richard H McCorkle [EMAIL PROTECTED]

 Thanks guys for the input,
 I wanted to clarify that the time interval being measured at each
 GPS 1PPS is typically 2/5 of a 1.6us window (Rb/16) or 640ns at
 setpoint. The filter input value is built by accumulating (not
 averaging) 120 of these samples and sawtooth correcting the result
 before filtering. This gives a filter input value with 10ns per
 sample / 120 samples or 83ps per count resolution per 120-second
 update. The original 16-bit filter was extended to 24-bits to
 improve the filter resolution with the 23-bit DAC. The disciplined
 oscillator stability using this inexpensive design is orders of
 magnitude better than the 1e-11 per day specification of the
 original Shera design.
   I am just powering up a similar version with a variable update
 rate on an MTI 260 double oven OCXO to see what the stability looks
 like with a respectable OCXO. (I finally found one MTI 260 out of
 six I tested that didn’t jump in phase every few weeks!) Once I
 have this unit up and stable and have some baseline data I will
 pull out my Vectron and Bliley 100 MHz OCXOs and try Ulrich’s
 suggestion of using one of them for the TIC clock to see if that
 improves the disciplined oscillator stability. They both have
 around 1e-9 stability and should be enough better than an XO to
 see if further improvement in disciplined oscillator stability
 is possible using this design.
   The reason I went this route was to see what improvements could
 be made to create a high performance $50 controller using readily
 available DIP packaged parts that could be assembled on a perf
 card over a weekend by the average hobbyist. Age can do bad things
 to the eyes and hands that make surface mount components and high
 density IC pinouts hard to deal with for some older hobbyists.

 Thanks again,
 Richard

  Original Message 
 -
 Subject: Re: [time-nuts] Metastability in a 100 MHz TIC
 From:Tom Van Baak [EMAIL PROTECTED]
 Date:Fri, July 20, 2007 6:57 am
 To:  Discussion of precise time and frequency measurement
 time-nuts@febo.com
 ---

 Question: Since you are comparing
 TWO oscillators by means of a THIRD oscillator (the TIC's time base),
 does the TIC's time base stability influence your measurement results or
 not?

 Partly yes, for tau < 1 second.
 Mostly no, for tau > 1 second.

 Clearly so, if you think about it for a while. With this arrangement it
 is not possible to decide whether 1pps a, 1pps b or the TIC's time
 base is responsible if you notice statistical fluctuations in the
 measurement results. The measured results will be a statistical average

 Perhaps I misunderstand your setup, but it seems to me the
 third timebase is only used for a time interval measurement,
 not the time measurement. Thus the requirements on its
 stability are much less than the two 1pps sources. Think of
 it not as a timebase, but a time interval base.

 For example, suppose you want to measure 1pps sources
 which are within 10 microseconds in phase, to a resolution
 of 100 ps. To make an ADEV plot for tau 1 second to 1 day,
 you need to collect days of data, but you only need a
 TIC timebase that is accurate and stable to one part in ten to
 the 5th, at tau of 10 us. Any cheap XO will do that, no?

 You don't need cesium timebases for a 1pps TIC.

 /tvb



Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Dr Bruce Griffiths

Tom
Tom Van Baak wrote:
 Bruce,

 I like your point about the random quantization error in the
 sawtooth. Yes, that would help the noise by a few dB.

 On the other hand it would also seem the 10 ns resolution
 of the TIC is the limiting factor (by an order of magnitude)
 over the 1 ns resolution limit of the sawtooth corrections,
 so improving the quality of the sawtooth corrections has
 limited gain.

 Now, I'd still like to pursue the issue of noise in the 100 MHz
 oscillator. Do we agree one doesn't need a cesium for this?
 Or even an XO?

   
As long as one keeps track of the frequency drift of the timebase 
oscillator this is true.
 True, you want some accuracy in the 100 MHz. But the counts
 are only integers from 0 to 160 so the accuracy requirement
 is just 3 digits, 0.1% (so cheap quartz, at 1e-6, or cesium, at
 1e-13, is extreme overkill). I mean, almost anything wiggling at
 100.0 MHz will serve as an adequate timebase.

 Also, as you point out, instability or jitter is your friend, not
 your enemy in this case. Would it be possible to introduce
 the +/- 5 ns jitter deliberately in the 1pps trigger level instead
 of in the timebase? I.e., slow down the rising edge enough
 so that you get jitter for free?
   
Yes, adding stochastic jitter to the leading edge of the PPS signal is a 
good way of injecting the required noise into the measurement.
The most predictable way is to slow down the PPS edge (a simple RC 
filter should be more than adequate) and feed it into one input of a 
comparator whilst the other input is connected to a noise source.
 Another solution might be to deliberately choose an inaccurate
 and unstable oscillator; use the 1pps to count oscillator cycles
 per second, as well as to count the time interval. The larger
 count can be used to calibrate the smaller count on every count.
 This gives all the jitter you need and avoids any injection issues.

   
Yes, a suitably noisy LC oscillator should work; it will probably need 
the tank Q reduced somewhat to achieve sufficient noise/jitter.
Deliberately using resistors to introduce predictable noise may be a 
useful technique.

Attenuating the sinusoidal output of an oscillator that is too stable 
using a resistive attenuator with a high output resistance may also be a 
viable technique for producing a sufficiently noisy clock.
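
A quick numerical illustration of why the injected jitter helps (a pure
software toy, not a model of the actual hardware; the 10 ns LSB matches the
100 MHz TIC discussed here, the other numbers are arbitrary): with a quiet
clock every reading quantises to the same count and averaging gains nothing,
while with roughly an LSB of rms jitter the average converges on the true
sub-LSB value.

  import random

  LSB      = 10e-9        # 10 ns quantisation step (100 MHz TIC clock)
  true_val = 3.7e-9       # "true" time interval, deliberately not a multiple of LSB
  N        = 120          # readings averaged per update

  def average_reading(jitter_rms):
      readings = []
      for _ in range(N):
          sample = true_val + random.gauss(0.0, jitter_rms)
          readings.append(round(sample / LSB) * LSB)   # quantise to the TIC LSB
      return sum(readings) / N

  random.seed(1)
  print("no jitter    :", average_reading(0.0))      # stuck at 0.0, the nearest count
  print("~1 LSB jitter:", average_reading(10e-9))    # approaches the true 3.7 ns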
 While you're at it, how about N of these oscillators, each making
 its own out of phase measurement of the same OCXO-GPS 1pps.
 Another couple of dB of resolution...

 /tvb
   
This is getting way too complicated; surely the method of using the 
inherent noise in the hardware-corrected PPS signal in conjunction with 
a D flip-flop acting as a simple precedence detector (as proposed 
several weeks ago) is easier, cheaper and simpler. It also has more 
resolution than almost any other method one can devise, and it can easily 
be elaborated slightly to allow detection and rejection of phase error 
measurement outliers.

Clock the D flip-flop with the hardware-corrected PPS signal and connect 
the divided-down OCXO (or other standard) being disciplined to the D 
input. Interrupt the microprocessor on the trailing edge of the PPS 
signal, read the Q output of the D flip-flop (the 200 millisecond width 
of the PPS signal from an M12M or similar GPS timing receiver is 
more than adequate to allow the D flip-flop to settle with an extremely 
low probability of being in a metastable state - even a few microseconds 
is probably more than sufficient) and add 1 to the phase count whenever 
D is 1 and subtract 1 whenever D is 0. The EFC voltage is then adjusted to 
keep the phase error at zero (which corresponds to a 50% probability of the 
D flip-flop output being 1 when read by the micro). This simplistic 
algorithm can be replaced by a more optimal algorithm as required.
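
As a toy illustration of how much information those +1/-1 decisions carry
(my own sketch, not Bruce's code; the 2 ns PPS noise and the one-hour
averaging window are assumptions): the loop holds the fraction of 1s near
50%, and away from that point the fraction maps back to a phase error
estimate well below the PPS noise.

  import random
  from statistics import NormalDist

  random.seed(3)
  true_phase = 0.5e-9     # actual phase error, s (well below the PPS noise)
  pps_noise  = 2e-9       # rms noise on the hardware-corrected PPS, s
  N          = 3600       # one hour of 1 Hz flip-flop decisions

  # Each second the flip-flop only reports whether the divided-down
  # oscillator edge arrived before or after the noisy PPS edge.
  ones = sum(1 for _ in range(N)
             if true_phase + random.gauss(0.0, pps_noise) > 0.0)
  p = ones / N
  # P(decision = 1) = Phi(true_phase / pps_noise), so invert the normal CDF:
  estimate = pps_noise * NormalDist().inv_cdf(p)
  print("fraction of 1s: %.3f  estimated phase error: %.2f ns"
        % (p, estimate * 1e9))   # recovers ~0.5 ns from 1-bit decisions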

This can all be built with a few DIP ICs to ease assembly; however, a 
well designed 2-layer PCB with a ground plane is probably advisable. 
Other than a programmable delay line and a high resolution DAC no 
unusual parts are required.

Instead of using one microprocessor to do everything, several simpler 
processors, each dedicated to a particular function, may be better.
One processor could be dedicated to hardware correction of the PPS 
signal, whilst another can implement the phase error measurement and the 
OCXO control loop.

Bruce

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Dr Bruce Griffiths

Alan Melia wrote:
 Bruce, I find this an interesting thread... one maybe naive thought:
 it would seem nice to have too-good stability on the 100MHz TIC, but it
 detracts from the averaging (my interpretation). This almost suggests to me
 that a small amount of noise modulation, which of course would be random,
 controlled, and not biased in a way to affect the accuracy of the driving,
 should be added to the 100MHz TIC OCXO. Would that counter the problems of
 uncharacterised drifting and still allow long averaging? Maybe even a slow
 unsynchronised low frequency sine wave FM would achieve the same effect. It
 would seem this would be better than relying on processes which are unknown
 and not controlled to provide the effect. It is counter-intuitive to
 intentionally degrade a standard in some respects, but it has been shown to
 work in some cases.

 Alan G3NYK
   
Alan

Using a slow unsynchronised sinewave is not the way to go; a noise source 
is better and is easily implemented.
Essentially the technique used by HP in one of their counters would 
suffice: phase modulate, say, a 10MHz signal by a few degrees and then 
multiply the output by 10 to produce a 100MHz signal with 10x the phase 
modulation. This simplifies the design and construction of the phase 
modulator. A diode double balanced mixer can be used to phase modulate a 
signal by the required amount. Feed the LO port of the mixer with the 
signal to be modulated, apply the modulation signal to the IF port, and 
add the output of the RF port in phase quadrature with the original signal.
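
For scale (a trivial conversion with a made-up modulation depth): the x10
multiplication scales the phase deviation by ten, while the equivalent time
dither is fixed by what the modulator produces at 10 MHz.

  mod_deg_10MHz = 5.0                     # assumed modulation depth at 10 MHz, degrees
  t_10 = (mod_deg_10MHz / 360.0) / 10e6   # equivalent time dither at 10 MHz
  mod_deg_100MHz = 10 * mod_deg_10MHz     # multiplication scales phase deviation x10
  t_100 = (mod_deg_100MHz / 360.0) / 100e6
  print("at  10 MHz: %5.1f deg = %.2f ns" % (mod_deg_10MHz, t_10 * 1e9))
  print("at 100 MHz: %5.1f deg = %.2f ns" % (mod_deg_100MHz, t_100 * 1e9))
  # Same time dither either way -- the modulator itself only has to produce
  # one tenth of the final phase swing, at the lower frequency.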

The drawback is the complexity and the fact that the resolution is still 
inadequate to achieve the maximum performance from the better GPS timing 
receivers with a good antenna and site. The simpler and cheaper D 
flipflop precedence detector used together with hardware sawtooth 
correction has far higher resolution. It also has the advantage of not 
requiring any high frequency clocks.

Bruce

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Tom Van Baak
 The simpler and cheaper D 
 flipflop precedence detector used together with hardware sawtooth 
 correction has far higher resolution. It also has the advantage of not 
 requiring any high frequency clocks.
 
 Bruce

Since Rick & Dr TAC brought it up some months ago, does
anyone have measurements for this approach yet?

Also what is its equivalent resolution; i.e., what resolution
would a conventional TIC need in order to match the behavior
of the D-flipflop approach, all other factors equal?

/tvb


___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-20 Thread Dr Bruce Griffiths

Tom Van Baak wrote:
The simpler and cheaper D 
flipflop precedence detector used together with hardware sawtooth 
correction has far higher resolution. It also has the advantage of not 
requiring any high frequency clocks.


Bruce



Since Rick & Dr TAC brought it up some months ago, does
anyone have measurements for this approach yet?

Also what is its equivalent resolution; i.e., what resolution
would a conventional TIC need in order to match the behavior
of the D-flipflop approach, all other factors equal?

/tvb
  

Tom

With say a 2ns rms noise on the corrected PPS output, a TIC would need 
an rms quantisation noise less than 1/3 of this to avoid significantly 
degrading the measurement noise.
The corresponding TIC resolution would be about 7ns (140MHz). However to 
achieve accurate averaging the required rms jitter at the TIC input is 
around 1 clock period which implies that a TIC clock period of around 
2ns (500MHz) is required. If the receiver timing noise is greater then a 
lower frequency clock can be used. The D flipflop approach produces a 
degradation of less than 2dB with respect to an ideal TIC with infinite 
resolution. The other advantage is that such a 1 bit TIC automatically 
adapts to the timing noise that is present.


The most cost effective way of achieving perhaps a dB or so improvement 
is to use an ADC as a TIC, sampling an input sinewave produced by 
dividing down and low pass filtering the output of the OCXO on the 
leading edge of the PPS signal. A resolution equivalent to using a 1GHz 
or faster clock is easily achieved. The cost of suitable ADCs is 
relatively low.
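
A sketch of the arithmetic behind the ADC approach, as I read it (the 1 MHz
sine frequency, normalised amplitude and 12-bit ADC are assumptions, not a
proposed design):

  # The sampled voltage of a sinewave derived from the OCXO (by division
  # and low-pass filtering) encodes where the PPS edge landed in the cycle.
  from math import asin, pi

  f_sine    = 1e6          # assumed 1 MHz sine derived from the OCXO
  amplitude = 1.0          # normalised sine amplitude at the ADC input
  adc_bits  = 12
  adc_lsb   = 2.0 * amplitude / (1 << adc_bits)

  def time_offset(sample_v):
      """Time offset (s) from the sine zero crossing, valid near the crossing
      where the sine is monotonic; a second, quadrature sample would be needed
      to resolve the full cycle unambiguously."""
      return asin(sample_v / amplitude) / (2.0 * pi * f_sine)

  # The slope is steepest at the zero crossing (dV/dt = 2*pi*f*A), so near
  # the crossing one ADC LSB corresponds to roughly:
  print("time per ADC LSB near the crossing: %.0f ps" %
        (adc_lsb / (2.0 * pi * f_sine * amplitude) * 1e12))
  print("example: sampled 0.10 ->", time_offset(0.10), "s")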


Neither of these solutions requires using clocks faster than 10MHz or 
so. Nor are particularly esoteric high speed logic devices required. 
Although ACMOS devices are desirable, even HCMOS devices should be 
satisfactory, particularly if given 200 millisec to resolve any 
metastable state.


The circuit schematic attached also provides 100ns (with a 10MHz clock) 
guardbands either side of the edge that is locked to the hardware 
corrected PPS signal.
For example, U103 3Q, 4Q, 5Q are used as precedence detectors, the PPS 
being locked to U102 Q4, with Q3 and Q5 acting as guardband 
transitions to allow detection of when the PPS leading edge is 100ns or 
more away from the transition on U102 Q4. This allows rejection of 
outliers.


Bruce


1 bit TIC phase detector.pdf
Description: Adobe PDF document
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

[time-nuts] Metastability in a 100 MHz TIC

2007-07-19 Thread Richard H McCorkle

In my Brooks Shera style LPRO rubidium controller I am using
the same HC4046 input conditioner and divide down counter on
the oscillator and HC4046 phase detector interrupting the PIC
as used in the original design. The phase detector output
feeds the count enable input of a pair of Fairchild 74F163A
synchronous binary counters clocked with a 100 MHz XO to
increase the TIC sample resolution to 10ns. The counters are
read and cleared every second and accumulated in software to
minimize glitches from multiple gating into the counter. A
23-bit DAC and LM199 reference are used to improve the EFC
resolution, applying 0-5v directly to the LPRO EFC input to
minimize noise pickup and maximize loop gain. A 16F688 PIC
monitors the GPS messages and accumulates sawtooth corrections
until read at the update time over a high-speed 200kbps
3-wire handshaking serial interface by the 16F873A main
controller. The handshaking interface allows the 16F688 to
transmit the accumulated sawtooth correction for the current
sample to the controller and reset its accumulator between UART
reads to prevent data loss and before the TRAIM message for
the next sample arrives to ensure the predictions match the
samples.
   A 4x larger setpoint and 1/4 the filter gain of the original
design are used to adjust for the larger counts returned with
a 100 MHz TIC. This keeps the controller gain and limiting
threshold approximately the same as the original design to
prevent excessive limiting of the input data into the filter
at high phase offsets and maintains good initial lock
performance. Since the 1-second stability of a rubidium
oscillator is relatively poor, and the 100-second stability
is much better, the loop update time was increased in the
rubidium controller from 30 to 120 seconds. The longer update
time results in 1/4 the number of updates to the EFC for
improved stability, and 4x more samples accumulated per update
to provide a better indication of true rubidium oscillator
stability. Without increasing the controller gain, and using
a TIC with 4x the resolution of the original design over a
sample period 4x longer than the original design, the loop gain
is 16x greater than in the original design, for proper loop
damping with the rubidium oscillator.
   I originally assumed the 4x longer filter times that result
with the longer update time would be an advantage with a
rubidium oscillator. Testing revealed that proper correction
for daily temperature variations prevented using filter modes
with settle times longer than about half a day, or what the
Mode 7 (Tau = 8K sec) IIR filter in the original design
provides. The longer update time made the top two filter
modes settle in about 1 and 2 days and were not fast enough
in correcting for temperature variations to maintain optimum
long-term stability. Adjusting the mode scaling to 1/4 the
original value to compensate for the longer update time
restored the original range of IIR filter times.
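
(To make the scaling explicit -- just bookkeeping on the figures quoted in
this post, as a small Python sketch:)

  orig_clock  = 24.576e6   # original Shera TIC clock, Hz
  new_clock   = 100e6      # 100 MHz TIC clock, Hz
  orig_update = 30.0       # original update period, s
  new_update  = 120.0      # rubidium controller update period, s

  res_factor    = new_clock / orig_clock     # ~4.07x more counts per second
  sample_factor = new_update / orig_update   # 4x more 1 s samples per update
  print("raw counts per update grow by ~%.1fx" % (res_factor * sample_factor))
  # -> ~16x, which (as I read it) is what the 4x setpoint, 1/4 filter gain
  # and 1/4 mode scaling described above compensate for.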
   With the discussions here on metastable states in TIC
counters, I am asking the experts on the list for their
opinion if the performance of this design would improve
by adding a shift register synchronizer between the phase
detector output and the count enable input of the 74F163A
TIC to reduce metastable states. The 74F series has the
best reliability figures from metastable effects of all
the TTL logic families according to the data I have read.
Each D F/F counter cell in the 74F163A has the clock applied
directly to the F/F, so no clock gating occurs. Instead the
input data is gated by count enable signals for each cell and
either the cell output is sent to the D input if the count
enable is low, or the previous cell output is gated into the
D input on carry if the count enable is high with D latched
into all F/Fs on each clock rising edge. While I see the need
for a synchronizing shift register in a gated clock design
like the original Shera controller, is it necessary for best
performance in a GPSDRO application with a 74F163A 100 MHz TIC?



___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Metastability in a 100 MHz TIC

2007-07-19 Thread Dr Bruce Griffiths

Richard H McCorkle wrote:

With the discussions here on metastable states in TIC
 counters, I am asking the experts on the list for their
 opinion if the performance of this design would improve
 by adding a shift register synchronizer between the phase
 detector output and the count enable input of the 74F163A
 TIC to reduce metastable states. The 74F series has the
 best reliability figures from metastable effects of all
 the TTL logic families according to the data I have read.
 Each D F/F counter cell in the 74F163A has the clock applied
 directly to the F/F, so no clock gating occurs. Instead the
 input data is gated by count enable signals for each cell and
 either the cell output is sent to the D input if the count
 enable is low, or the previous cell output is gated into the
 D input on carry if the count enable is high with D latched
 into all F/Fs on each clock rising edge. While I see the need
 for a synchronizing shift register in a gated clock design
 like the original Shera controller, is it necessary for best
 performance in a GPSDRO application with a 74F163A 100 MHz TIC?

   


It is always advisable to use a synchroniser to substantially reduce any 
bias in the averaged phase due to metastability.
However, unless there is very high isolation between the 100MHz XO and 
the LPRO output, as well as between the 100MHz XO and the divided-down 
output of the LPRO, injection locking of the 100MHz oscillator may be a 
more significant source of bias in the averaged phase. If the PPS signal 
from the GPS timing receiver has sufficient random noise (~10ns) then this 
should not be an issue.

When designing a synchroniser it is useful to have a quantitative model 
of the metastability characteristics of the devices used, so that a 
reasonably accurate estimate of its output metastability rate can be 
calculated.
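
The usual first-order model for this (the same form the Xilinx material
cited earlier in the thread uses) is MTBF = exp(t_resolve/tau) / (T0 *
f_clk * f_data). A sketch with placeholder constants; tau and T0 must come
from the device characterisation data, and the values below are NOT real
74F numbers:

  from math import exp

  def metastability_mtbf(f_clk, f_data, t_resolve, tau, T0):
      """First-order MTBF model: exp(t_resolve/tau) / (T0 * f_clk * f_data).
      tau and T0 are device-dependent constants from the manufacturer's
      characterisation; the values passed below are placeholders only."""
      return exp(t_resolve / tau) / (T0 * f_clk * f_data)

  # Hypothetical example: 100 MHz clock, 1 Hz asynchronous data (the PPS),
  # 5 ns of settling time before the output is used.
  print(metastability_mtbf(f_clk=100e6, f_data=1.0,
                           t_resolve=5e-9, tau=0.3e-9, T0=1e-9),
        "seconds")   # roughly five years with these placeholder numbers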


Bruce

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.