Re: [time-nuts] GPS for ntp

2014-10-21 Thread Dennis Ferguson

On 21 Oct, 2014, at 08:58 , Simon Marsh subscripti...@burble.com wrote:
 How do you map the timer counter value into a PPS timestamp ?
 (that is, how do you turn the HW counter value into what the OS thought the
 time was when the event occurred ?)

On the NetBSD prototype I have the clock adjustment system call
interface is expanded to deal with multiple clocks, only one
of which is the system clock.  The HW counter becomes its own
clock, which is the clock in which PPS measurements are expressed
and which is adjusted into alignment with the PPS data.  The
system clock is adjusted into alignment with the HW counter clock
using offset data from PIO polling of the clock pair.  The IEEE1588
timestamp counter becomes a third clock, which gets adjusted into
alignment with the system clock for use as a PTP server, or which
is used to adjust the system clock when operating as a client.

For the beaglebone this is probably overkill; since the clocks
are all synchronous the system-peripheral clock polling essentially
determines a constant offset, after which you can keep them in sync
by making the same relative rate adjustments to all clocks.  For the
general IEEE1588 case, however, the counter being sampled at the
ethernet interface is often clocked by a different crystal than the
clock you would prefer to use as the system clock, and the process
of steering one clock into synchronization with another needs to be
more complex.

I should note that none of these clock adjustments really requires
a PLL or other feedback control loop, nor does NTP, since no clock
hardware is actually adjusted. The crystals are all free running and
are unaffected by the adjustments.  What is adjusted is instead a
paper clock; that is, the adjustment is to the arithmetic done to
convert each free running counter to a time of day, and this can be
done open loop, with perfectly predictable results and with no
feedback control, by just doing the adjustment arithmetic accurately
and transparently.

The thing the PLL does for ntpd, then, is to allow it to deal with
(paper) clock adjustment interfaces which don't do the arithmetic
accurately, or at least don't tell you what they actually did, so
that the arithmetic done can only be determined by further
measurement.  This is unavoidable if you need to deal with a
big variety of operating systems, I guess, but it does make
the problem harder than it would be if the adjustment interface were fixed
and the feedback loop eliminated, leaving just the measurement problem.

Dennis Ferguson
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Ublox neo-7M GPS

2014-08-21 Thread Dennis Ferguson

On 21 Aug, 2014, at 16:27 , Tony tn...@toneh.demon.co.uk wrote:

 David,
 
 No problem, it's still set up. As you'd expect it's rock solid at 8MHz with no 
 visible jitter.
 
 Can you point me to the datasheet you're referring to? The MAX-7 and NEO-7 
 datasheets don't provide any information on clocking.
 
 In the 'u-blox 7 Receiver Description Including Protocol Specification V14' 
 the only clue as to the clocking characteristics is that the timepulse output 
 must be configured with a minimum high or low time of 50ns or pulses may be 
 lost. Make of that what you will!
 
 Tony

It sounds like this part is similar to the LEA-6T (including the 48 MHz
reference oscillator).  This white paper has some information about the
frequency output of the latter:

   
http://www.u-blox.com/images/downloads/Product_Docs/Timing_AppNote_%28GPS.G6-X-11007%29.pdf

See, e.g., figures 11 and 12.

I think the cleanliness of the 8 MHz phase is a little bit misleading,
however, since the clean part isn't actually 8 MHz.  It is instead the
frequency of the free running 48 MHz reference divided by 6, and it will
still be throwing in the occasional short or long cycle to correct the
48 MHz oscillator frequency error and make the long term average come
out at a true 8 MHz.

In some sense this is the high frequency equivalent of the 1 PPS hanging
bridge case.  The output at any frequency has a phase error of +/- 10.5 ns
but at 8 MHz the phase error of the output changes very slowly, and can
hang near one of the extremes for long periods, so it requires very long
integration times to reduce that to zero.  If you were using this output
to drive a cleanup PLL (which I would call the DO in a GPSDO) I think
you would actually be better off using the output at 10 MHz, since then
integrating the jitter over just a few microseconds reduces the
short term average phase error by a factor of 5, to +/- 2.1 ns, and an
odder divider might be better still.
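A quick simulation of that arithmetic (my sketch, assuming each edge is
simply rounded to the nearest tick of the 48 MHz reference; the datasheet's
round 21 ns figure corresponds to the ~20.8 ns reference period used here):
individual edges are off by up to half a reference period, but at 10 MHz the
error pattern repeats every 5 edges, so a short average shrinks it 5x.

```python
# Edges wanted every 100 ns (10 MHz) but only placeable on ticks of the
# free-running 48 MHz reference: each edge is rounded to the nearest tick.
GRID = 1e9 / 48e6   # reference period in ns, ~20.83; per-edge error is +/- GRID/2
OUT = 100.0         # 10 MHz output period in ns

def edge_error(t):
    """Placement error (ns) of an edge wanted at time t."""
    return round(t / GRID) * GRID - t

worst_edge = worst_mean = 0.0
for i in range(57):                  # sweep the phase between the two grids
    phase = i * 0.37
    errs = [edge_error(k * OUT + phase) for k in range(5)]   # 5-edge pattern
    worst_edge = max(worst_edge, max(abs(e) for e in errs))
    worst_mean = max(worst_mean, abs(sum(errs)) / 5)
```

The five error values are evenly spaced on the quantization grid, so their
mean behaves like a quantizer with one fifth the step size.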

Dennis Ferguson


Re: [time-nuts] Synergy-GPS SSR-6tru problems

2014-07-15 Thread Dennis Ferguson

On 15 Jul, 2014, at 16:32 , Bob Stewart b...@evoria.net wrote:
 I do not know what the problem is.  The UT+ works (with the appropriate 
 software).  The SSR-6tru is deaf using all the software I have available to 
 me.  I am at my wits' end, and I have nothing else that I can think of to try.

I've got several of these boards and they've worked okay for me
without doing anything special.  I have not tried the software
you are using, however, nor do I use the adapter you mention
(does that adapt the connector on the SSR-6tru to a connector
that fits your UT+ board?).  I made a connector to talk
to mine directly from a 3 volt serial port in a BBB SOIC.

Since you can see the NMEA sentences the transmit side of the
serial port from the LEA-6T is clearly working and you have
the baud rate right, but your symptoms suggest the module doesn't
hear you.  Have you tried looking at the basic connection, i.e.
that the serial port receive pin on the module wiggles at the
right voltage and polarity when the software tries to send
stuff (maybe there are two ways to plug in the adapter, only
one of which works)?  If that looks okay then the only other
guess I can think of is that the software is trying to talk
to the board with u-Blox binary messages but that protocol has
been turned off for input on the port (the PUBX,41 NMEA sentence
can turn it on and off), but that seems unlikely since, no
matter how I reconfigure mine, a power-on reset always sees
the serial port come up willing to receive either protocol.

Dennis Ferguson



Re: [time-nuts] PI Math question

2014-04-18 Thread Dennis Ferguson

On 16 Apr, 2014, at 09:50 , WarrenS warrensjmail-...@yahoo.com wrote:
 With the values of K1, K2  K3 constant,
 and the initial state of I#1, I#2 and Last_Input all zero
 assuming there is no rounding, clipping or overflow in the math
 and that if I've made any obvious dumb typo errors that they are corrected,

If we assume that your 'Input' value is a real-valued measurement with
an unlimited range then I think your algebra is correct.  All those
rearrangements will produce the same value of 'Output'.

Note, though, that 'Input' doesn't have to be a value like that, and
overflow in the math may be unavoidable.  It depends on the nature
of the sensor producing the value.  For the particular case that might
be relevant here, suppose 'Input' is the output of a phase error detector
with an output limited to a range proportional to [-180, 180] degrees
(i.e. is truncated to a fraction of one cycle) and the job of the controller
is to try to keep that value at 0.  Because the output of the sensor wraps
around at +/- 180 you will want to do certain computations (in this case,
differences between 'Input' values) with modular arithmetic.

For an example of the difference this makes, assume that your first 8
'Input' values are

40 80 120 160 -160 -120 -80 -40

noting that (((-160) - (160)) mod 360) == 40.  If you run these values
through your first and last set of equations I think you'll find the
value of 'Output' diverges at the wrap-around.
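One common way to take that modular difference (a hypothetical helper; any
convention that wraps the result into [-180, 180) works the same) is:

```python
def phase_diff(a, b):
    """a - b in degrees, wrapped into the range [-180, 180)."""
    return ((a - b) + 180) % 360 - 180

inputs = [40, 80, 120, 160, -160, -120, -80, -40]
# The naive difference jumps by -320 at the wrap-around...
naive = [b - a for a, b in zip(inputs, inputs[1:])]
# ...while the modular difference sees a steady +40 per step.
diffs = [phase_diff(b, a) for a, b in zip(inputs, inputs[1:])]
```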

I think the practical issue here is that if the basic PI phase-locked
loop, as expressed by your last set of equations, has a long time
constant it may fail to lock if the phase detector output wraps around
like that and the initial frequency error is large enough to make the
wrap-arounds occur frequently.  The addition of the FLL term with its
modular difference, as your first set of equations has it, will widen
the capture bandwidth of the loop by keeping the integral term moving
in the right direction until you get to a point where the frequency is
close enough that the PLL becomes effective, at which point the behaviour
of the loop becomes that of the PLL alone.  The latter is what you've
demonstrated.

Dennis Ferguson


Re: [time-nuts] First success with very simple, very low cost GPSDO

2014-04-11 Thread Dennis Ferguson

On 10 Apr, 2014, at 22:06 , Chris Albertson albertson.ch...@gmail.com wrote:
 You originally described a system that counts to 5M every second.  Tom and
 others pointed out that you do not need the complete 5M count, all you need
 is the remainder of a modulo count.  The question then is, how much of a
 remainder do you need to be sure that it spans all anticipated errors in
 both the PPS and the oscillator?
 
 
 Yes, this is right.  I'm sure that under limited conditions I can get by
 looking only at the remainder.  It is harder than was said because the
 overflows per second rate is a non-integer but there are still only two
 flavors of seconds:  Those with N overflows and those with N+1 overflows.

I think you are seeing a complexity that isn't there.  5,000,000 is
19531 * 256 + 64, so if you get an 8-bit counter sample t0 in one
second and t1 in the next then the difference

t1 - t0

will be equal to 64 if the oscillator is on frequency, and

t1 - t0 - 64

will be equal to 0, when computed with 8 bit arithmetic.  If the result of
the latter, interpreted as a signed value, is less than zero your oscillator
is going too slow, if greater it is going too fast.  This will be true whether
there have been 19531 or 19532 counter overflows in the second; the (t1-t0)
subtraction will set the borrow bit in the latter case if you want to know
that but there is no reason to care about it.  8 bits is enough to measure a
+/- 25 ppm error in this case, which seems sufficient for any oscillator
you are likely to want to discipline with this.

All keeping the full count seems to do is require you to subtract two
much larger numbers to find the same small difference.
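As a sketch of that arithmetic (Python integers standing in for an 8-bit
register, with & 0xFF emulating the wrap-around):

```python
# 5,000,000 = 19531 * 256 + 64, so with an on-frequency oscillator an 8-bit
# counter sampled once per second advances by 64 (mod 256), no matter
# whether 19531 or 19532 overflows happened in between.
def freq_error_counts(t0, t1):
    """Oscillator error in counts/second from two 8-bit counter samples."""
    d = (t1 - t0 - 64) & 0xFF           # 8-bit wrap-around difference
    return d - 256 if d >= 128 else d   # reinterpret as signed 8-bit
```

127 counts out of 5,000,000 is about 25 ppm, which is where the +/- 25 ppm
measurement range comes from.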

Dennis Ferguson


Re: [time-nuts] Another Arduino GPSDO

2014-03-29 Thread Dennis Ferguson

On 28 Mar, 2014, at 20:53 , Bob Camp li...@rtty.us wrote:
 The problem is that it can / might / could / will create a “dc bias” in the 
 noise. When you filter it, you get a bump rather than zero. If your GPSDO has 
 a 47 ns wide sawtooth, that could be a pretty big bump. There’s no way to 
 know if the bridge is seconds, minutes, or hours wide. You can make a good 
 guess that hours are a *lot* less common than seconds….

Yes, and you can get long term dc bias not only in the hanging bridge
case but, in diminishing amounts, when the sawtooth frequency passes
through 1/2 Hz, 1/3 Hz, 1/4 Hz and so on, i.e. where the sawtooth period
is an exact integer number of seconds.  I think the 4th graph here

http://www.leapsecond.com/pages/m12/sawtooth.htm

shows a 1/2 Hz sawtooth with a non-zero mean error for an extended
period.  I guess, viewed this way, the 5th graph would be the 1/1 Hz
sawtooth (which becomes 0 Hz when folded into the Nyquist bandwidth),
which is the worst case but not the only case.

Dennis Ferguson


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Dennis Ferguson

On 12 Mar, 2014, at 23:08 , Hal Murray hmur...@megapathdsl.net wrote:
 b...@evoria.net said:
 In the moving averages I'm doing, I'm saving the last bit to be shifted out
 and if it's a 1 (i.e. 0.5) I increase the result by 1. 
 
 That's just rounding up at an important place.  It's probably a good idea, 
 but doesn't cover the area I was trying to point out.  Let me try again...
 
 Suppose you are doing:
  x_avg = x_avg + (x - x_avg) * a_avg;
 
 For exponential smoothing, a_avg will be a fraction.  Let's pick a_avg to be 
 1/8.  That's a right shift by 3 bits.  I don't think there is anything magic 
 about shifting, but that makes a particular case easy to spot and discuss.
 
 Suppose x_avg is 0 and x has been 0 for a while.  Everything is stable.  Now 
 change x to 2.  (x - x_avg) is 2, the shift kicks it off the edge, so x_avg 
 doesn't change.  (It went 2 bits off, so your round up doesn't catch it.)  
 The response to small steps is to ignore them.

Note that you can't do fixed-point computations exactly the same way
you would do it in floating point, you often need to rearrange the equations
a bit.  You can usually find a rearrangement which provides equivalent
results, however.  Let's define an extra variable, x_sum, where

x_avg = x_sum * a_avg;

The equation above can then be rewritten in terms of x_sum, i.e.

x_sum = x_sum * (1 - a_avg) + x;

With an a_avg of 1/8 you'll instead be multiplying x_sum by 7, shifting
it right 3 bits (you might want to round before the shift) and adding x.
The new value of x_avg can be computed from the new value of x_sum with a
shift (you might want to round that too), or you could pretend that x_sum
is a fixed-point number with the decimal point 3 bits from the right.
In either case x_sum carries enough bits that you don't lose precision.
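Here is a sketch of both forms in integer arithmetic (hypothetical helper
names; the rounding before each shift is the optional part mentioned above):

```python
def naive_step(x_avg, x, shift=3):
    # Direct translation of x_avg += (x - x_avg) / 8: a small difference
    # is shifted off the edge and lost entirely.
    return x_avg + ((x - x_avg) >> shift)

def sum_step(x_sum, x, shift=3):
    # x_sum = x_sum * (1 - 1/8) + x, i.e. multiply by 7, round, shift, add x;
    # x_sum carries the average at 8x scale so no bits are lost.
    return ((x_sum * 7 + (1 << (shift - 1))) >> shift) + x

x_avg = x_sum = 0
for _ in range(50):      # input steps from a long run of 0s to a steady 2
    x_avg = naive_step(x_avg, 2)
    x_sum = sum_step(x_sum, 2)
```

The rounding still leaves a small deadband in x_sum itself, but the average
recovered from it lands on the right value, where the naive form never moves
at all.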

Dennis Ferguson


Re: [time-nuts] GPS W/10KHz

2014-02-10 Thread Dennis Ferguson

On 10 Feb, 2014, at 00:48 , Bruce Griffiths bruce.griffi...@xtra.co.nz wrote:
 Dennis Ferguson wrote:
 On 8 Feb, 2014, at 14:50 , ewkeh...@aol.com wrote:
   
 The problem with the PLL analog version is the same as with any digital
 GPSDO. The saw tooth is present at 10 KHz just like 1 Hz. To the best of my
 knowledge there are no GPS receivers out there for less than $1000 without
 sawtooth. Timing receivers output the correction value and you can do the
 correction either with software or with a variable delay.
 
 This is very true, though the sawtooth at a 10 kpps sample rate is going
 to be a little different than the sawtooth at a 1 pps sample rate.  The
 frequency of the sawtooth noise will lie somewhere in the Nyquist bandwidth.
 At a 1 pps sample rate the frequency of the sawtooth noise will hence be
 somewhere between 0 Hz and 0.5 Hz, while at 10 kpps the sawtooth frequency
 will range from 0 Hz to 5 kHz.
 
 Noise at less than 0.5 Hz is not easy to filter, so you are going to require
 the correction from the receiver and/or an integrator with a time constant
 that can only be realized digitally.  Sawtooth noise over most of a 0 Hz to
 5 kHz range, on the other hand, should be eliminated by the analog low pass
 filter after the phase detector in the PLL, giving you something nice and
 clean coming out.  It is only if you get unlucky and the beat frequency
 between GPS time and the receiver's oscillator ends up very close to an
 integer multiple of 10 kHz that you'll see noise at a low enough frequency
 to leak through into the control response.
 
 This is interesting because it suggests that very simple GPSDOs using 10 kHz
 from the receiver might at times work worse than you are likely to observe
 in a single bench measurement as aging (or something) moves the receiver's
 oscillator frequency through one of the bad frequency errors.  Or is there
 a way to avoid that altogether (maybe if the receiver does dithering)?
 
 Dennis Ferguson
   
 Instead of speculating try reading the specifications.
 1Hz phase modulation of the 10kHz output is present.
 The receiver sawtooth error sample rate is 1Hz not 10kHz.
 The 10kHz output signal phase is adjusted at a 1Hz rate by the receiver.

Bruce,

I'm not sure which equipment you want me to read the specifications for,
though I'd be very interested in knowing.  What I'm describing is the
behaviour of the timepulse output of the LEA-6T, which can be configured to
output edges at any rate from 1 Hz to 10 MHz.  There the only relevant
specification I see is the timepulse output quantization error, which is a
constant 21 ns on every output edge independent of the rate at which the
receiver is configured to generate edges.  This should cause exactly the
behaviour described above, and as best I can measure by comparing 1 pps and
50 pps outputs to the divided-down 10 MHz output of a GPSDO is consistent
with how the receiver actually behaves (though my best measurement is none
too good; I need to get a TIC with a resolution better than the 10 ns my
Beaglebone has).  If you run the output at 10 kpps you get 10,000 samples of
the quantization error every second and can average it out a lot faster than
if you only get one sample of the quantization error every second.  I don't
know what a sawtooth error sample rate is if not this.

You seem to be describing a piece of equipment where the sawtooth error is
not a simple consequence of pulse output quantization caused by generating
edges with the receiver's internal free-running clock.  I'd be really curious
to know what equipment this is.  This page

http://gpsdo.i2phd.com

says he looked for but failed to find any sub-Hz sidebands in the Navman
Jupiter 10 kHz output, so it doesn't seem like that receiver is the one you
are thinking of either.

Dennis Ferguson


Re: [time-nuts] GPS W/10KHz

2014-02-10 Thread Dennis Ferguson

On 10 Feb, 2014, at 00:48 , Bruce Griffiths bruce.griffi...@xtra.co.nz wrote:
 Instead of speculating try reading the specifications.
 1Hz phase modulation of the 10kHz output is present.
 The receiver sawtooth error sample rate is 1Hz not 10kHz.
 The 10kHz output signal phase is adjusted at a 1Hz rate by the receiver.

Ah, as soon as I pressed send for the last note I realized what
you were likely telling me.

Yes, the LEA-6T only provides you with a quantization (sawtooth)
correction for 1 pps and no higher rate.  At 1 pps you should
pay attention to the digital correction (implying no analog-only
implementation is possible; you minimally need the delay line
thing) since the frequency of the saw tooth is often low enough
to leak into the control response and the correction should make
the sawtooth go away.

All I was pointing out is that at a higher output frequency, like
10 kpps, the frequency of the quantization saw tooth error will
almost always be much higher as well.  There's no need for the digital
correction since averaging over a relatively short period, like in
the loop filter of an appropriate analog PLL, will almost always be
sufficient to smooth the sawtooth.

Dennis Ferguson


Re: [time-nuts] GPS W/10KHz

2014-02-10 Thread Dennis Ferguson
Bjorn,

 All I was pointing out is that at a higher output frequency, like
 10 kpps, the frequency of the quantization saw tooth error will
 almost always be much higher as well.  There's no need for the digital
 correction since averaging over a relatively short period, like in
 the loop filter of an appropriate analog PLL, will almost always be
 sufficient to smooth the sawtooth.
 
 The sawtooth correction is the difference between where the receiver would
 wish to place the edge and where its known limited resolution electronics
 lets it put the edge.

Yes, exactly.  The range of the error in edge placement will be a constant
related to (probably equal to) the period of the internal clock it is using
to generate the edges.  For an LEA-6T this is 21 ns so, assuming it rounds
off, any edge it places will be in error by +/- 10.5 ns.

 The receiver wish is based on the timesolution from the last measurement.
 In the Jupiter this is done at 1Hz maximum. The sawtooth correction will
 apply the same for all 10k (pos or neg) edges in the 10kHz signal during
 that one second.

Yes, this is true.  Note, however, that the time solution produces not
only a phase error of the receiver's internal clock with respect to GPS
but also the frequency error of the receiver's clock with respect to GPS.
More than this, since the time solution takes time to compute it will be
telling the receiver what the phase error was at some point in the past
rather than what it is now, let alone what it will be at the point in the
future when you want to assert a 1 pps signal.  It can place that future
edge because it knows the actual frequency of its clock with some precision
from the time solution, and that plus knowing a phase offset at some past
time is sufficient to allow it to extrapolate to a future edge placement.
With an LEA-6T the precision of the edge placement will be +/- (10.5 +
epsilon) ns, with the epsilon occurring because it is extrapolating a
past measurement to place a future edge.

Note, however, that the rate at which it can compute time solutions doesn't
change any of this very much.  The fact that an LEA-6 can compute 5 solutions
per second rather than just one will at best just make epsilon a bit
smaller, and this matters not at all since epsilon should be pretty small
already.  If the receiver instead only computed a time solution once every
3 seconds it also wouldn't make a difference, it could still place a 1 pps
edge every second by extrapolating from whatever the last solution was
that it managed to complete.  More than this, if you told the receiver to
place 10,000 edges per second instead of just 1, the placement error of
each one of those edges, individually, would still be +/- (10.5 + epsilon) ns.

The rate at which the receiver computes new solutions has about epsilon
to do with the precision of edge placement.  The sawtooth doesn't come from
the epsilon, it comes from the +/- 10.5 ns.
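A sketch of that extrapolation (my reconstruction for illustration, not
u-blox's actual algorithm; the 21 ns grid figure is from the text):

```python
GRID = 21.0   # ns: period of the receiver's internal free-running clock

def place_edge(target_ns, phase_ns, ferr, grid=GRID):
    """Place a future edge from a past time solution.

    phase_ns and ferr are the receiver clock's phase error (ns) and
    fractional frequency error vs GPS, known from the last solution.
    """
    # When, on the receiver's own (erroneous) timescale, the edge is wanted.
    local = (target_ns - phase_ns) / (1.0 + ferr)
    # The receiver can only fire on a tick of that clock: +/- grid/2 error.
    tick = round(local / grid) * grid
    # Where the chosen tick actually lands in GPS time.
    return tick * (1.0 + ferr) + phase_ns
```

However stale the solution, the placement error stays bounded by half the
internal clock period; the staleness only contributes the small epsilon.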

 There are effects that are not easily filtered away in the analog domain.
 See the archives and
 
http://www.leapsecond.com/pages/m12/sawtooth.htm

This is good.  Notice the amplitude and the frequency of those sawtooths.
The amplitude is the period of the internal clock placing the edges, i.e.
21 ns for an LEA-6T and what looks like 30 ns for the receiver above.

Then there's the frequency.  It varies widely but the highest frequency
seen, in the 4th graph down, looks to be about 0.5 Hz.  It isn't an
accident that there is no higher frequency, and I'll just assert
that this maximum frequency has nothing to do with the time solution
update rate of the receiver.  It would not change if you looked at
the 1 pps output of a 5-update-per-second LEA-6.  Instead the highest
sawtooth frequency is 0.5 Hz because he's looking at a 1 pps output,
getting one sample per second, and if you sample a signal at one sample
per second then the frequencies you see in the samples are always going
to be in the range 0-0.5 Hz.  Essentially this is integrating a beat
frequency between the receiver's oscillator and GPS time, which could be
very high in frequency, but by sampling at 1 pps the difference, whatever
it is, gets folded into the 0-0.5 Hz Nyquist bandwidth.  The low frequency
of the sawtooth observed at 1 pps makes it a problem for analog filters.

All I'm pointing out, then, is that if you increase the pulse rate output
by the receiver from 1 pps to 10 kpps you will still get a sawtooth, like
1 pps, and the amplitude of the sawtooth will be unchanged from 1 pps, but
the frequency of the sawtooth won't be limited to 0-0.5 Hz and will instead be
folded into the 10 kpps Nyquist bandwidth of 0 - 5 kHz.  Unless you are
very unlucky this will give you the same sawtooth error at a much higher
frequency, making it much more amenable to analog domain filtering.
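The folding arithmetic is simple enough to sketch (a hypothetical helper):

```python
def folded(f_beat, f_sample):
    """Alias frequency (Hz) a beat appears at when sampled at f_sample."""
    f = f_beat % f_sample
    return min(f, f_sample - f)   # fold into the [0, f_sample/2] Nyquist band
```

At 1 pps a 123.7 Hz beat aliases down to 0.3 Hz, squarely in the
hard-to-filter region; at 10 kpps the same beat shows up at 123.7 Hz, easy
work for an analog loop filter.  Only a beat within a fraction of a hertz of
a multiple of 10 kHz stays low.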

Dennis Ferguson


Re: [time-nuts] GPS W/10KHz

2014-02-09 Thread Dennis Ferguson

On 8 Feb, 2014, at 14:50 , ewkeh...@aol.com wrote:
 The problem with the PLL analog version is the same as with any digital
 GPSDO. The saw tooth is present at 10 KHz just like 1 Hz. To the best of my
 knowledge there are no GPS receivers out there for less than $1000 without
 sawtooth. Timing receivers output the correction value and you can do the
 correction either with software or with a variable delay.

This is very true, though the sawtooth at a 10 kpps sample rate is going
to be a little different than the sawtooth at a 1 pps sample rate.  The frequency
of the sawtooth noise will lie somewhere in the Nyquist bandwidth.  At a 1 pps
sample rate the frequency of the sawtooth noise will hence be somewhere between
0 Hz and 0.5 Hz, while at 10 kpps the sawtooth frequency will range from 0 Hz
to 5 kHz.

Noise at less than 0.5 Hz is not easy to filter, so you are going to require
the correction from the receiver and/or an integrator with a time constant
that can only be realized digitally.  Sawtooth noise over most of a 0 Hz to
5 kHz range, on the other hand, should be eliminated by the analog low pass
filter after the phase detector in the PLL, giving you something nice and clean
coming out.  It is only if you get unlucky and the beat frequency between GPS
time and the receiver's oscillator ends up very close to an integer multiple of
10 kHz that you'll see noise at a low enough frequency to leak through into the
control response.

This is interesting because it suggests that very simple GPSDOs using 10 kHz
from the receiver might at times work worse than you are likely to observe in
a single bench measurement as aging (or something) moves the receiver's 
oscillator
frequency through one of the bad frequency errors.  Or is there a way to avoid
that altogether (maybe if the receiver does dithering)?

Dennis Ferguson


Re: [time-nuts] Arduino Frequency Accuracy

2014-01-19 Thread Dennis Ferguson

On 18 Jan, 2014, at 20:11 , Jim Lux jim...@earthlink.net wrote:
 Just in case you want to build a clock with an Arduino..
 
 http://jorisvr.nl/arduino_frequency.html
 
 ADEV measurements, etc.
 
 
 take home message.. absolute accuracy is a few kHz out of 16 MHz... probably 
 a 100 ppm crystal.
 
 On some Arduinos (or Teensy3's which is what I use) there's a provision for a 
 32kHz clock crystal.. that might be a bit better as a time base.

That's a sloppy crystal if what you are looking for is an oscillator
with an output frequency which is very close to the number written
on the crystal's package, but it might not be so bad if you measure
its frequency under actual operating conditions and use the
calibrated value instead.

I think the ADEV of the crystal is in fact rather good judged by PC
standards.  I interpret the floor of near 10^-9 at 100 seconds as
meaning some (mythical?) optimal synchronization software might
keep a clock based on that within 100 ns of the GPS receiver at
an adjustment rate of about one every 100 seconds.  This is quite
good compared to other hardware I've recently been looking at.

The ceramic resonator, on the other hand, is pretty awful.

Dennis Ferguson



Re: [time-nuts] L1/L2 GPS Receiver

2014-01-17 Thread Dennis Ferguson

On 17 Jan, 2014, at 11:43 , Michael Perrett mkperr...@gmail.com wrote:
 Magnus, I believe that he is referencing the the new L2 C/A code, which is
 not protected. Reference
 http://www.gps.gov/systems/gps/modernization/civilsignals/

It would be nice to have a receiver for that when they turn it on, but I
don't think that's what he wants.  The observables used for PPP processing
are L1 and L2 carrier phase.  You don't need a receiver capable of decoding
the P(Y) code but you do need a receiver capable of receiving the full
bandwidth of its carrier on both L1 and L2 and tracking the phase.

The commercial receivers which do this seem to cost dearly.

Dennis Ferguson


Re: [time-nuts] Loran

2013-11-15 Thread Dennis Ferguson

On 15 Nov, 2013, at 19:12 , Bob Camp li...@rtty.us wrote:
 We probably could agree on Seneca NY since that’s about equal distance to the 
 pair of us. 
 
 Does anybody know the proposed ERP on the new system? Some of the master’s on 
 the China chains are pretty high power if I remember correctly. 

This powerpoint presentation

http://www.tinyurl.com/l2humtb

says the NL40 transmitter they just bought is 300 kW.

I thought a lot of the Asian chains, including China, used
Megapulse equipment like the US.  I think Megapulse did use
to say their transmitters were multi-Megawatt, but I can't
check that since

http://www.megapulse.com

now goes someplace else.

I wonder whether the last bit means that Megapulse is now out
of the transmitter business for good, or if Ursanav's infatuation
with the Nautel transmitters is just a passing fancy while they
complete their vertically integrated monopoly.

Dennis Ferguson


Re: [time-nuts] Time stamping with a PICPET

2013-10-27 Thread Dennis Ferguson

On 26 Oct, 2013, at 23:53 , Hal Murray hmur...@megapathdsl.net wrote:
 dennis.c.fergu...@gmail.com said:
 That's perfect if it works like it seems it should.  The problem with modern
 CPUs is finding an instruction sequence that does the read-write-read in
 that order, allowing each to complete before doing the next.  The write is
 the biggest problem.  Writes are often buffered, and even when you can find
 an instruction which stalls until it clears ...
 
 I'm far from a wizard in this area, but I used to work with people who were.
 
 The rules for things like PCI cover that case.  If you do something like 
 write to a register to clear an interrupt request, you have to follow it by a 
 read to that register or one close to it.  As you hippity-hop through bridges 
 and such, the read gets trapped behind the write and doesn't happen until the 
 write finishes.
 
 When using the CPU cycle counter as a system clock source it is common to
 find that the two reads in a read-write-read sequence are only a cycle or
 two different even when you know the write is crossing an interconnect with
 10's of nanoseconds of latency (not that 10's of nanoseconds is bad...). 
 
 That's reasonable if the read-write-read were to cycle-counter, 
 someplace-else, and cycle-counter.  The write has been started.  It's in the 
 piepline, but you haven't told the memory system that you need it to finish.
 
 Try read-write-read-read where the outer reads are to the cycle counter and 
 the inner write-read both go to the same IO device.

Note that you've turned a read-write-read into a read-read-read with an
additional write wart.  As I mentioned, you can often find instructions to
do a read-read-read correctly, so this will likely work too.

Putting some numbers to this might help get a handle on the cost, though.  One
reason for doing the before- and after- reads is to get a measurement of the
ambiguity of the sample (which also provides a basis for filtering damaged
samples).  Cycle counter reads hardly cost anything, but on a 166 MHz, 64 bit
PCI-X bus, the last, highest-performance PCI bus that was a real bus (PCIe is
a packet protocol running on a network of point-to-point links), a single
register read takes about 74 ns to complete.  I'll guess the write adds about
40 ns to that.  Since the write increases the ambiguity from +/- 37 ns (i.e.
read-read-read only) to +/- 57 ns, finding a way to do it with a read-read-read
alone provides a useful improvement.  For PCIe the write is probably cheaper,
but the read is likely to be even more expensive due to the packetization and
(de)serialization logic that bus requires.
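To make the before/after arithmetic concrete, here is a minimal sketch of
turning a read-X-read triple into a timestamped sample with an ambiguity
bound (the function names are my own illustration, not from any real driver):

```python
def midpoint_sample(t_before, t_after):
    """Collapse the two surrounding cycle-counter reads into a single
    timestamp (their midpoint) plus the +/- ambiguity of the sample
    (half the width of the read-X-read window)."""
    return (t_before + t_after) / 2.0, (t_after - t_before) / 2.0

def filter_damaged(samples, max_ambiguity):
    """Drop samples whose window is too wide, e.g. because an interrupt
    or cache miss landed between the two cycle-counter reads."""
    return [s for s in samples if s[1] <= max_ambiguity]

# A ~74 ns PCI-X register read gives +/- 37 ns of ambiguity; adding a
# ~40 ns posted write widens the window to +/- 57 ns.
read_only = midpoint_sample(0.0, 74.0)    # (37.0, 37.0)
with_write = midpoint_sample(0.0, 114.0)  # (57.0, 57.0)
```

The half-width is what makes filtering cheap: any sample whose window is much
wider than the expected bus latency can be discarded outright.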

It is the case, however, that if you do a naive implementation of the
read-read-read, or the read-write-read-read, you may end up finding the first
and last read of the cycle counter are still only a few cycles apart.  The
reason is that while most modern CPUs will execute the instructions
more-or-less in order (most will do 2 instructions per cycle if they can now,
so the order may not be exact), the CPU won't have a reason to actually wait
for the 74 ns it takes that middle read to complete and will go barrelling
along executing additional instructions until it finds something that
actually uses the result it hasn't got yet.  The cycle counter read doesn't
depend on the previous read, so there's no reason to wait.

To get these operations serialized correctly you need to find the magic
instructions that force that to happen.  On a recent x86 you might be able
to use the serializing rdtscp instruction; on older ones you might need
to separate the operations with cpuid instructions.  On other CPUs it could
be barrier instructions or something else entirely.  A posted write may
only be serializable by adding an extraneous and expensive read to the
same device after it, as you suggest.

Dennis Ferguson




Re: [time-nuts] Time stamping with a PICPET

2013-10-26 Thread Dennis Ferguson

On 26 Oct, 2013, at 18:21 , Tom Van Baak t...@leapsecond.com wrote:
 Right. The key is not to use count-down timers and interrupts at all. The key 
 is no interrupts; nothing asynchronous. What you do is:
 
 1) About once a second (it doesn't really matter), with hot cache and 
 interrupts disabled, record the s/w time before and after you output a pulse. 
 If your PC/SBC has a low-latency DTR or GPIO pin code path, you're golden.

That's perfect if it works like it seems it should.  The problem with modern
CPUs is finding an instruction sequence that does the read-write-read in that
order, allowing each to complete before doing the next.  The write is the
biggest problem.  Writes are often buffered, and even when you can find an
instruction which stalls until it clears the buffer, the write will also
often be posted across the interconnect bus, so there's no way for the CPU
to know when the write makes it to the device, let alone make it wait until
that happens.

When using the CPU cycle counter as a system clock source it is common to
find that the two reads in a read-write-read sequence are only a cycle or
two different even when you know the write is crossing an interconnect with
10's of nanoseconds of latency (not that 10's of nanoseconds is bad...).

It is usually easier to find the magic instructions to make a read-read-read
work the way one expects, though even that can be a challenge.  It is
possible to do the same output pulse thing with a read-read-read if there is
a PWM peripheral to generate the pulses.  The PWM is programmed to output
pulses at whatever frequency is convenient while the read-read-read sampling
is used to determine the relationship between the PWM counter and the system
clock.  Of course, this requires a peripheral which legacy PCs often don't
have.
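The counter-to-clock relationship from those read-read-read samples can be
recovered with an ordinary least-squares fit; a sketch on synthetic data
(function name, window width, and the numbers are my own assumptions):

```python
def fit_counter_to_clock(triples):
    """Least-squares fit sys_time ~= offset + rate * counter from
    (clock_before, counter, clock_after) read-read-read triples; each
    triple is collapsed to (counter, midpoint of the two clock reads)."""
    pts = [(c, (t0 + t1) / 2.0) for (t0, c, t1) in triples]
    n = float(len(pts))
    sx = sum(c for c, _ in pts)
    sy = sum(t for _, t in pts)
    sxx = sum(c * c for c, _ in pts)
    sxy = sum(c * t for c, t in pts)
    rate = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - rate * sx) / n, rate

# Synthetic samples: a 100 MHz PWM counter that started when the system
# clock (in seconds) read 5.0, sampled with a 200 ns read-read window.
triples = [(t - 1e-7, (t - 5.0) * 1e8, t + 1e-7)
           for t in (10.0, 11.0, 12.0, 13.0)]
offset, rate = fit_counter_to_clock(triples)  # ~5.0 s, ~10 ns per count
```

With more samples the fit averages down the per-sample ambiguity, which is
the point of taking the measurements repeatedly rather than once.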


 If the CPU/PC/SBC has h/w counter/capture latches, you're all set. Then 
 there's no jitter and NTP should be as accurate as the ADEV(tau 1s) of the LO 
 that clocks the CPU and the ADEV(tau) of the external (GPS) 1PPS source.
 
 But h/w counter/capture is something no legacy PC has had AFAIK. If the new 
 breed of SBC have this capability, NTP should show a one or two orders of 
 magnitude jump in precision on those platforms.

The TI CPU used for the Beaglebone (Black) has three.  The counter being sampled
is 100 MHz.

Dennis Ferguson


Re: [time-nuts] Speaking of Costas loops (WAAS)

2013-07-11 Thread Dennis Ferguson

On 10 Jul, 2013, at 14:08 , David I. Emery d...@dieconsulting.com wrote:
 It seems completely inconceivable to me that either the antenna
 system (particularly feeds) or transponder RF hardware on any commercial
 Ku or C or Ka or X band satellite could possibly be frequency agile
 enough to tune to 1575.42 MHz unless it was purpose designed to radiate
 on that frequency from the start.
 
   So any hosted WAAS payload is completely application specific.

If you look at the pictures here

http://www.orbital.com/NewsInfo/Publications/Galaxy_Fact.pdf

the satellite on the right has things sticking out the bottom, in the
back corner, that are missing on the others and that look a lot like
the antennas on GPS satellites.  The WAAS satellite is also 350 pounds
heavier than the other two even though the C-band payload is identical
on all three, so it seems like there could be a fair amount of extra
stuff added for WAAS support.

Dennis Ferguson


Re: [time-nuts] Speaking of Costas loops

2013-07-05 Thread Dennis Ferguson

On 5 Jul, 2013, at 09:18 , Jim Lux jim...@earthlink.net wrote:
 I believe that the original WAAS repurposed transponders intended for other 
 L-band satellite signals (e.g. Sirius/XM/LightSquared).

I'm not sure.  The original WAAS satellites, I think these

http://nssdc.gsfc.nasa.gov/nmc/spacecraftDisplay.do?id=1996-070A
http://nssdc.gsfc.nasa.gov/nmc/spacecraftDisplay.do?id=1997-027A

indeed were commercial L-band satellites, as you suggest, but they still
note that

   Each INMARSAT-3 also carried a navigation transponder designed to
   enhance the accuracy, availability and integrity of the GPS and
   Glonass satellite navigation systems.

which leaves the impression that that particular bit of hardware might be
special-purpose.

Dennis Ferguson


Re: [time-nuts] Speaking of Costas loops

2013-07-05 Thread Dennis Ferguson

On 5 Jul, 2013, at 08:33 , Bob Camp li...@rtty.us wrote:
 The sat needs to transmit at the GPS frequencies and have an uplink that 
 works exclusively with those frequencies. (or at least that sub band). A 
 normal transponder probably would not radiate at the GPS allocation, simply 
 to be a good citizen. I believe the specialization is simply a frequency 
 mod to allow WAAS to pass through. There is no mention of a space qualified 
 Cs and / or Rb flying on those birds and no indication that the ground 
 segment is controlling such a payload. If all that *was* present, then 
 including them in the normal navigation solutions would be a zero cost next 
 step. 

Addressing the last sentence, I found a government WAAS reference
which indicates that the WAAS satellites are indeed interchangeable
with GPS satellites in navigation solutions.  It is on page 7 of
this

http://www.gps.gov/technical/ps/2008-WAAS-performance-standard.pdf

where it says

The WAAS GEO broadcast also provides an additional ranging source
for improved availability of navigation services. When a WAAS receiver
is using the corrections and integrity messages broadcast by the GEO,
only four GPS or GEO satellites are needed, which increases the
availability of service versus RAIM or RAIM/FDE.

While what is or isn't required in the satellite to support this is
still a mystery it seems like the timing accuracy coming back must
end up being equivalent to a real GPS satellite.  What this is good
for is interesting to think about.

Dennis Ferguson


Re: [time-nuts] Speaking of Costas loops

2013-07-04 Thread Dennis Ferguson

On 3 Jul, 2013, at 21:05 , Bob Camp li...@rtty.us wrote:
 If the WAAS sats were purpose designed to provide a high accuracy carrier, 
 then yes there are ways to do it. The fundamental design concept of a bent 
 pipe is that you don't do any of that. You do not care what's going through 
 the bird, it just maps the input frequencies to the output and amplifies them 
 (a lot). Again, the WAAS signal is simply piggybacking on existing hardware. 
 The conversion oscillator is not locked to the GPS carrier (or to any other 
 carrier). It's simply a free running quartz based oscillator, running into a 
 synthesizer to get the appropriate microwave frequency. 

I'm not sure about the Again, ... part.  All three WAAS satellites are
commercial satellites, but they were all launched recently enough (2 in
2005, 1 in 2008) to have had WAAS-specific payload added.  The solicitation
for the 2008 satellite is here

   https://www.fbo.gov/index?s=opportunitymode=formid=f5aacd4bba2ef67b0c59b586900499b6tab=core_cview=1

and is dated 2002; this isn't looking for service on a satellite already in
orbit.  For the 2005 satellites, the Telesat one is mentioned here

   http://www.telesat.com/services/government-services

which says

Telesat’s Anik F1R includes a specialized payload for the Wide Area
Augmentation System

while if you look at the Orbital Sciences blurb on the last three
satellites it built for PanAmSat, here

   http://www.orbital.com/newsinfo/publications/galaxy_fact.pdf

you'll see that they are all exclusively satellite TV things, with 24 active
C-band transponders and 8 spares, except for Galaxy 15 which weighs 350
pounds more than the other two and about which it says:

The Galaxy 15 satellite, which features a unique hybrid payload
configuration, was launched on October 13, 2005. In addition to C-band
commercial communications, the spacecraft also broadcasts Global
Positioning System (GPS) navigation data using L-band frequencies as
part of the Geostationary Communications and Control Segment (GCCS)
implemented by Lockheed Martin for the U.S. Federal Aviation
Administration (FAA).

I don't think they can use any old satellite for WAAS, they added payload
for it.  Note that when Galaxy 15 went awol it took the WAAS service with it
for most of a year, even though it was replaced in its orbital slot for TV
service by a spare within a week or so (though Wikipedia says the
replacement was Galaxy 12, so I guess that's predictable from the blurb
above).

So I've been assuming that while the WAAS satellites are commercial the WAAS
transmitters are specialized to the service and included for its exclusive use.
I hence guess they could have been designed to work however they needed to.

Dennis Ferguson


Re: [time-nuts] Speaking of Costas loops

2013-07-03 Thread Dennis Ferguson

On 3 Jul, 2013, at 10:48 , Attila Kinali att...@kinali.ch wrote:

 On Wed, 3 Jul 2013 08:29:02 -0400
 Bob Camp li...@rtty.us wrote:
 
 There are two batches of GPS / WAAS sats up there:
 
 1) The ones with numbers above 100 that are geosync and that only do WAAS
 
 2) The ones with numbers = 32 that do nav. These are not geosync. 
 
 I believe the only ones with corrected / high stab clocks on board are
 those in the second group. The stuff in the first group aren't dedicated
 sats, just leased transponders on conventional multipurpose geosync birds. 
 
 I don't know about WAAS, but AFAIK the EGNOS signals are generated on
 ground using Cs references and retransmitted by the satellites using
 a bent pipe.  I.e. the signals should be of time-nut quality even without
 high accuracy frequency standards in the birds themselves.
 
 (Sorry, i'm not able to find where i read about that, so no references today)

I have also read that WAAS satellites can be usefully included in the GPS
solution, so they aren't necessarily inferior, but I also don't have
a reference.  There is this:

http://tf.boulder.nist.gov/general/pdf/2299.pdf

The clocks are indeed ground based and good quality.  The advantage of using
them as an alternative to GPS CV (which is what the paper is about) is that
they transmit unencrypted code on two frequencies to allow computing
ionospheric corrections, and they don't move (much) so you can track them
continuously with a dish to get a big signal-to-noise improvement and
multipath insensitivity.  The last bit seems like a mixed blessing, though,
since the dish means you depend on only the one satellite it is pointed at
and hence suffer from whatever bad things happen to it.  The paper notes
events that it characterises as an increasing problem with the broadcast
WAAS ephemeris, followed by an outage and clock jump, which I interpret as
maybe being an adjustment made to the satellite orbit which can't be
represented properly in the ephemeris.  I assume that could happen with
regular GPS satellites too, but if you are tracking a lot of them at once it
is easy to detect and toss out a solution outlier.

Dennis Ferguson


Re: [time-nuts] Speaking of Costas loops

2013-07-03 Thread Dennis Ferguson

On 3 Jul, 2013, at 11:47 , Bob Camp li...@rtty.us wrote:
 The pipe in this case is up on one frequency and down on another. The 
 conversion oscillator on satellite that's the weak link, no matter how good 
 the signal from the ground happens to be. 

That's certainly true but it doesn't seem like a problem that the
presence of a high stability free-running oscillator, like a rubidium,
would help.  The oscillator on a geostationary satellite has a
continuous frequency reference to lock to (the uplink carrier) and
hence only needs short term stability sufficient to track this and
transfer it accurately to the downlink.  It seems like this is the
kind of problem that quartz excels at.

Dennis Ferguson



Re: [time-nuts] BPSK decoder for WWVB

2013-07-03 Thread Dennis Ferguson

On 3 Jul, 2013, at 14:03 , Tim Shoppa tsho...@gmail.com wrote:
 I have also heard YVTO on 5MHz underneath both WWV and WWVH, strangely
 off-kilter by half a second or so.

When BPM does that, like maybe at the beginning of this

http://www.youtube.com/watch?v=WRaRB-x84xg

I think it is because they are transmitting UT1 pips rather
than UTC pips.  I assume this might be convenient for celestial
navigation users, though I can't imagine that there are a whole
lot of those left.

Dennis Ferguson


Re: [time-nuts] looking for low-power system for gps ntp timekeeping

2013-06-30 Thread Dennis Ferguson

On 30 Jun, 2013, at 08:50 , David J Taylor david-tay...@blueyonder.co.uk 
wrote:
 From: Attila Kinali
 []
 Oh.. and if you want to go the linux way and use a Raspberry Pi.. just dont!
 Use a Beaglebone black instead. It uses less power and is easier to deal with.
 Not to mention that you dont have all those USB related problems.
 []
 Attila Kinali
 ===
 
 I've built three Raspberry Pi stratum-1 NTP servers:
 
 http://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html
 
 one of which has both a Wi-Fi dongle and a DVB TV receiver stick attached, 
 and on none of these have I seen any USB problems.  I'm using a 5.25 V 2A 
 power supply from ModMyPi.com.  You can view the timekeeping accuracy here:
 
 http://www.satsignal.eu/mrtg/performance_ntp.php

You aren't necessarily showing the part where the Raspberry Pi is a bit
weak, though.  How well do clients which receive their time via the
USB ethernet interface do?

The Beaglebone Black has about three advantages going for it in this
application:

- The ARM CPU is about twice as fast as the Raspberry Pi's for about
  the same power consumption (I'm not sure this is a particular advantage
  for NTP, however, so I won't count it).

- The Ethernet MAC core is built into the SoC, and tightly coupled to it,
  so packet traffic doesn't have to sit waiting for the USB scheduler to
  get around to doing something with it.

- The Ethernet MAC core also provides fairly good, complete IEEE1588
  support.  This is not of direct use to NTP but does provide a way to
  calibrate the software timestamps which NTP produces and consumes to
  better match when the packets arrive from and are transmitted by the actual
  hardware.  I.e. you can measure the typical difference between hardware
  and software inbound timestamps (measuring interrupt latency), and hardware
  and software outbound timestamps (measuring the processing time spent in the
  outbound network stack) for PTP UDP packets, and then use these results
  to improve the symmetry of software timestamps for NTP UDP packets.  There
  is no way I know of to measure this without the IEEE 1588 support (and the
  outbound number in particular is often big enough to deserve correction).

- The TI SoC also has a hardware timestamp capture peripheral (look for
  eCap in the documentation) which can capture PPS edge times with
  single-digit-nanosecond accuracy.  That's a couple of orders of magnitude
  better than interrupt sampling and eliminates the jitter of the latter
  measurements.

For a $5-$10 difference in price for the board I think these are worth it.
The RPI makes a fine, low-power replacement for Intel hardware for this,
but the Beaglebone Black has the raw material to do significantly better at
this than either of them.  The only problem with the Beaglebone is that it
is not as popular as the Raspberry Pi, so making use of the former is going
to require one to do more work on one's own to take advantage of it.
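The timestamp-calibration idea above can be sketched briefly.  This is a toy
illustration under my own assumptions (names, the median as the estimator,
and the latency numbers are mine, not from any NTP or PTP implementation):

```python
import statistics

def timestamp_corrections(rx_pairs, tx_pairs):
    """Given (hardware, software) timestamp pairs for the same PTP
    packets, estimate the typical software timestamp error: on receive
    the software stamp lags the wire (interrupt latency), on transmit
    it precedes it (outbound stack time).  The medians can then be
    applied to NTP's software timestamps to improve their symmetry."""
    rx_latency = statistics.median(sw - hw for hw, sw in rx_pairs)
    tx_latency = statistics.median(hw - sw for hw, sw in tx_pairs)
    return rx_latency, tx_latency

# Toy data in seconds: ~15 us interrupt latency inbound, ~30 us of
# network-stack time outbound.
rx = [(0.0, 15e-6), (1.0, 1.0 + 16e-6), (2.0, 2.0 + 15e-6)]
tx = [(30e-6, 0.0), (1.0 + 31e-6, 1.0), (2.0 + 30e-6, 2.0)]
rx_corr, tx_corr = timestamp_corrections(rx, tx)
```

The asymmetry between the two corrections is exactly what biases an NTP
offset estimate, which is why the outbound number deserves attention.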

Dennis Ferguson


Re: [time-nuts] clock-block any need ?

2013-01-01 Thread Dennis Ferguson
so, with the quality of the result depending on the population and
nature of variability of the cluster but hardly at all on the
outliers, and with the lack of a measurable cluster telling you
when you might be better off relying on your local clock rather
than the network.  The approach relies on the things we do know
about networks and networking equipment while avoiding reliance on
things we can't know: it mostly avoids making gaussian statistical
assumptions about distributions that may not be gaussian.  The field
of robust statistics provides tools addressing this which might
be of use.

I guess it is worth completing this by mentioning what it
says about ntpd.  First, ntpd knows all of the above, probably
much, much better than I do, though it might not put it in
quite the same terms.  If you make the assumption that the
stochastic delays experienced by samples are evenly distributed
between the outbound and inbound paths (this is not a good match
for the real world, by the way, but there are constraints...) then
round trip delay becomes a stand-in measure of cluster, and ntpd
does what it can with this.  The fundamental constraint that limits
what ntpd can do, in a couple of ways, is the fact that the final
stage of its filter is a PLL.  The integrator in a PLL assumes
that the errors in the samples it is being fed are zero-mean and
normally distributed, and will fail to arrive at a correct answer if
this is not the case, so if you want to filter samples for which
this is unlikely to be the case you need to do it before they get
to the PLL.  The problem with doing this well, however, is that a
PLL is also destabilised by adding delays to its feedback path,
causing errors of a different nature, so anything done before the
PLL is severely limited in the amount of time it can spend doing
that, and hence the number of samples it can look at to do that.
Doing better probably requires replacing the PLL; the "replace
it with what?" question is truly interesting.
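In software terms ntpd's final stage is a proportional-integral loop
steering a paper clock.  A toy simulation (the gains and structure here are
my own illustration, not ntpd's actual constants) shows the integrator
learning a frequency error from zero-mean offset samples:

```python
def discipline(freq_error, steps, kp=0.3, ki=0.05):
    """Simulate a PI loop polled once per interval against a crystal
    running freq_error fast (fractional frequency).  Only arithmetic is
    adjusted -- the simulated oscillator itself is never touched."""
    phase, integ, corr = 0.0, 0.0, 0.0
    for _ in range(steps):
        phase += freq_error - corr  # drift accumulated this interval
        integ += ki * phase         # integral term: learned frequency
        corr = kp * phase + integ   # rate correction for next interval
    return phase, integ

# A crystal 50 ppm fast: the integrator converges to 50e-6 and the
# residual phase error goes to zero -- but only because the samples fed
# to it are clean and arrive with no added delay in the feedback path.
phase, learned = discipline(50e-6, 300)
```

Biased samples shift the converged phase, and extra delay in the loop
shrinks the stable gain region, which is the constraint described above.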

I suspect I've gone well off topic for this list, however, and for
that I apologize.  I just wanted to make sure it was understood that
there is an argument for the view that we do not yet know of any
fundamental limits on the precision that NTP, or a network time
protocol like NTP, might achieve, so any effort to build NTP servers
and clients which can make their measurements more precisely is not
a waste of time.  It instead is what is required to make progress
in understanding how to do this better.

Dennis Ferguson


Re: [time-nuts] clock-block any need ?

2012-12-28 Thread Dennis Ferguson

On 27 Dec, 2012, at 11:28 , Attila Kinali att...@kinali.ch wrote:
 On Thu, 27 Dec 2012 10:55:12 -0800
 Dennis Ferguson dennis.c.fergu...@gmail.com wrote:
 
 I don't think I buy this.  It takes 70 milliseconds for a signal
 transmitted from a GPS satellite to be received on the ground, but
 we don't use this fact to argue that sub-70 ms timing from GPS is
 not possible.  If you have a network of high-bandwidth routers and
 switches doing forwarding in hardware, and carrying no traffic other
 than the packets you are timing (I have access to lab setups that
 can meet this description) you can observe packet delivery times that
 are stable at well under the microsecond level even though the total
 time required to deliver a packet is much larger.
 
 I'm not sure about this. Knowing about how switches work internally,
 i'd guess they have jitter of something in the range of 1-10us, but
 i've never done any measurements. Have you any hard numbers?

I've measured it for large routers, but the numbers are not mine.  In
a former life I helped design forwarding path ASICs.

I'm interested in what that guess is based on, however, since I can't
imagine where 1-10us of self-generated jitter from an ethernet switch
would come from, if not from competing traffic.  A well-spec'd piece of
silicon to handle 20 Gbps of full-duplex bandwidth needs to be capable
of processing about 40 million packet arrivals per second, or about
one packet every 25 ns.  That's pretty much what is needed to build
a good ~$200, 24 port gigabit ethernet switch. The cheapest hardware
forwarding path to implement, which is generally what you'll find in
there, is a fixed processing pipeline (or pipelines) that takes packets
in at the required rate and spits out the results at that rate delayed by
N chip clock cycles; N might be large (but not too large; N tells you
how many packets it needs to be able to have in process simultaneously
and it is cheaper in logic if you can minimize that number) but it is a
constant.  Your jitter estimate implies that such a switch, even when
not occupied with other traffic, will either sometimes leave a packet
sitting around for between 40 and 400 packet arrival times before getting
around to doing something with it, or else will sometimes do between 40
and 400 packet arrival times worth of extra work to forward the thing.
My experience with this suggests that it is actually easier to build if
it doesn't work like that.  The switch I recently bought for my house,
this one

  
http://www.netgear.com/business/products/switches/prosafe-plus-switches/JGS524E.aspx#

specifies the total latency (that's total time, not jitter) through the
switch at 4.1 us for 64 byte packets, a precision I expect they
arrived at by just adding up the store-and-forward and fixed pipeline
delays.  Nearly all of the variation in delay is from competing traffic.
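
The packet-rate arithmetic above is easy to check, taking 64-byte minimum
frames and, as the figures above implicitly do, ignoring preamble and
inter-frame gap overhead:

```python
bandwidth_bps = 20e9                  # ~20 Gbps of full-duplex bandwidth
min_frame_bits = 64 * 8               # minimum ethernet frame, no preamble/IFG
pps = bandwidth_bps / min_frame_bits  # ~39 million packet arrivals per second
ns_per_packet = 1e9 / pps             # ~25.6 ns between arrivals
```

which is where the "one packet every 25 ns" pipeline budget comes from.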

Even if 1-10us was observed for individual samples, however, that is
still missing the point.  The interesting number is not the variability
of individual samples but the stability of the measure of central
tendency (e.g. the average, if the variation were gaussian) derived
from many such samples.

 If you add competing
 traffic, like real life networks, the packet-to-packet variability
 becomes much worse, but this is sample noise that can be addressed
 by taking larger numbers of samples and filtering based on the expected
 statistics of that noise.
 
 Here lies the big problem. While with GPS we pretty much know what
 the time is that the signal takes to reach earth, we have no clue
 with network packets in a loaded network. We don't even have an
 idea what the packet transmit distribution is in the moment we are
 doing our measurements. Neither the queue length in the router/switch
 nor anything else. And the loading of a switch changes momentarily
 and this in turn changes the delay of our packets. You can of course
 apply math and try to get rid of quite a bit of noise, but you will
 never get rid of it down to ns levels.

?? NTP is a two-way time transfer.  We directly measure how long the
cumulative queue lengths are for the round trip for each sample, and we
hence directly measure how this changes from sample to sample.  There are
also good statistical models for the average behaviour of such queues when
operating at traffic levels where packet losses are rare and where the
bandwidth is not being significantly consumed by a small number of large,
correlated, flows, which is the usual operating state for both local
networks and Internet backbones (it is usually access circuits that are
the problem) and there are heuristics one can use to determine when the
statistics are not likely to be so nice; these are of use when designing
the thing which has the queues.  What we haven't had is hosts and servers
capable of making precise measurements either of packet arrivals and
departures (why is a ping round trip reported to be 200 us or 400 us
when

Re: [time-nuts] clock-block any need ?

2012-12-27 Thread Dennis Ferguson

On 27 Dec, 2012, at 08:05 , Chris Albertson albertson.ch...@gmail.com wrote:
 You do not need to use something like the Clock-Block to build a very good 
 NTP server, but if you want to build the *ultimate* server it is part of the 
 mix.
 
 Yes this is true.  The server can be very good, meaning that if it
 were better the clients that it serves could not know the
 difference.  A simple example: if a wall clock moved its hands with
 millisecond precision, it would not serve the clients (human eyeballs)
 any better if it moved with nanosecond precision, because human
 perception is measured in mS not nS.  Same with the time server: it
 communicates with its clients over a network that has some uncertainty
 in the delay, and ultra-precision is lost.  So nanosecond level
 timekeeping in the server is not required.  You can do uSec level
 time keeping with the standard TTL can on most mother boards.
 However this list is for nuts and you might think it is fun to try
 and do 1000 times better time keeping than is needed, in that case you
 will need some kind of specialized clock hardware.

I don't think I buy this.  It takes 70 milliseconds for a signal
transmitted from a GPS satellite to be received on the ground, but
we don't use this fact to argue that sub-70 ms timing from GPS is
not possible.  If you have a network of high-bandwidth routers and
switches doing forwarding in hardware, and carrying no traffic other
than the packets you are timing (I have access to lab setups that
can meet this description) you can observe packet delivery times that
are stable at well under the microsecond level even though the total
time required to deliver a packet is much larger.  If you add competing
traffic, like real life networks, the packet-to-packet variability
becomes much worse, but this is sample noise that can be addressed
by taking larger numbers of samples and filtering based on the expected
statistics of that noise.  That is, the level of noise affecting
each individual sample entering the filter does not alone predict
the noise level of the result coming out, the latter also depends on the
number of samples and the quality of the model of the noise employed by
the filter.  Note that I often see claims of time synchronization with
PTP at the 10 ns level or better.  As this level of synchronization is
usually achieved by the brute force method of measuring transit times
across every network device on the path from source to destination I
have no doubt that what NTP can do will necessarily be worse than this,
but I don't know of a basis that would predict whether NTP's worse
is necessarily going to be 10,000x worse or can be just 10x worse.
Knowing that would require actually trying it to measure what can be
done.
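The two-way measurement mentioned here is the standard NTP four-timestamp
exchange.  A sketch of the arithmetic, plus the simplest delay-based sample
filter (a toy under my own naming, not ntpd's actual clock filter):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client
    receive.  delay directly measures the round-trip wire-plus-queueing
    time; the offset estimate assumes the stochastic delay is split
    evenly between the outbound and inbound paths."""
    delay = (t4 - t1) - (t3 - t2)
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return offset, delay

def best_sample(samples):
    """Prefer the exchange with the smallest round trip: the one least
    likely to have spent time sitting in a queue."""
    return min(samples, key=lambda s: s[1])

# Client clock 5 ms behind the server, 10 ms of wire each way:
offset, delay = ntp_offset_delay(0.0, 0.015, 0.016, 0.021)  # 0.005, 0.020
```

Queueing added to either path shows up in the measured delay, which is what
makes round trip a usable stand-in for sample quality.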

What is certain, however, is that if you want to measure this at the levels
that might be possible you probably want nanosecond-level clock hardware
in both the server, so that it can produce time of this quality, and in
the clients, so that you can measure how well they do directly rather
than attempting to have the NTP implementation grade its own homework.  I
don't think this is a waste of time at all.  The best case is that the
ability to measure at this level would lead to an understanding of what
it would take to transfer time with NTP at this level, but even the worst
case would be that one would be able to support one's assertions about what
can't usefully be done with data, and that's not bad either.  If no one
tries then no one will ever know.

Dennis Ferguson



Re: [time-nuts] Cell timing error

2012-12-15 Thread Dennis Ferguson
GSM cell sites in the US have GPS because it is required to
support E911 positioning.  I'm not sure if it is used for anything
other than this, but it doesn't have to be.

In some other parts of the world it has been considered bad taste
to let the operation of telecommunications infrastructure become
dependent on a facility owned by the US military, so the standards
that are popular there often try to avoid that.

Dennis Ferguson

On 15 Dec, 2012, at 18:59 , li...@lazygranch.com wrote:
 I can assure you the GSM shacks have GPS timing in them. I can dig up the 
 photos if you want.
 
 -Original Message-
 From: Joseph Orsak jor...@nc.rr.com
 Sender: time-nuts-boun...@febo.com
 Date: Sat, 15 Dec 2012 18:24:20 
 To: Discussion of precise time and frequency measurementtime-nuts@febo.com
 Reply-To: Discussion of precise time and frequency measurement
   time-nuts@febo.com
 Subject: Re: [time-nuts] Cell timing error
 
 ATT uses UMTS in most areas which is a self-synchronizing modulation 
 scheme. Supposedly one of the selling points is no dependence on GPS. All 
 the extra sync channels and sync messaging is a capacity hog, not a very 
 spectrally efficient standard in my opinion.
 
 About 85 maximum simultaneous voice calls in a 5Mhz UL / 5 Mhz DL 
 sector/carrier before it starts to fall apart. A big step backwards from 
 good old CDMA2000 (also just my opinion).
 
 But hey, you can surf the web while you talk on the same device.
 
 
 
 -Joe W4WN
 
 
 - Original Message - 
 From: Jim Lux jim...@earthlink.net
 To: Discussion of precise time and frequency measurement 
 time-nuts@febo.com
 Sent: Saturday, December 15, 2012 5:43 PM
 Subject: Re: [time-nuts] Cell timing error
 
 
 On 12/15/12 2:16 PM, Scott McGrath wrote:
 In a prior life we had a CDMA timing receiver for NTP which used VZ for 
 its source
 
 On Dec 15, 2012, at 12:18 PM, Graham / KE9H time...@austin.rr.com 
 wrote:
 
 You should switch to Verizon.
 They are inherently accurate to milliseconds.
 Sub micro-seconds inside the base stations.
 
 
 On 12/15/2012 12:51 PM, Greg Troxel wrote:
 In central mass, ATT and tracfone (? carrier) are showing phone times 
 very close to 1 min slow.  Virgin/sprint is ok.   I've never seen this 
 before - usually it's a few s slow.
 
 
 
 The time *displayed* on the phone might not reflect the time from the 
 network.
 




Re: [time-nuts] Fw: Cell timing error

2012-12-15 Thread Dennis Ferguson

On 15 Dec, 2012, at 21:30 , gary li...@lazygranch.com wrote:
 This is a shot of the GPS timing rack in an ATT shack.
 http://www.lazygranch.com/images/att/att_3.jpg

Yes, the TruePosition box helps compute handset locations for
E911 and whomever else wants to know where your phone is.  This
isn't a unit you would necessarily see in cell closets in other
countries.

Dennis Ferguson



Re: [time-nuts] Cell timing error

2012-12-15 Thread Dennis Ferguson

On 15 Dec, 2012, at 22:38 , Hal Murray hmur...@megapathdsl.net wrote:

 
 GSM cell sites in the US have GPS because it is required to support E911
 positioning.  I'm not sure if it is used for anything other than this, but
 it doesn't have to be. 
 
 So it's cheaper to install and maintain GPS rather than make one measurement 
 and tell the setup where it is?

E911 requires the carrier to be able to figure out where the handsets
are.  I think GPS is used as a common timing reference so they can
triangulate to locate the phone using time-of-arrival measurements
of the handset's transmissions made at several cell towers.

GSM/UMTS carriers do it this way, at least.  CDMA2000 carriers instead
rely on the handsets to make the time-of-arrival measurements, both
of signals from cell towers and of GPS signals the handset can hear.
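The tower-side approach amounts to time-difference-of-arrival multilateration: given a shared GPS timebase, the differences in when several towers hear the same handset transmission pin down its position. A toy sketch of the idea (the geometry is invented, and a coarse grid search stands in for a real closed-form or iterative solver):

```python
import math

# Hypothetical TDOA location sketch.  Tower and handset positions are
# invented; a real system would solve the hyperbolic equations directly.
C = 299_792_458.0                              # propagation speed, m/s
towers = [(0.0, 0.0), (10_000.0, 0.0), (0.0, 10_000.0)]
handset = (3_000.0, 4_000.0)

def toa(p, q):
    """One-way propagation time from p to q, in seconds."""
    return math.dist(p, q) / C

# "Measured" arrival-time differences, relative to the first tower.
meas = [toa(handset, t) - toa(handset, towers[0]) for t in towers]

def misfit(p):
    """Squared mismatch between measured and predicted differences."""
    pred = [toa(p, t) - toa(p, towers[0]) for t in towers]
    return sum((m - q) ** 2 for m, q in zip(meas, pred))

# Coarse 100 m grid search over a 10 km square.
grid = [(x * 100.0, y * 100.0) for x in range(101) for y in range(101)]
best = min(grid, key=misfit)
```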

Dennis Ferguson


Re: [time-nuts] GPS DO Alternatives

2012-12-09 Thread Dennis Ferguson

On 9 Dec, 2012, at 02:14 , Hal Murray hmur...@megapathdsl.net wrote:
 Which brings up another worm for the can.  How are you going to get a PPS 
 from the OCXO?  That's going to be hard without some soldering.  My straw man 
 would be to use the OCXO to clock whatever uP you end up using and generate 
 the PPS with a counter/timer.

I think I'd do that too, and pick a processor with enough of the right
peripherals to avoid having to fart around with an external phase comparison
or interrupt timestamping.

The TI Stellaris LaunchPad, mentioned earlier, is pretty cheap, the
processor can generate its internal clocks directly from a 10 MHz
reference input, and it has a set of 32 bit timer/counters that can either
capture a timestamp for an input edge event or generate a timed outgoing
pulse, both with a resolution of 12.5 ns.  The capture could be used to
time the GPS PPS (in hardware, no external comparator, no interrupt
timestamps) while I think the PWM generation mode could be used to generate
a PPS output synchronous with the 10 MHz oscillator.  The only thing that
is missing for a GPSDO is an internal DAC (it has a bunch of ADCs) so the
10 MHz oscillator would need a digital frequency adjustment for the
processor to be sufficient for the job without any external peripherals.
The processor core is a fully functional ARM, including floating point,
which might make it easier to, say, do a Kalman filter implementation
that wasn't write-only.
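The capture arithmetic itself is simple; here is a minimal sketch, assuming (as the 12.5 ns figure above implies) a free-running 32-bit counter clocked at 80 MHz. The rate and register width are assumptions for illustration, not chip documentation:

```python
# Interpreting 32-bit free-running timer capture values, assuming an
# 80 MHz counter clock (12.5 ns per tick, matching the figure above).
TICK_HZ = 80_000_000
WRAP = 1 << 32                       # counter rolls over every ~53.7 s

def capture_delta_ns(prev: int, cur: int) -> float:
    """Wrap-safe elapsed nanoseconds between two capture events."""
    return ((cur - prev) % WRAP) * 1e9 / TICK_HZ

# Two PPS edges one second apart differ by 80e6 ticks, even if the
# counter wrapped between them.
delta = capture_delta_ns(0xFFFFFF00, (0xFFFFFF00 + 80_000_000) % WRAP)
```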

The thing is, I still don't find this all that compelling.  Personally
I think it might be better to aim higher on the software value
chain, find a processor board with the same on-chip processor peripherals
as above along with an on-chip Ethernet MAC and enough external peripherals
to run a real operating system and use that to build a combination GPSDO
and very accurate NTP/PTP server.  You generally can't just buy one of
those on eBay (at least they are much rarer than a GPSDO alone), and it
is a lot easier to do something which is attractive both in price and in
quality compared to what you can buy.

Dennis Ferguson


Re: [time-nuts] Z3805 incident

2012-11-29 Thread Dennis Ferguson

On 29 Nov, 2012, at 02:32 , Charles P. Steinmetz 
charles_steinm...@lavabit.com wrote:

 This is a classic crystal jump. The crystal changed its frequency magically 
 from one second to the next and the software compensated for it
 
 Here is another example of a 3805 having a bad moment.  For just about two 
 minutes, it reported a phase jump of nearly 3 uS and then immediately fell 
 back nearly to its previous baseline, settling to the baseline in about an 
 hour and not requiring any longer-term change of the EFC voltage.  This does 
 not look like a typical crystal frequency shift to me, but I cannot rule that 
 out.  It looks more like what I'd expect to see if I set the cable delay to 3 
 uS for 2 minutes, then back to 0.

I think I would be more likely to call this one, where the crystal jumps
to another frequency for a while and then jumps back to about what it was,
a classic crystal jump.  I've seen this before, though not as large as the
change you show.  I hear jumps like these raise hell for people trying to
use PTP to transmit telecom-quality timing over asynchronous Ethernet,
because it is hard to run a PTP control loop tight enough (i.e. at a high
enough data rate) to correct such a jump before it does damage.

I think the other problem, with the crystal jumping to another frequency and
apparently staying there (I'm assuming it hasn't jumped back), could have a
broader range of causes.

Dennis Ferguson


Re: [time-nuts] eBay Ublox

2012-11-21 Thread Dennis Ferguson

On 21 Nov, 2012, at 10:34 , Michael Tharp g...@partiallystapled.com wrote:
 With respect to interrupt latency, the PPS driver is the best you're going to 
 get without a custom add-in card that provides input capture (timestamping). 
 I considered going that route but making PCI or PCI-e cards essentially means 
 using FPGAs which are expensive and fussy, and sort of overkill. A I2C or SPI 
 based card for use with Raspberry Pi or other single-board computers is also 
 a possibility.

I did a prototype PCI-X card like that some years ago.  It
was expensive but worked pretty well; the high-speed part
of the FPGA ran at 320 MHz, so timestamps were taken with a
resolution of just over 3 ns.  Having a card plugged into
your computer which knows what time it is to within 3 ns,
but which is on the far side of a bus across which a single
PIO read takes 80 ns to complete, is of limited direct use
to the computer itself, however, so the hardest part of the
design involved minimizing the uncertainty of the time
transfer from the card's clock to the computer's system clock.
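That time-transfer problem is usually attacked the way NTP attacks network delay: bracket the slow PIO read of the card's clock between two reads of the system clock and take the midpoint, with half the bracket width bounding the uncertainty. A hedged sketch, with the PIO read faked by a local clock so it runs anywhere:

```python
import time

def read_card_time_ns() -> int:
    # Hypothetical stand-in for the PIO read of the card's counter;
    # on real hardware this is the ~80 ns bus transaction.
    return time.monotonic_ns()

def offset_sample():
    """One sample of (card_time - system_time) at the bracket midpoint,
    plus an uncertainty bound of half the bracket width."""
    t0 = time.monotonic_ns()     # system clock before the read
    tc = read_card_time_ns()     # the slow read across the bus
    t1 = time.monotonic_ns()     # system clock after the read
    return tc - (t0 + t1) // 2, (t1 - t0) / 2

# Keep the sample with the tightest bracket out of many tries.
offset, uncertainty = min((offset_sample() for _ in range(1000)),
                          key=lambda s: s[1])
```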

I'm not quite sure that it always requires custom hardware
to get a few 10's of nanoseconds precision for PPS sampling,
though.  Some off-the-shelf hardware may also work for this.
In particular the on-chip clock peripherals included in the
TI CPU used for the BeagleBone board have hardware
timestamp-capture inputs (the TIMER4-7 i/o pins on one of the
cape connectors) which seem to do the right thing.  That,
plus the fact that the TI chip also has an onboard Ethernet
mac core (as opposed to the USB Ethernet device the Raspberry
Pi uses) is making me think that the BeagleBone might be
a somewhat better base on which to build a good NTP server
than the RPi is.

Dennis Ferguson


Re: [time-nuts] Timing performance of servers

2012-10-26 Thread Dennis Ferguson

On 26 Oct, 2012, at 08:06 , shali...@gmail.com wrote:

 If you cannot see the horizon because of obstructions (what else?), these 
 obstructions are likely to be a source of multipath. So while technically you 
 do not need to see the horizon, any obstruction above the horizon could cause 
 problems. Of course, distant trees or a hill are less likely to be a problem 
 than your neighbor's garden shed with a tin roof.

Though, as I understand it, typical low-end GPS antennas are quite
sensitive to multipath arriving from below the horizon as well.
I think getting a sharp antenna cutoff at the horizon is the reason
that high-end antennas have choke rings.

 Also, some antennas are better at rejecting low angle signals than others. 
 While the software can reject some undesired signals, it can only do so if 
 the software can identify them as separate. If the multipath signal 
 destructively interferes with the desired signal, there is not much the 
 software can do.

Given that the transmitted C/A bandwidth is greater than 1 MHz, however,
I'm not sure that it is possible for multipath signals to destructively
interfere across the entire bandwidth; I think the issue is distortion,
with some frequencies in the bandwidth suffering destructive interference
while others are constructively interfered with.  This can be compensated
for in software, though it is much better not to have to.
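The distortion view is easy to illustrate with a two-ray channel (direct path plus one reflection); all values below are invented for illustration. Across a band comparable to the C/A bandwidth, some frequencies are attenuated while others are boosted:

```python
import numpy as np

# Frequency response of a two-ray multipath channel: direct path plus
# one reflection of amplitude a with excess delay tau (values invented).
a, tau = 0.5, 1e-6                    # 50% reflection, 1 us excess delay
f = np.linspace(-0.5e6, 0.5e6, 1001)  # +/-500 kHz around the carrier
H = 1 + a * np.exp(-2j * np.pi * f * tau)
mag = np.abs(H)

# Within this band |H| swings between 1-a (destructive) and 1+a
# (constructive): the channel distorts the signal rather than
# cancelling all of it.
```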

Dennis Ferguson


Re: [time-nuts] Adjusting HP 5065A frequency

2012-10-22 Thread Dennis Ferguson

On 22 Oct, 2012, at 12:48 , Bob Camp li...@rtty.us wrote:
 The big bucks approach probably is to run a good RF ADC on the input and
 then do all the offset stuff as DSP math. The VCXO just sits at it's magic
 frequency and never moves. More money / no pops. 

A somewhat cheaper way might be to use a DDS.  That is, lock
the clock driving the DDS to the input frequency and then
program the DDS to correct the measured error of the input
clock.  A DDS with a 48 bit control word will have an effective
resolution of about 4e-15, if my arithmetic is right, which
seems adequate for the purpose.

The DDS also gives you the option of generating any (corrected)
output frequency you want.  The output frequency could even be
programmable if you don't mind looking at the DDS digital noise
in the output, though that could be cleaned up by picking a fixed
output frequency ahead of time and adding a cleanup PLL for the
chosen frequency following the DDS.

I'm not sure why this problem isn't always dealt with this way,
actually.  Since the corrections are applied in digital arithmetic
the precision with which they can be made is limited only by the
bit-width of the adders you use to compute each cycle's update and,
given that the D/A converter it is driving is probably going to be
limited to 300 or 400 MSPS, even an FPGA (let alone semi-custom
logic) could carry more bits through the computation than are useful
to have.  There is probably some catch to this that I don't understand.

Dennis Ferguson



Re: [time-nuts] Are serial port headers standardized?

2012-10-20 Thread Dennis Ferguson

On 20 Oct, 2012, at 02:05 , Sarah White kuze...@gmail.com wrote:
 Page 15, there is a yellow 10 (9) pin header, and page 26 was what I
 quoted. Really wish there was more information... I've had this
 motherboard for something like 5 years at this point, and am fairly
 certain I lost or outright tossed the serial port headers.
 
 Are they fairly standard?

I think there are two standard variants, this one

http://www.pccables.com/07120.htm

and this one

http://www.pccables.com/07121.htm

For Intel motherboards I've only seen the first one used,
but I don't know about anything else.

Dennis Ferguson



Re: [time-nuts] To use or not to use transmission line splitters for GPS receivers

2012-10-09 Thread Dennis Ferguson

On 9 Oct, 2012, at 12:48 , Bob Camp li...@rtty.us wrote:
 If you are after sub ns level timing, things are a bit different than if you
 are happy with tens of ns error. Few of us have an adequate survey of our
 location to *really* worry about sub ns numbers. If you are one of those
 lucky few that can worry about sub-ns, yes mismatch and voltage and a whole
 long list of things matter. The temperature coefficient of your antenna also
 gets onto that list at some point. 

I think you can get sub-nanosecond time (if you can arrange for a proper
equipment calibration) and sub-centimeter positioning on your own using
the IGS products and GPS Precise Point Positioning techniques.  The gotchas
are that you need to have a high-priced dual-frequency, carrier phase
tracking receiver and the software you need seems to only be available to
the very rich (though there are free online services which will process
your data to determine the location for you).

The antenna temperature thing is kind of indicative of just how much lore
and black art seems to be involved in arranging equipment for fine timing,
however.  I have the ITU 2010 Handbook for Satellite Time and Frequency
Transfer and Dissemination.  In Chapter 12, when discussing GPS Common
View techniques, the document says this about antenna temperature

12.5.2 Temperature stabilized antennas

It is now well documented, and generally admitted, that GPS time-receiving
equipment, and more specifically its antenna, is sensitive to environmental
conditions [Lewandowski and Tourde, 1990]. For a conventional GPS
time-receiving system this sensitivity could be expressed by a coefficient
of about 0,2 ns/°C and can approach 2 ns/°C. This was a major obstacle,
precluding, as it did, the goal of 1 ns accuracy announced earlier for
GPS time transfer.

and goes on to recommend using an antenna with an oven keeping the temperature
of the electronics constant.  In Chapter 13, on the other hand, when discussing
GPS PPP, it says this:

There have been some poorly supported claims of strong variations of
geodetic clock estimates with temperature changes in some GPS antennas,
together with recommendations to use temperature-stabilized units. While
this might apply to certain low-end, single-frequency units, direct tests
of a standard AOA Dorne Margolin choke ring antenna have failed to detect
any sensitivity of the clock estimates to antenna temperature variations.
Ray and Senior [2001] placed an upper limit of 2 ps/°C on the short-term
(diurnal) temperature sensitivity and later extended this to 10.1 ps/°C
for any possible long-term component [Ray and Senior, 2003]. Even smaller
sensitivities, 0.17 ps/°C or less, were determined by [Rieck et al., 2003]
for an Ashtech choke ring model.

So Chapter 13 says that what Chapter 12 said is bogus.  It appears that
Chapter 12 may have been written by a European while Chapter 13 is an American
effort, so this may be some sort of cultural thing.  Chapter 13 does later go
on to point out how crappy the Canadian IGS stations are in the winter and
blames this on snow and ice in the near field below the antenna, so even Chapter
13 does find a use for heating at the antenna.  Both chapters do agree that 
keeping
the temperature of the receiver constant is good.

I think the antenna splitter thing is probably the same kind of issue.  Someone,
somewhere, may have had a problem with an antenna splitter and published a paper
on that, and this in turn reinforces the conservative assumption that you should
leave anything out that doesn't absolutely need to be there, so it has become
common wisdom that you should avoid splitters.  Or something.

Dennis Ferguson


Re: [time-nuts] WWVB / Xtendwave patents

2012-09-27 Thread Dennis Ferguson

On 27 Sep, 2012, at 15:40 , Jim Lux wrote:

 On 9/27/12 2:58 PM, Bob Camp wrote:
 Hi
 
 It would be interesting to hear what the patent lawyers on the list think 
 about the patents. Given a quick read, they appear to cover any use of the 
 specific transmitted format for receiving time information.
 
 IANAL, but..

Me neither.

 reading Claim 1..
 a key aspects are the combination of PSK and ASK, with different data. This 
 is somewhat unusual, and may not have been done exactly like that.
 
 said phase modulation *is independent* of the information represented by 
 said pulse width modulation/amplitude shift keyed modulation is a phrase 
 that occurs in ALL of the independent claims.
 
 (my emphasis added)
 
 QAM is, of course, simultaneous PSK and ASK, but it's a single data stream 
 that is being encoded.
 
 Is there prior art for transmitting one kind of data using ASK and something 
 else PSK?
 
 For instance, is WWV (which is primarily ASK) has a subcarrier, but the 
 subcarrier is also AM.
 
 Another possible source of prior art might be a PSK encoded digital squelch 
 on a AM or FM modulated signal (if such a system exists).

I wouldn't mind knowing the legal definition of "information", since to
me most of what is carried in the PSK is the same information as is
carried in the ASK, just formatted with different bits: known markers
to find minute alignment, minute-of-the-century time, leap second warning
and daylight savings information.  The ASK alone encodes UT1 while the
PSK has expanded DST information, but most of it is not what I would call
independent information even if the bit encodings are different.  I assume
a lawyer's definition is not the same as mine.

Beyond that, though, it really does seem like they are attempting to
patent all receivers of the new WWVB format which use the phase
modulation, while pruning the claims enough to avoid existing DCF77
and BPC receivers.  The timing information based on a known sequence
spanning multiple seconds avoids DCF77's 0.8 second (and BPC's 0.6
second) known sequence, and the "independent information" thing seems to
exist to distinguish it from DCF77 which sends the same bits with both
its PSK and ASK (BPC likely does too).

The not-yet-granted patent actually seems more odious, since it seems
to be attempting to claim the idea of using past time measurements to
compute the frequency error of a clock's oscillator so that future
timekeeping can be improved by correcting that.

That's too bad.  One can only hope the patent they got is defensive and
they don't plan on generally enforcing it, and that none of the claims
in the other one survive the obviousness test.

Dennis Ferguson



Re: [time-nuts] WWVB Now a Monopoly

2012-09-26 Thread Dennis Ferguson

On 26 Sep, 2012, at 10:03 , J. Forster wrote:

 You go after everything. Soup to nuts, including the contract agreements.
 
 IMO, this is potentially very, very big money, because Xtendwave may also
 have patent protection, and henceforth control all the precise digital
 clock market. This is tens of millions of units, at least.

They claim to have applied for patents on something but I would be
surprised if they could patent anything that would prevent anyone
from designing their own receiver.

What would annoy me is less-than-full disclosure of the transmitted
signal and its properties.  For example, there's a claim in the paper
that the (31,26) Hamming code used can detect double-bit errors in the
encoded time.  I think detecting double-bit errors would require an
additional parity bit, and that the assertion in the paper is just a
boo-boo, but I also keep wondering if the claim might in fact be true,
that there might be a really clever way to use that with something else
in the signal to detect double-bit errors, and the paper just isn't
pointing that out.  That would be annoying.
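The parity-bit intuition can be made concrete. In a standard (31,26) Hamming code the syndrome is the XOR of the (1-based) positions of the flipped bits, so a double-bit error produces the same nonzero syndrome as some single-bit error and gets silently miscorrected; reliably flagging doubles is what the extra overall parity bit of the extended (32,26) code is for. A minimal sketch of the syndrome behavior:

```python
# Syndrome of a standard Hamming code with parity bits at power-of-two
# positions: the XOR of the (1-based) positions of the bits in error.
def syndrome(error_positions):
    s = 0
    for p in error_positions:        # positions 1..31 for (31,26)
        s ^= p
    return s

single = syndrome([5])               # one bit flipped at position 5
double = syndrome([3, 6])            # two flips: 3 ^ 6 == 5

# Both cases look like "correct position 5" to a plain Hamming decoder,
# so the double error is miscorrected, not detected.
```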

Dennis Ferguson



Re: [time-nuts] New WWVB format...

2012-09-26 Thread Dennis Ferguson

On 26 Sep, 2012, at 11:19 , Majdi S. Abbas wrote:

 On Wed, Sep 26, 2012 at 10:13:22AM -0700, Tom Van Baak wrote:
 My reading of the document(s) is that the new format will in fact allow 
 WWVB to be used as a frequency standard with even greater precision then 
 before, though not with unmodified legacy WWVB carrier receivers. My hope 
 is that one of you will produce a clever reference design for such a TF 
 receiver make it available to the group. It sounds like a very fun DSP 
 project; one that we can all learn from. Bonus points for making it an 
 open-source Arduino shield. Making it work with both DCF77 and WWVB would 
 also be a plus.
 
   DSP would be good, although I also think an microcontroller
 implementation could be interesting.  Atmel's ARM MCUs look like they'd
 be good candidates for this sort of thing.  (Pretty fast, enough storage
 to do interesting things with it, and a fast enough ADC for 60 KHz.)

This is fine, though to make it maximally useful for time and
frequency purposes I believe the hardware might need to provide a
way to synchronize the ADC clock to an external reference, and likely
some way to time-mark the incoming data (e.g. a quick-and-dirty version
might feed a PPS signal to the second channel of a stereo ADC, if no
more elegant solution is available).  A control loop to discipline an
oscillator's output might use that oscillator to clock the ADC and adjust
the oscillator to zero the ADC's phase alignment with the input signal,
if that can be made to work.  A system to measure WWVB propagation delays and
signal levels might instead clock the ADC and the time marker with a
known-accurate frequency and PPS (e.g. a GPSDO).

RFSpace makes commercial LF/MF/HF SDR equipment with almost the right inputs
for this (an external frequency input and a timing trigger).  What I'd like
is a tiny-budget version of this just for LF stations.

   I've got a couple of these that I might use as a development 
 platform:
 
   https://www.olimex.com/Products/ARM/Atmel/SAM7-P256/
 
   Has anyone come up with a reasonable algorithm to implement in
 a microcontroller?  (DSP development kits are a bit more spendy than I'd
 like to invest in a prototype. :)

I guess the trouble with this is only that the availability of brute force
can sometimes make it unnecessary to deal with a lot of complexity.  If your
job is to do a convolution of a model of what you know was transmitted
against the incoming signal to measure the time alignment then using a
platform where you can store big blocks of data and do Fourier transforms
with wild abandon can provide really good results without having to spend
a lot of time thinking about it.  Even quite modest modern PC hardware comes
with a boatload of memory and is exceedingly speedy, and for some purposes
it can save a lot of time and effort just to make use of that compared to
trying to do without.
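A sketch of that brute-force alignment, with synthetic data standing in for the received record (all values invented): cross-correlate the known transmitted pattern against the record via FFTs and take the peak as the time offset.

```python
import numpy as np

# Synthetic received record containing a noisy copy of a known pattern.
rng = np.random.default_rng(0)
template = rng.choice([-1.0, 1.0], size=1024)   # known transmitted pattern
true_offset = 3000
rx = rng.normal(0.0, 2.0, size=8192)            # noise-dominated record
rx[true_offset:true_offset + 1024] += template  # buried copy

# Circular cross-correlation via FFT of the zero-padded template.
t = np.zeros_like(rx)
t[:template.size] = template
corr = np.fft.irfft(np.fft.rfft(rx) * np.conj(np.fft.rfft(t)), n=rx.size)
est_offset = int(np.argmax(corr))               # peak gives the alignment
```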

I have a quick-hack DCF77 PM detector which runs on PC hardware and makes use
of one of the above-mentioned RFSpace receivers for the data acquisition.  While
it is now in boxes being moved, when I get it back up I would love to lose the
RFSpace receiver in favor of something much less costly, but would hate trying
to make this work with something less capable than the PC.  Using a
microcontroller like that to do the A/D conversions and send the data
collected out (say) an ethernet port to a PC which does the heavy
computational lifting (that's what the RFSpace receiver does) would
appeal to me, but trying to do without the PC would not.

Dennis Ferguson




Re: [time-nuts] New WWVB format...

2012-09-26 Thread Dennis Ferguson

On 26 Sep, 2012, at 14:43 , paul swed wrote:

 Might be a bit of a cost. The SDR runs $1495.
 Regards
 Paul

The ones with the clock input options (the SDR-IP
and the NetSDR, I think) are significantly more than
that. But they are also huge overkill if all you want is
a digital LF receiver.

That's why I'd like to replace it with something cheap,
but that something wouldn't be nearly as useful without
the clock and timing edge inputs.

Dennis Ferguson



Re: [time-nuts] Hi Power LED Light power supply...

2012-09-18 Thread Dennis Ferguson

On 18 Sep, 2012, at 12:42 , Chris Albertson wrote:
 On Tue, Sep 18, 2012 at 5:21 AM, Bob Camp li...@rtty.us wrote:
 Hi
 
 I suspect those same 120Hz sensitive people would not be able to watch TV or
 a movie :)
 
 In the old CRT type TV sets, the phosphor has some persistence.
 Movies are modulated with a square waves, the frame blinks off and
 goes dark then blinks on.   But the LED's brightness is fast enough to
 track the sine wave and would be bright only for an instant with quick
 pulses of light.

Just to add to this...

Ontario, Canada originally ran its power grid at 25 Hz.  When they
switched the grid to 60 Hz in the 1930's some of the industrial power
users, particularly in northern Ontario where private (usually
hydroelectric) power generators were common, never got around to changing
their plants over.  Mine and paper mills using 25 Hz power were common as
recently as the 1980's, and might still be there for all I know.

Standard incandescent light bulbs don't have a lot of persistence when
run on 25 Hz power (I assume there might have been a time when you could
buy incandescent bulbs designed for 25 Hz, but not in my lifetime).  They
don't go entirely off, but they get significantly dimmer in the visible
spectrum in the dips as the output red-shifts towards the infrared; they
follow the sine pretty well.  In my teens, when visiting a place using
25 Hz power for lighting, I could initially see an incredibly annoying
flicker when I first got there but after a minute or two this would fade
and I'd no longer notice it.  Some other people would also see the flicker
but others, including my parents, couldn't see it at all so there seemed
to be variation (maybe age-related, maybe not) among individual abilities
to see this.

I would hence believe that a 50 Hz flicker must be pretty close to the edge
of what can be perceived, so I'm having trouble believing that a flicker
at more than twice that rate would be perceptible at all by anyone.

Dennis Ferguson


Re: [time-nuts] Hi Power LED Light power supply...

2012-09-18 Thread Dennis Ferguson

On 18 Sep, 2012, at 15:06 , John Lofgren wrote:

 snip
 I would hence believe that a 50 Hz flicker must be pretty close to the edge 
 of what can be perceived, so I'm having trouble believing that a flicker at 
 more than twice that rate would be perceptible at all by anyone.
 snip
 
 Oh, but it is.  A couple of years ago I bought one of the Chinese 30 LED spot 
 light bulbs for about $8 on ebay.  I thought I'd give it a try for a 
 workbench light.  When I plugged it in at work (60 Hz power, here) the two 
 guys standing behind me yelled gaahhh at the same time I did.  The flicker 
 was horrendous.  The earlier comment about peripheral vision also applies, 
 though.  It's worse in the periphery than in direct view.
 
 The power supply is nothing more than a bridge rectifier, two current 
 limiting resistors, and a filter capacitor.  The capacitor obviously wasn't 
 big enough, though, because it flickered plenty.

Or could the problem have instead been that one side of the
bridge wasn't working, so you were getting a 60 Hz flicker
rather than 120 Hz?

Having seen what I am sure was a 50 Hz flicker, I'd believe
that 60 Hz might look awful but I still have some doubt about
120 Hz.

Dennis Ferguson


Re: [time-nuts] GPSDO control loops and correcting quantizationerror

2012-09-16 Thread Dennis Ferguson

On 16 Sep, 2012, at 00:40 , Tom Van Baak wrote:
 I worry in your example about the long cross-over time. This may be ideal for 
 frequency stability, but probably is not good for time accuracy. If one is 
 using the GPSDO as a timing reference, I would think a shorter time constant 
 will keep the rms time error down. Has anyone on the list done work 
 optimizing the timing accuracy rather than the frequency stability?

I'm not sure there could be a difference between the goals of
frequency accuracy and time accuracy that would affect that time
constant. The time error is the time integral of the frequency
error, so anything which manages to minimize the frequency error
of the oscillator (both the magnitude of the error and its
duration) will also minimize the time error.  The time constant
is selected to be the minimum value which makes it probable that
the frequency or time error you have measured (for a PLL the data
are time errors) is in fact an error that the oscillator has
made rather than an artifact of the noise in the measurement
system.

There might be a difference in the best control action to take to
optimally achieve each of those goals.  In particular if your goal
is frequency accuracy the best control action in response to the
measurement of a frequency error might be to correct that error,
i.e. to minimize the frequency error once you know you have one.
If your goal is time accuracy, however, then the response to a
measured frequency error is going to be to intentionally make a
frequency error in the other direction for a while to correct the
accumulated time error.  In this case, though, it seems to me
that by selecting a PLL as the control discipline (rather than, say,
a FLL) you've already made the decision to take control actions
which ensure time accuracy.

Dennis Ferguson


Re: [time-nuts] PP2S

2012-09-16 Thread Dennis Ferguson

On 16 Sep, 2012, at 17:11 , Tom Van Baak wrote:
 Some GPSDO have both a 1PPS and a PP2S (pulse per 2 second) output. I have 
 two questions for one of you telecom experts: 1) What is the history, and the 
 purpose of that PP2S signal? 2) What is the official spec for which second 
 the PP2S lands on? Is it odd seconds or even seconds? Is it GPS time (easy) 
 or UTC (problematic)? If UTC, what happens after a leap second?

The PP2S signal is a US CDMA (i.e. CDMA2000) thing.  It is aligned
to the even seconds in GPS time.  My memory is dim but I think that
the choice relates to the fact that the CDMA spreading code LFSR
rolls over every 26.666 ms (it is a 15-bit LFSR, so dividing its 32768-chip
augmented sequence by 26.666 ms gives the 1.2288 MHz chip rate), so it rolls over
75 times every 2 seconds.  The goal is to align the code sequence
transmitted by every station, and a 1 PPS timing reference wouldn't
guarantee that since 1 second isn't an integral multiple of the
roll over time.
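The arithmetic is easy to check with a short sketch (assuming, consistent with the numbers above, that the short code is the 15-bit LFSR sequence augmented to 2^15 = 32768 chips at the 1.2288 Mcps chip rate):

```python
# CDMA2000 short-code timing: why a PP2S (not 1PPS) reference is needed.
chip_rate_hz = 1_228_800        # 1.2288 Mcps chip rate
code_len_chips = 2 ** 15        # 15-bit LFSR sequence augmented to 32768 chips

rollover_s = code_len_chips / chip_rate_hz              # ~26.667 ms
rollovers_per_2s = 2 * chip_rate_hz / code_len_chips    # exact integer ratio
rollovers_per_1s = 1 * chip_rate_hz / code_len_chips

print(f"code rollover period: {rollover_s * 1e3:.3f} ms")
print(f"rollovers in 2 s: {rollovers_per_2s}")  # 75.0 -- integral, so PP2S works
print(f"rollovers in 1 s: {rollovers_per_1s}")  # 37.5 -- not integral, so 1PPS doesn't
```

The 1-second case coming out at 37.5 rollovers is exactly the non-integral-multiple problem described above.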

Dennis Ferguson


Re: [time-nuts] GPSDO control loops and correcting quantization error

2012-09-16 Thread Dennis Ferguson

On 16 Sep, 2012, at 16:30 , Poul-Henning Kamp wrote:

 In message 34d5c3ce-6b3d-4944-996a-7637373b2...@gmail.com, Dennis Ferguson
 writes:
 
 I'm not sure there could be a difference between the goals of
 frequency accuracy and time accuracy that would affect that time
 constant.

Note that the that time constant referred to here, the topic of
the message I was responding to, was explicitly a PLL time constant.
If you have decided to use a PLL as your control discipline I think
you end up with the same time constant whether your goal is accurate
frequency or accurate time since, with a PLL, these end up being
the same problem.

 It does. 
 
 A PLL more or less corresponds to a PI regulator, where an FLL
 only needs to have the I term.
 
 Because you don't have the interaction between the P and I terms,
 the I-timeconstant can be longer.

This sounds right.  As I said, if you pick a control discipline other
than a PLL, as might be advantageous to do if your concern is solely
with accurate frequency, then the optimum might be different.  If you
are using a PLL in both cases, however, then the problems are
essentially the same.

Dennis Ferguson




Re: [time-nuts] oscillators

2012-08-30 Thread Dennis Ferguson

On 30 Aug, 2012, at 13:14 , Rick Karlquist wrote:
 The other area where a uP is useful is in an environment with high
 vibration.  It can correct for acceleration as well as temperature.  There
 
 I've never heard of this being done.  Do you have a reference?

I'm not sure how that would be done with a single crystal but I heard
a talk by David Allan about a (carefully oriented) array of 6 crystals
for which that was done.  A Kalman filter which understood the physics
of acceleration effects was applied to the output of the 6 crystals
to produce a composite clock and, as a side effect, inertial navigation
information.  This allowed the composite clock to be corrected for
g-effects as well as providing the same data that would be output by
a conventional, mechanical inertial navigation unit.

I think the target application was a drone aircraft, with the clock
output ending up at a GPS receiver while the inertial data was used
as a fallback if the GPS was jammed and as a sanity check to detect
GPS spoofing.  This seemed like a nice one-stone-several-birds solution.
I have a copy of the powerpoint somewhere, but I've not seen this
written down anywhere else.

Dennis Ferguson



Re: [time-nuts] Time-nut gettogether in Boulder

2012-06-06 Thread Dennis Ferguson

On 6 Jun, 2012, at 16:44 , Alan Melia wrote:

 Then there is the NPL Time & Frequency Club at Teddington..oh sorry that
 is the wrong side of the Atlantic :-))
 
 Alan
 G3NYK

Is there still?  Google comes up with this from 2004

http://www.npl.co.uk/content/conWebDoc/2054

but the link to the club home page at the bottom goes nowhere.

I would attend if it weren't defunct.

Dennis Ferguson


Re: [time-nuts] thunderbolt no UTC offset

2012-05-01 Thread Dennis Ferguson

On 2 May, 2012, at 04:30 , Tom Van Baak wrote:

 The UTC offset is in words 6-10, page 18, subframe 4 -- every 12.5 minutes.
 /tvb

The leap second warning is also in the same place.  How the unit could
know that a leap second is pending but not know the UTC offset is a
mystery.

Dennis Ferguson



Re: [time-nuts] PICTIC II ready-made?

2012-04-26 Thread Dennis Ferguson

On 27 Apr, 2012, at 02:57 , Chris Albertson wrote:
 Closed source drivers and binary blob firmware.I'd have nothing to do
 with a project that includes either of those. I'd require a open source
 platforms with a 100% free tool chain.   Also, it is a bit of overkill
 after all a bare PIC works fine for this application.

The only thing binary blob about the Raspberry Pi that I can see seems
to be the stuff associated with the graphics accelerator.  This is unfortunate,
but unless you are into bare-metal programming of high resolution graphics/video
(you can do that with an Arduino?) I'm not quite sure how that is relevant.

The programming interfaces for all peripherals on the SoC not associated with the
GPU seem to be well documented in the Broadcom data sheet:

   
http://www.designspark.com/files/ds/supporting_materials/Broadcom%20BCM2835.pdf

I don't know of anything useful which is missing, and I know a port of a
non-Linux operating system (NetBSD) to the board is being done using nothing
that is non-public.  You can leave the graphics binary blob out if you find
that offensive and have no use for it anyway.

I think the most annoying thing about the Raspberry Pi is that a lot of the GPIO
signals aren't brought out to connectors, perhaps to save money on the board.

Dennis Ferguson


[time-nuts] WWVB phase modulation test April 15-16

2012-04-11 Thread Dennis Ferguson
The WWVB web page at NIST, here

http://www.nist.gov/pml/div688/grp40/wwvb.cfm

has a notice about another phase modulation test on
Sunday and Monday.

Dennis Ferguson



Re: [time-nuts] NTP jitter with Linux

2012-04-05 Thread Dennis Ferguson

On 5 Apr, 2012, at 13:03 , shali...@gmail.com wrote:

 An older laptop (Pentium M for instance) can be had for $80 or so any day of 
 the week, won't take much space, is completely standalone (built-in keyboard 
 and display, built-in battery backup) and sips power when idle, which it will 
 be most of the time.
 
 The only issue is that you might be tempted to run more things on it and 
 affect NTP performance. But if you load it with BSD and use it just for that, 
 it will be a dandy solution.

I'll take some issue with that.  The best clock source for software
running on a computer, particularly when the applications might be
expected to take a lot of time stamps, is one which (a) works, (b)
has reasonable precision, and (c) is as cheap as possible to sample.
On Intel processors the most precise and inexpensive-to-read counter
available (i.e. conditions (b) and (c)) is the TSC, so you want to
use this if at all possible.  The problem with using the TSC is that
it sometimes violates condition (a), that is, the works part.  On
some older processors it does not necessarily increment at a constant
rate, and on boards with multiple CPUs there can be multiple TSCs with
different times, so in these cases you may have to use something else
which isn't as good as the TSC would be if it worked.
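A crude user-space illustration of conditions (a) and (c): read the clock back-to-back many times and look at the increments. (This sketch samples Python's time.monotonic_ns rather than the TSC itself, so it measures the whole clock-read path, not the raw counter.)

```python
import time

# Condition (c) -- cheapness -- matters when applications take many
# timestamps.  Reading the clock back-to-back shows the per-sample cost.
N = 10_000
samples = [time.monotonic_ns() for _ in range(N)]
deltas = [b - a for a, b in zip(samples, samples[1:])]

print("min increment:", min(deltas), "ns")
print("mean increment:", sum(deltas) / len(deltas), "ns")
# A clock that violated condition (a) -- e.g. per-CPU TSCs with different
# times -- could show negative increments in a test like this.
assert all(d >= 0 for d in deltas)
```

On a system where the kernel has selected the TSC as its clocksource the mean increment is typically a few tens of nanoseconds; with a slower fallback counter it can be an order of magnitude worse.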

For some random computer whose clock you want to set this is all fine.
It should use a counter known to work, and if the TSC doesn't it should
just use something else.  For something you are buying to be a dedicated
NTP server, however, it is worthwhile putting a working TSC very high
on the list of desirable attributes.  The problem with some older CPUs,
like the Pentium M, is they are of a vintage which did not guarantee the
TSC would increment at a constant rate, and a variable rate makes it useless
as a clock.  It is possible you could work around that at some cost (maybe
the sips power when idle part) but I'm of the opinion that life is too
short for that and it would be better to pick something with a known-to-work
TSC for this application.

The Atom processors aren't bad.

Dennis Ferguson


Re: [time-nuts] NTP jitter with Linux

2012-04-04 Thread Dennis Ferguson

On 4 Apr, 2012, at 16:10 , Mike S wrote:

 On 4/4/2012 6:51 PM, Eric Williams wrote:
 Could the CPU be reducing its clock rate when it's not being loaded?  Just
 a guess, most multi-core processors these days have power saving features
 like that.
 
 On Wed, Apr 4, 2012 at 3:22 PM, Mike Smi...@flatsurface.com  wrote:
   I've played around with different
   cpufreq setting, thinking it might be related to the processor speed 
  during
   an IRQ varying, but that seems to have minimal impact (performance vs.
   conservative vs. ondemand).
 
 Setting /sys/devices/system/cpu/cpuX/cpufreq/scaling_governor to 
 performance should lock that core to the max clock rate.
 
 In looking that up, I found that the script I made to set this was just doing 
 cpu0 (i.e. one of four cores). Doh! I've changed it to do all 4 cores, and am 
 trying that again to see if that's it.

I don't know much about Linux but if that doesn't help try to find out what
the operating system does in its idle loop.  If it is ending up in some
power-saving state when it is idle it may be volunteering to do this by
executing some magic `wait' instruction which does the power-saving thing as
a side effect, and if you can find where it does this you might be able to
work around it.

Dennis Ferguson




Re: [time-nuts] LEA6-T Group Buy

2012-04-02 Thread Dennis Ferguson

On 2 Apr, 2012, at 13:11 , lstosk...@cox.net wrote:

 
 
 Note that there should soon be a LEA6T eval board available from sysmocom
 
 http://laforge.gnumonks.org/weblog/2012/03/16/#20120316-osmo_lea6t_gps_timing
 
 Estimated price is 90 EUR excl VAT in the EU.
 
 Anyone know if these will have the RAW output for use with RTKLib?  Also 
 assume can program for 10 MHz out?
 
 Had the door slammed in my face when trying to buy an eval board for the 6-T 
 from the mfgr.

?? I bought an EVK-6T from the manufacturer a little while ago
without trouble.  The only thing associated with the transaction
that I wasn't perfectly happy with was the price.

I should say, though, that the manufacturer's eval kit puts the
board in a rather nice case, with the two programmable pulse outputs
only being available on the DE-9 RS232 connector, at RS232 voltage
levels.  Not only are RS232 voltages inconvenient for many purposes,
but the MAX3232 converter they used adds 100 ns of delay.  The high
speed versions of the signals are available on the board, but you
need to take the board out of the case to use them and they use those
teeny, tiny MMCX connectors.  Given a choice I'd rather have a board
without paying for a case I have to throw away, with better connectors
for the timing signals.

I think this is a very good 50 channel GPS receiver, though, and the
manufacturer's board isn't bad if you want to play with it from a computer
since both PPS outputs and the Extint input are conveniently tied to RS232
control pins.

Dennis Ferguson




Re: [time-nuts] LEA-6T Group buy

2012-03-26 Thread Dennis Ferguson

On 26 Mar, 2012, at 12:56 , Chris Albertson wrote:
 On Mon, Mar 26, 2012 at 12:50 PM, Attila Kinali att...@kinali.ch wrote:
 Moin,
 
 I have the numbers together for the group buy of the u-Blox LEA-6T.
 
 I have not been able to locate a spec sheet for there.  Do you have a
 link?   Or maybe you could say how these are improved over the M12M?

Apart from other issues it may potentially provide significantly
more accurate time for computer timekeeping via the Time Mark input,
depending on the accuracy of measurements on that input.

That is, rather than connecting the PPS output to a computer input pin
and then trying to timestamp when the interrupts occur, one can instead
tie an output pin from the computer to the Time Mark input and poll
for timestamps measured by the LEA-6T, say with a programmed sequence like

get computer clock timestamp
toggle Time Mark pin
get computer clock timestamp
. . .
ask LEA-6T for Time Mark timestamp

If done well that may get the time ambiguity at the computer end down from
the microsecond level of interrupt latency to the roughly 100 nanoseconds it
should take for a PIO write to a hardware register to complete.  If the
LEA-6T takes Time Mark timestamps with that precision then this may be a
significant improvement.
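The polling sequence above can be sketched as follows. The pin-toggle and receiver-query functions are hypothetical stand-ins, stubbed with the host clock so the example is self-contained; on real hardware the toggle would be a single PIO write to a GPIO register and the query something like a u-blox UBX time-mark message read.

```python
import time

def toggle_time_mark_pin():
    # Hypothetical: a single PIO write to a GPIO register driving the
    # receiver's Time Mark input.  No-op here.
    pass

def read_time_mark_timestamp():
    # Hypothetical: poll the receiver for the timestamp it captured on
    # the Time Mark edge.  Stubbed with the host clock for this sketch.
    return time.monotonic_ns()

t0 = time.monotonic_ns()        # computer clock just before the edge
toggle_time_mark_pin()          # the edge the receiver timestamps
t1 = time.monotonic_ns()        # computer clock just after the edge
receiver_ns = read_time_mark_timestamp()

# The edge fell somewhere in [t0, t1]; pair the receiver's timestamp with
# the midpoint.  The window width, not interrupt latency, bounds the error.
pair = ((t0 + t1) // 2, receiver_ns)
print("ambiguity window:", t1 - t0, "ns")
```

The point of the design is that the bracketing reads bound when the edge happened, so the error budget is the PIO write time rather than an interrupt path.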

Dennis Ferguson



Re: [time-nuts] WWVB phase plots

2012-03-19 Thread Dennis Ferguson

On 18 Mar, 2012, at 10:52 , John Seamons wrote:
 They do talk about using the 11-bit Barker code for autocorrelation. But the 
 sync bits transmitted only match the Barker code if you interpret them a 
 little bit out-of-order.

The part of the paper that talked about the Barker code confused me
somewhat since I couldn't quite figure out how it was relevant.  The
autocorrelation property of the Barker code is only interesting if
the Barker code is the only thing being sent (over and over), but
in this case the concerns are more about spurious correlations with
the variable data, something for which no solution seems to be
possible.

It is the case, however, that (non-circular) autocorrelations of
the fixed sequence are relevant at small offsets.  In your data the
fixed sequence seems to be

-1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, -1, -1, -1

which, ignoring the contribution of the variable data (which increases
with increasing offset), gives this basic result for offsets from 0 to
13 seconds:

14, 1, 2, 1, -6, 1, -2, -3, 0, -3, 0, 1, 0, 1

So there is a quite large autocorrelation at 4 seconds offset.  If I
weight the search pattern by the fixed pulse widths (there are 3 0.2
second pulses and 3 0.8 second pulses in the fixed sequence; I gave
the rest a weight of 0.5) that gets a little better, i.e.

7.0, 1.4, 0.4, 0.5, -2.1, -0.4, -0.4, -0.9, -0.3, -0.9, 0.0, 0.5, 0.0, 0.2

if I did that correctly, though at the apparent cost of making the
autocorrelation at a 1 second offset a bit worse.
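The unweighted numbers above are easy to reproduce (a sketch; the sequence is the one inferred from the plots, not an official specification):

```python
# Fixed 14-second sync sequence inferred from the plots (phase as +/-1)
sync = [-1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, -1, -1, -1]

def autocorr(seq, k):
    # non-circular autocorrelation of seq at offset k
    return sum(seq[i] * seq[i + k] for i in range(len(seq) - k))

acf = [autocorr(sync, k) for k in range(len(sync))]
print(acf)   # [14, 1, 2, 1, -6, 1, -2, -3, 0, -3, 0, 1, 0, 1]
```

The large negative sidelobe at offset 4 is clearly visible in the result.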

In any case, if this is the pattern they selected I really would have
liked to have seen a discussion of the tradeoffs involved in picking
it, along with the assumptions they made about how it would be
detected.  And I kind of hope I don't have to read about that in
someone's patent since technical descriptions written by lawyers are
really boring.

In any case, I think the paper left out the good parts.

Dennis Ferguson


Re: [time-nuts] WWVB phase plots

2012-03-19 Thread Dennis Ferguson
DCF77's AM modulation is a much better fit for what they did, and a
much better design in general.  All the useful phase modulation needs
to be carried by the carrier at full power.  DCF77's AM modulation drops
the carrier power for only 100 ms or 200 ms at the beginning of the second,
which gives them a full 0.8 seconds in every second at full power (if I'm
remembering right the minute marker has no carrier reduction, so the very
longest carrier reduction is only 0.2 seconds).  Their chip sequence is
just under 0.8 seconds long and sits in the full power part of each second.
WWVB is not nearly so convenient.  The carrier reductions for WWVB are
deeper than DCF77, making it even more imperative that the information be
carried in the high power segments only, but WWVB's carrier drops are 0.2,
0.5 or 0.8 seconds long, so in many seconds they only have 0.5 seconds of
high power and in 7 seconds per minute only 0.2 seconds of high power.
I think there's no good way to make DCF77's silk purse out of the WWVB
sow's ear.
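For reference, the per-second carrier budget works out as follows (a sketch assuming the standard WWVB AM format: a 0.2 s reduction for a 0 bit, 0.5 s for a 1 bit, and 0.8 s for a marker, with 7 marker seconds per minute):

```python
# WWVB AM frame: carrier-reduction length per second, by symbol type
reduction = {"0": 0.2, "1": 0.5, "marker": 0.8}

# Full-power carrier time left in each second for phase modulation
full_power = {k: round(1.0 - v, 1) for k, v in reduction.items()}
print(full_power)   # {'0': 0.8, '1': 0.5, 'marker': 0.2}
# The 7 marker seconds per minute leave only 0.2 s of full carrier each,
# versus DCF77's guaranteed 0.8 s of full carrier in every second.
```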

It is also the case the DCF77's phase modulation probably isn't as good
as it could be if the goal is to find it in the noise since it only swings
+/- 15 degrees rather than +/- 90.  Its big advantage might be that it
is high speed, with lots of transitions, so you can probably measure
phase alignment pretty accurately with that.  As a national time service,
however, it only needs to serve a fairly compact country relative to
WWVB's intended coverage area, so that plus WWVB's crappy AM format
probably pushed them to forget about trying to match DCF77 and to
just concentrate on doing the best they could to improve coverage.

That would be my guess, anyway.

Dennis Ferguson


On 19 Mar, 2012, at 19:47 , ehydra wrote:

 Hm. I had a quick look at http://en.wikipedia.org/wiki/WWVB
 I cannot see why it won't work with the DCF77 scheme. The carrier is
 always on-air. Am I missing something? Too low a bandwidth of the transmitting
 antenna?
 
 Sorry, I haven't followed the whole thread.
 
 - Henry
 
 
 Brooke Clarke schrieb:
 Hi Henry:
 There are millions of WWVB clocks in use and the new signal must be fully 
 compatible with them.
 
 
 
 -- 
 ehydra.dyndns.info
 




Re: [time-nuts] WWVB BPSK Receiver Project?

2012-03-16 Thread Dennis Ferguson

On 14 Mar, 2012, at 18:08 , Brooke Clarke wrote:
 The WWVB paper New Improved System for WWVB Broadcast given at the 43rd PTTI
 November 2011 is at:  http://jks.com/wwvb.pdf
 
 Part of the processing gain comes directly from the BPSK modulation and that 
 amounts to a little over 10 dB improvement, but there's a further 18 dB gain 
 to be had by accumulating an hour's worth of data and processing that.

It is a little interesting that the PTTI paper left out some
of the interesting details one would need to actually decode
the new signal, in particular the specification of the 14
second Sync sequence, which is necessary to know to find
the alignment of minutes, and the value of the 60-bit
hour-synchronization code, which defines the sequence
of phase reversals in each minute's modulation in an
hour and, as I understand it, is necessary to know to
take full advantage of the hour-averaging thing.

I assume this might have been done to allow the company
which participated in the design of the signal to complete
a receiver for it before they start transmitting that
way while keeping anyone else from starting a receiver
project until after the transmissions start?

Dennis Ferguson


Re: [time-nuts] Loran in the US

2012-03-08 Thread Dennis Ferguson

On 8 Mar, 2012, at 02:58 , Poul-Henning Kamp wrote:
 Has anybody asked them how good time/freq they're trying to deliver ?
 
 I would assume that they are aiming for a backup for GPS in
 telecom-GPSDO context.
 
 If so, frequency stability is priority number one and time is
 probably just better than 100msec or so

I could swear I saw something that said 50 ns, though I can
no longer find it and that sounds like science fiction.  I note,
though, that the Federal Register publication for the project,
here:

http://www.gpo.gov/fdsys/pkg/FR-2012-01-11/html/2012-307.htm

indicates they aren't just looking at Loran by itself.  The
MF dGPS bands and 500 kHz are also included in whatever they
are doing.

Dennis Ferguson



Re: [time-nuts] Loran transmitters back on the air.

2012-03-01 Thread Dennis Ferguson
The publication in the federal register, here

http://www.gpo.gov/fdsys/pkg/FR-2012-01-11/html/2012-307.htm

says they are playing with more than Loran.  There are
several MF bands they are playing with as well, in particular
the dGPS bands and 500 kHz.

I noticed a while ago that UrsaNav's UN-151 receiver was advertised
as being capable of processing multiple signals in the LF and MF bands,
and wondered what the MF part was about.  That is a bit clearer now.

Dennis Ferguson

On 1 Mar, 2012, at 21:04 , paul swed wrote:

 Hmmm did find a paper that suggests various goals and such and the old
 loran gear might not work. Depends on what modes they try.
 Would be great to find some form of updated news.
 Regards
 Paul.
 
 On Thu, Mar 1, 2012 at 8:25 PM, paul swed paulsw...@gmail.com wrote:
 
 Eloran is compatible with the older timing rcvrs. Or at least it was
 supposed to be. Now the message suggests that they will try other
 modulation modes. I couldn't find anything really further then what was
 sent.
 I did hook the longwire directly to the austron so far no lock and I am
 less then 70 miles from the Nantucket site.
 Will keep trying
 Regards
 Paul
 
 
 On Thu, Mar 1, 2012 at 8:20 PM, Bob Camp li...@rtty.us wrote:
 
 Hi
 
 The obvious advantage to backwards compatibility would be much greater
 coverage area. It is a bit tough to envision them getting a reasonable user
 population with a 100% from scratch approach. Indeed that may be wishful
 thinking.
 
 Bob
 
 
 
 On Mar 1, 2012, at 8:09 PM, Charles P. Steinmetz 
 charles_steinm...@lavabit.com wrote:
 
 Greg wrote:
 
 A friend in Texas has confirmed that Loran signals are now up and
 receivers are showing position. I am including a note from UrsaNav
 regarding this event.
 
 What are the odds that any long-term deployment would be
 backward-compatible with legacy Loran receivers (not the same as the
 initial tests being backward-compatible)?  The primary revenue stream would
 appear to be from sales of new receivers that use patented technology
 (unless the government wants to get back into the business of subsidizing
 Loran, which it just vacated -- not very likely).  Cynical, maybe, but it
 is always a good idea to keep an eye on the money.  I suppose they could
 make the enhancements transparent to legacy receivers, so you would buy new
 receivers if you needed the enhancements but could also use older receivers
 if you didn't.  But would they?  There does not appear to be an incentive
 to do so, absent a government subsidy.
 
 Best regards,
 
 Charles
 
 
 
 
 
 
 




Re: [time-nuts] Low-long-term-drift clock for board level integration?

2012-02-19 Thread Dennis Ferguson

On 19 Feb, 2012, at 15:56 , Bill Woodcock wrote:
 Hi. This is my first posting to this list, and I'm not a timekeeping 
 engineer, so my apologies in advance for my ignorance in this area. 
 
 I'm building a small device to do one-way delay measurements through network. 
  Once I'm done with prototyping, I'm planning a production run of several 
 hundred of the devices. They'll have a GPS receiver, probably a Trimble 
 Resolution SMT, and they have a bit of battery so they can initially go 
 outdoors for ~30 minutes to get a good fix, but then they get taken indoors 
 and plugged into the network, and probably never get a clear view of a GPS or 
 GLONASS satellite again.  
 
 From that point forward (and we hope the devices will have an operational 
 life of at least ten years) they'll be dependent on their internal clock and 
 NTP, but we really need them to stay synchronized to within 100 microseconds. 
 10 microseconds would be ideal, but 100 would be acceptable. And in order to 
 be useful, they need to stay synchronized at that level of precision 
 essentially forever. 

 
 My plan, such as it is, was just to get the best clock I could find within 
 budget, integrate it onto the motherboard we're laying out as the system 
 clock, and depend on NTPd to do the right thing with it.  


10, or even 100, microseconds is tough with NTP.  I don't think it is impossible,
but it requires a good, reliable network connection and a bunch of work to
identify and reduce the systematic errors.  And if NTP == ntpd I'm not sure
putting a better oscillator on the board is likely to help all by itself, since
ntpd's magic internal constants are organized to work with the class of
oscillators you typically find in computers, and this would need to be redone to
do anything useful with something better.  I think making use of NTP at the
10-100 microsecond level might require doing your own software; the generic
reference implementation probably won't cut it.

Before doing that you might consider some alternatives:

- If you are deploying this stuff in the US, and if cell phones (particularly
  Verizon or Sprint phones) work where you are installing the stuff, you might
  look at this for a time source:

  http://www.endruntechnologies.com/time-frequency-reference-cdma.htm

  This is good if it works everywhere you need it, and assuming CDMA networks
  continue to operate for another 10 years.

- Failing that, look at IEEE 1588.  The trouble with this is that it severely
  constrains the kind of network the equipment is attached to, and the gear
  used to build that network, but if this is in your control you can buy stuff
  for this without having to build it.

If none of the above works, and you just can't get GPS antennas installed, then
you may be stuck with NTP, but getting a reliable 10-100 microseconds out of
that is a lot closer to the research part of R&D than the development part.  I
don't think running the generic reference implementation, ntpd, will deliver
this.

Dennis Ferguson




Re: [time-nuts] Low-long-term-drift clock for board level integration?

2012-02-19 Thread Dennis Ferguson

On 19 Feb, 2012, at 21:08 , Bill Woodcock wrote:
 It's my assumption that some of them will be able to get enough GPS signal 
 (or GPS via a GSM BTS, as we also have a Sierra Wireless GSM chipset onboard) 
 and would thus be able to act as Stratum 1 servers for the others.

In the US I suspect all GSM base stations have GPS available (E911 support
generally requires it), but I think you may find that in many (most?) other
countries the GSM BTS gear has no idea what time it is.  GSM doesn't require
the time synchronization (it requires frequency, but they can often recover
that from the tail circuit connecting the BTS to the network), so in many
places they do without GPS either to save money or because the carriers are
subject to regulatory requirements to avoid allowing the country's
telecommunications facilities to become dependent on GPS (I assume because
their regulators don't fully trust the owners of GPS)…

GPS can sometimes work under non-optimum circumstances, but it sounds like
you have run out of fallback options apart from trying to advance the state
of the NTP art.

Dennis Ferguson


Re: [time-nuts] 32768 Hz from 10 MHz

2012-02-03 Thread Dennis Ferguson

On 3 Feb, 2012, at 14:15 , Orin Eman wrote:

 On Thu, Feb 2, 2012 at 7:16 PM, Hal Murray hmur...@megapathdsl.net wrote:
 
 
 It's possible to use Bresenham with two integers 10,000,000 and 32,768
 but I
 found no way to perform all the 24-bit calculations on an 8-bit PIC quick
 enough. Removing the GCD often helps but in this case the accumulator
 remains 3-bytes wide.
 
 To generate 32 kHz you have to toggle a pin and calculate if the next
 toggle
 must be 38 or 39 instructions in the future; all the math must occur
 within
 37 instructions. That's why I came up with the binary leap year kind of
 algorithm; it's as close to math-less as you can get.
 
 You missed the simple way.  Table lookup.  :)
 
 The table is only 256 slots long.
 
 That's toggling between 305 and 306 cycles.  If your CPU uses N clocks per
 instruction, multiply the table size by N.
 
 
 
 
 Well, I thought table lookup too, but I figured  a 2048 x 1 table.  Easily
 done with a rotating bit and 256 byte table.
 
 
 Assuming clocking a PIC at 10MHz, you have 2,500,000 instructions per
 second.  Since there was talk about time to the next toggle, we have
 2,500,000/65536 instructions between toggles, ie 38.1470... instructions.
 The fraction turns out to be 301/2048, so you have to distribute 301 extra
 instructions over every 2048 half-periods of the 32768Hz waveform.

I only barely know the instruction set on those processors, but it seems
like it should be way easier than that.  You know it is going to be 38 or
39 instructions, so the only question is when it should be 39.  The value
of 2,500,000/65536 is 38.1470… in decimal, but in hex it is exactly 0x26.25A;
that is, 0x26 is 38 decimal while the fractional part is only 10 bits
long.  This means you should be able to compute when the extra cycle is
required by keeping a 16-bit accumulator to which the fractional part
0x25A0 is added at every change and executing the extra instruction when
there is a carry out of that.  That seems straightforward.  If `lo' and `hi'
are the two halves of the accumulator then the working part of this becomes

	movlw	0xa0		; low byte of increment into W
	addwf	lo,f		; lo += W, may set carry
	movlw	0x25		; high byte of increment into W
	btfsc	STATUS,C	; skip next if carry clear
	addlw	1		; propagate the carry from the low add
	addwf	hi,f		; hi += W, may set carry
	; if carry is set here we need the extra instruction
	btfss	STATUS,C	; skip next if carry set
	goto	blorp		; carry clear, don't execute the extra instruction
	nop			; the extra instruction
blorp:
	; enough instructions more to make 38/39

Maybe someone who knows what they're doing can interpret that?
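
Whatever the right mnemonics are, the accumulator arithmetic itself is easy
to check in a few lines of Python (a sketch of the scheme above, not PIC
code):

```python
# Simulate the 16-bit fractional accumulator: add 0x25A0 every
# half-period and insert one extra instruction cycle whenever the
# addition carries out of 16 bits.
def extra_cycles(half_periods):
    acc = 0
    extras = 0
    for _ in range(half_periods):
        acc += 0x25A0
        if acc > 0xFFFF:          # carry out of the accumulator
            acc &= 0xFFFF
            extras += 1
    return extras

# The pattern repeats every 2048 half-periods with exactly 301 extra
# cycles, matching 2,500,000/65536 = 38 + 301/2048 instructions.
print(extra_cycles(2048))
```

Since 0x25A0 * 2048 is exactly 301 * 65536 the accumulator returns to zero
at the end of each 2048 half-period cycle, which is what makes the average
rate exact.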

Dennis Ferguson


Re: [time-nuts] GPS receiver vs local oscillator

2012-02-02 Thread Dennis Ferguson

On 3 Feb, 2012, at 05:07 , Hal Murray wrote:
 I thought the 4th satellite was needed to determine the time.  Wouldn't
 it take a 5th satellite to also determine the frequency of the local clock?
 
 Not really. There are two ways to get the position and time derivatives. One
 is to use two fixes, each of which gives you an (x,y,z,t) tuple; since you
 know what your expected delta-t is, you can calculate the real delta-t and
 get from that your frequency offset.
 
 That's the sort of thing I'm looking for, but I don't quite get it yet.
 
 I have 4 satellites. If I know f, I can solve for x, y, z, and t.  If I don't 
 know f, I'm short an equation.

If you are using an undisciplined free-running oscillator, as most cheap
receivers do, you never know f.  What you know is the frequency written on
the oscillator's package (call it fn, the nominal frequency), but the actual
f is a mystery.  Whatever f is, however, you assume f=fn and use that
oscillator to generate a local timescale to measure signal phases against.

When you solve for x, y, z and t from data generated by measuring the phase
of the incoming signals against your oscillator, the `t' you compute is
actually a delta_t with respect to the local time scale generated from that
oscillator.  The value of delta_t tells you the phase error of your local
timescale, so the rate of change of delta_t from sample to sample tells you
the error in the fn you assumed, that is, (f/fn - 1) averaged over the
sample interval.
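
Concretely, the arithmetic for two successive fixes is just this (the
numbers are illustrative, not from any real receiver):

```python
# Fractional frequency error of the free-running oscillator estimated
# from two solved time offsets.  delta_t is the GPS-minus-local offset
# from each fix; tau is the local-timescale interval between fixes.
delta_t0 = 1.25e-6          # seconds (hypothetical)
delta_t1 = 1.27e-6          # seconds (hypothetical)
tau = 1.0                   # seconds between fixes
ffe = (delta_t1 - delta_t0) / tau
print(ffe)                  # about 2e-8, i.e. the clock runs 0.02 ppm off
```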

 If I get two samples, I have 8 equations and I need to solve for:
  x0, y0, z0, t0, and f0
  x1, y1, z1, t1, and f1
 That's 10 unknowns with 8 equations.  I get a 9th equation by setting t1 = t0 
 + 1.  I'm still short one equation.
 
 Can I do something like assume f0 = f1?  That would make sense if the change 
 in frequency is small relative to the noise/error in all the other 
 calculations.

I suspect that if the local oscillator does not exhibit fairly good short
term stability there is no hope of any of this working.  That doesn't matter,
though, since the GPS `t' you compute is actually a delta_t from whatever your
local time scale is, so (delta_t1 - delta_t0) directly tells you how the rate
of your local time scale differs from the rate of the GPS timescale.  The GPS
receiver in fact has no knowledge of the GPS `t' other than as a function of
the local time scale.  The GPS time scale is purely a paper time scale from the
receiver's point of view unless the receiver does the additional work of somehow
using that information to generate a real timescale out of the paper.

Dennis Ferguson


Re: [time-nuts] Another Trimble Thunderbolt-like GPS?

2011-12-19 Thread Dennis Ferguson

On 19 Dec, 2011, at 11:25 , Rob Kimberley wrote:

 There was a spec issued many years ago to the industry from Lucent I believe
 to come up with a GPS product for base station requirements. 10 MHz, 1PPS,
 OCXO, RS-232 port, and a certain holdover spec.  The Thunderbolt was one,
 Starloc another, NanoSync (from Odetics/Zyfer). There were others.
 
 Rob Kimberley

I think it was actually Qualcomm.  The requirement for GPS time and 10
microsecond time synchronization (which informs the holdover spec) came
from Qualcomm's CDMA specification and is unique to it.  GSM, UMTS and
(I think) LTE base stations can get by without GPS at all.

Dennis Ferguson


Re: [time-nuts] Another Trimble Thunderbolt-like GPS?

2011-12-19 Thread Dennis Ferguson
Yes, USA GSM/UMTS operators also use GPS at their base stations.  They may
use it for timing, but I think the primary requirement for it comes from
E911 support, and maybe to provide AGPS and/or clock-setting support for
phones.

European and Asian operators often do without it (they typically won't set
the time on your phone either).  I know of more than a few countries which
in fact have national regulatory constraints against relying on GPS for
anything important.  The base stations still need a frequency reference,
but they can generally get that by recovering the clock from the
transmission circuit which connects the base station to the rest of the
network.

Dennis Ferguson

On 19 Dec, 2011, at 16:22 , li...@lazygranch.com wrote:

 I was inside an AT&T shack about a month ago. They have GPS timing inside.  I 
 took some photographs, so I will dig up later what timing they use. 
 
 -Original Message-
 From: Dennis Ferguson dennis.c.fergu...@gmail.com
 Sender: time-nuts-boun...@febo.com
 Date: Mon, 19 Dec 2011 14:00:29 
 To: Discussion of precise time and frequency measurementtime-nuts@febo.com
 Reply-To: Discussion of precise time and frequency measurement
   time-nuts@febo.com
 Subject: Re: [time-nuts] Another Trimble Thunderbolt-like GPS?
 
 
 On 19 Dec, 2011, at 11:25 , Rob Kimberley wrote:
 
 There was a spec issued many years ago to the industry from Lucent I believe
 to come up with a GPS product for base station requirements. 10 MHz, 1PPS,
 OCXO, RS-232 port, and a certain holdover spec.  The Thunderbolt was one,
 Starloc another, NanoSync (from Odetics/Zyfer). There were others.
 
 Rob Kimberley
 
 I think it was actually Qualcomm.  The requirement for GPS time and 10 
 microsecond time synchronization (which informs the holdover spec) came 
 from Qualcomm's CDMA specification and is unique to it.  GSM, UMTS and 
 (I think) LTE base stations can get by without GPS at all.
 
 Dennis Ferguson




Re: [time-nuts] Symmetricom TimeSource 2700

2011-11-30 Thread Dennis Ferguson
Yes, you are correct.  10 microseconds comes directly from the CDMA
spec: it is the amount of time the reference at a base station is allowed
to drift while in holdover before it is out of spec and needs to
be removed from service.

I still don't know what they do about path delay since (as you point
out) I believe this can be measured only after a handset has registered
with the tower, and the timing receivers never register.  And the path
delay can be quite large if you live far enough away from civilization.
When I take my Verizon phone to Toronto it often registers with a
Verizon tower which must be at least 20 miles away (i.e. the width
of the lake).  If that was the distance to the only tower the timing
receiver had to listen to that would be more than 100 microseconds of
delay, and I don't see how it could correct that.

Dennis Ferguson

On 30 Nov, 2011, at 02:42 , Peter Bell wrote:

 It's been a while, but from what I remember the sync channel message
 does indeed include the system time (which is the same as GPS time
 with a UTC offset) and also the PN code offset that this cell is
 using.  This leaves the only remaining unknown as the path delay to
 the cell and the possible error in the local clock on the BTS.
 
 The other possible source of error is that if one of the sites loses
 GPS lock, it will flywheel - this will generate a yellow alarm, but
 this is not communicated over the air interface - I suspect that the
 largest component of that stated 10uS maximum timing error is based on
 worst-case accumulated phase error.  I also suspect this is why that
 Symmetricom box is tracking multiple pilots, so it can isolate and
 discard any that appear to be significantly out.
 
 Regards,
 
 Pete
 
 
 On Wed, Nov 30, 2011 at 1:37 PM, Dennis Ferguson
 dennis.c.fergu...@gmail.com wrote:
 I think they track both the CDMA pilot and sync channels.  The latter
 channel sends a message which tells the phone about the cell, and
 gives the phone enough information to figure out the time of day.
 
 I'm pretty sure CDMA phones have to know what time it is before they
 register with the cell.  To receive the paging channel and negotiate a
 registration the phone has to receive and send the long code chip sequence,
 which I think is 2^40 bits long and takes more than a month to repeat.
 The phone has to know what time it is before it has any hope of tracking
 that.
 
 I don't know how (or if) they deal with the distance from the cell.  The
 accuracy of the PPS signal from CDMA time receivers is usually specified
 as no better than 10 microseconds or so, so they may just assume the cell
 tower is close enough not to make it worse than 10 microseconds.
 
 Dennis Ferguson
 
 On 29 Nov, 2011, at 18:54 , Peter Bell wrote:
 Assuming it's just tracking the CDMA pilots, the 1PPS output is likely
 not aligned with UTC.  The problem is that the pilot channel is just a
 PN sequence with no modulating data - so when you lock to it you can
 know that your local clock is 19200Hz * 64 chips/bit (1.228MHz) - but
 that's all you know.  Even the code phase doesn't tell you anything,
 since there are two unknowns - the first is the distance to the cell
 and the second is the code phase offset on this specific pilot (each
 BTS has its modulating sequence offset by an integer multiple of 64
 chips to reduce mutual interference) - the second piece of information
 you can obtain by reading one of the overhead channels, but the first
 is basically not available just using a receiver (your phone can do
 it, since it can transmit back to the BTS and measure the round
 trip timing offset).
 
 




Re: [time-nuts] Symmetricom TimeSource 2700

2011-11-29 Thread Dennis Ferguson
I think they track both the CDMA pilot and sync channels.  The latter
channel sends a message which tells the phone about the cell, and
gives the phone enough information to figure out the time of day.

I'm pretty sure CDMA phones have to know what time it is before they
register with the cell.  To receive the paging channel and negotiate a
registration the phone has to receive and send the long code chip sequence,
which I think is 2^40 bits long and takes more than a month to repeat.
The phone has to know what time it is before it has any hope of tracking
that.

I don't know how (or if) they deal with the distance from the cell.  The
accuracy of the PPS signal from CDMA time receivers is usually specified
as no better than 10 microseconds or so, so they may just assume the cell
tower is close enough not to make it worse than 10 microseconds.

Dennis Ferguson

On 29 Nov, 2011, at 18:54 , Peter Bell wrote:
 Assuming it's just tracking the CDMA pilots, the 1PPS output is likely
 not aligned with UTC.  The problem is that the pilot channel is just a
 PN sequence with no modulating data - so when you lock to it you can
 know that your local clock is 19200Hz * 64 chips/bit (1.228MHz) - but
 that's all you know.  Even the code phase doesn't tell you anything,
 since there are two unknowns - the first is the distance to the cell
 and the second is the code phase offset on this specific pilot (each
 BTS has its modulating sequence offset by an integer multiple of 64
 chips to reduce mutual interference) - the second piece of information
 you can obtain by reading one of the overhead channels, but the first
 is basically not available just using a receiver (your phone can do
 it, since it can transmit back to the BTS and measure the round
 trip timing offset).




Re: [time-nuts] PC time app

2011-11-26 Thread Dennis Ferguson

On 25 Nov, 2011, at 21:56 , Steve . wrote:
 I'm curious as to what folks are doing with PCs that require microsecond
 accuracy for days or weeks or what have you.
 
 Any examples?
 
 Curious,
 Steve

I have a PCI-X board with an FPGA which implements a clock running
at 320 MHz.  The 320 MHz can be phase-locked to an external 5 or 10 MHz
frequency input, and the card also has 4 PPS inputs.  A transition on
a PPS input causes the FPGA to record a timestamp, with a precision of
not quite 3 ns, and deliver it to software via an interrupt.  The 10 MHz
and PPS outputs from my GPS receiver are synchronous, so once the board
clock is set it keeps the time of the GPS receiver without any further
adjustment.

The system (the OS is NetBSD, but with the kernel timekeeping replaced) computes
its time as a linear function of the CPU's cycle counter, which on my machines
seems to run at a constant 2.4 GHz.  I can get a sample timestamp (actually
a pair of them; the board-computer time comparison mechanism is the trickiest
part of the design) from the FPGA by doing a load from a card register, so
an 'rdtsc; load; rdtsc' sequence gives me a sample offset between the
computer's clock and the card's clock, with a constant systematic error which
(arguably) should be less than +/- 10 ns and with the board's precision of
about 3 ns.

I get sample offsets at randomly jittered intervals which average to about
0.25 seconds, so I get about 4 offsets per second with about 3 ns of
round-off noise.  The processing of these reduces to a linear least squares
fit (the y-value is the offset, the x-value is the time of the sample with
respect to the computer's clock) after some sanity filtering.  The least
squares fit gives me a frequency error and a time offset error, along with
confidence intervals for each.  I adjust the computer's clock when either
the frequency error or the time offset becomes non-zero with 80% confidence.
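
A minimal sketch of that fit, with synthetic noise-free samples (my own
illustration, not the code described above):

```python
# Least-squares line fit of offset samples against local sample times:
# the slope is the fractional frequency error and the value of the
# line at x = 0 is the time offset.
def lsq_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx             # frequency error, dimensionless
    intercept = my - slope * mx   # time offset at x = 0, seconds
    return slope, intercept

xs = [0.0, 0.25, 0.50, 0.75, 1.00]        # sample times, seconds
ys = [1.0e-8 + 2.0e-9 * x for x in xs]    # synthetic, noise-free offsets
slope, intercept = lsq_fit(xs, ys)
print(slope, intercept)                   # ~2e-9 and ~1e-8
```

In the real case the residuals from the fit also yield the confidence
intervals that gate the adjustments.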

Typically I find the result of this to be, very roughly, a clock adjustment
every 10 seconds, with a frequency adjustment on the order of 10^-9 and a
time adjustment on the order of 10 ns.  This is not perfectly reliable, of
course; if I leave the cover off the computer and cold-spray the computer's
innards I can drive the clock crazy, so it depends on temperature variations
inside the case being modest, or at least occurring relatively slowly
compared to my offset sample rate.  When left alone in a rack in a quiet
room, however, I seldom see anything bad happening, so I think it isn't
dangerous to assert that the arrangement is typically keeping the computer's
clock within +/- 20 ns of the GPS receiver, with worst case excursions being
no worse than maybe +/- 50 ns.

This has a number of uses, but is particularly good for NTP and PTP
development.  You can use a board in a server to synchronize the server's
system clock to a GPS receiver, and then use a board tracking the same GPS
receiver in a client machine to independently measure how well the software
is managing the client machine's clock.  This avoids having the NTP or PTP
software grade its own homework.

Dennis Ferguson


Re: [time-nuts] Epoch rollover ?

2011-10-21 Thread Dennis Ferguson

On 21 Oct, 2011, at 11:53 , k4...@aol.com wrote:
 Bruce, the most common cause of a GPS receiver getting the date incorrect is 
 due to cross-correlation.  And cross-correlation is usually the result of too 
 much gain in the GPS antenna's LNA.  And depending on the make of the 
 receiver, some will clear the date information with a power cycling and 
 others require erasing the flash memory where those parameters are stored.  
 Do you have a way to see the relative C/No reading for your receiver?  Most 
 receivers start experiencing cross-correlation when this reading exceeds 50. 
 Regards,  Doug, K4CLE…

That might be true in general, but seems exceedingly unlikely to be the
problem when the date error has a magnitude of exactly 1024 weeks.  The
problem is that while GPS tells you the time within a 1024 week era
it provides no information about which 1024 week era we're in so even
perfectly received GPS signals don't tell the unit what is needed to fix
that.  Either the receiver needs a separate source of time information
to determine the era, or it needs to implement a heuristic to guess at
it from the data it does have.

Some of the heuristics I've heard of are these (or maybe combinations of these):

- Assume the date must be more recent than when the firmware was compiled.
  By itself this leaves the device with a 1024 week rollover problem, but
  the 1024 weeks are counted from the date the firmware was compiled rather
  than from the GPS epoch.

- Assume the date must be more recent than the last date on which the unit
  saved restart information (ephemerides, etc.) in its flash memory, or
  whatever non-volatile storage it uses for this.  This may fail if the
  unit's flash memory fails.

- Guess at the era based on the leap second count (i.e. the UTC offset),
  informed by some expectation of how many leap seconds are likely to
  occur in each 1024 week era.  This would be an excellent heuristic if
  the rate of leap second insertion were relatively predictable over long
  periods, but as it has turned out the relative dearth of leap seconds
  in the past dozen years might cause particular implementations of this
  guesstimate to fail now.
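
As an illustration of the first heuristic, resolving the ambiguous 10-bit
week number against a pivot week might look like this (a hypothetical
helper, not code from any particular receiver):

```python
# Resolve a broadcast GPS week number (modulo 1024) to a full week
# count, assuming the true date cannot be earlier than pivot_week
# (e.g. the week in which the firmware was compiled).
def resolve_gps_week(week_mod_1024, pivot_week):
    # smallest full week >= pivot_week that matches modulo 1024
    era = (pivot_week - week_mod_1024 + 1023) // 1024
    return era * 1024 + week_mod_1024

# Pivot week 1500 (hypothetical); a broadcast week of 100 must then
# belong to the following era.
print(resolve_gps_week(100, 1500))   # 2148
```

The 1024-week ambiguity reappears, of course, once the true date is more
than 1024 weeks past the pivot.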

I don't know if any of this matters in the current case.  The last (which
I think was considered the best practice, or at least the best that
could be made of a bad situation, at the time the GPS rollover occurred)
is of particular interest right now since it suggests that ending UTC
leap seconds might eventually have some unintended consequences for boat
anchor GPS timing equipment.

Dennis Ferguson


Re: [time-nuts] Epoch rollover?

2011-10-19 Thread Dennis Ferguson

On 19 Oct, 2011, at 14:18 , Bruce Lane wrote:
   Did we just have another GPS epoch rollover? My trusty old Odetics 425
 seems to believe the date is March 4th, 1992.
 
   I could probably correct it in firmware, if I looked hard and long
 enough, but the ToD is still correct and the frequency standard is
 staying nicely locked. Not sure if recovering the correct Julian date is
 worth the effort.

That's neat.  I understand that the typical heuristics used to determine
the GPS epoch included one or more of:

a) Assume the time must be more recent than the date the firmware was compiled;

b) Assume the time must be more recent than the last time it dumped the
   ephemerides out to flash to speed reacquisition across a reboot (assuming
   it does that); or

c) Take a guess based on the leap second count (i.e. the UTC offset) and some
   expectation of the number of leap seconds per epoch.

If your unit does a) it may be that the unit's firmware just passed 1024
weeks of age.  If it does b) it may be that your flash memory (or whatever
non-volatile storage it keeps that stuff in) has died.  If it does c) it
may be that the relative dearth of leap seconds in the past dozen years has
fooled that algorithm.  Or it could be something else entirely.

It would be interesting to know which of those it is.

Dennis Ferguson





Re: [time-nuts] Cable delay correction for Tbolt Cs substitude

2011-10-16 Thread Dennis Ferguson

On 16 Oct, 2011, at 11:21 , WarrenS wrote:
 Does anyone ever add a temperature controller on the antenna? Maybe that 
 should be my next test.

I've seen commercial temperature-controlled antennas.  Here's one:

http://adriang.com/AACE-Industries/products.htm

Dennis Ferguson


Re: [time-nuts] DCF77 question

2011-10-12 Thread Dennis Ferguson

On 12 Oct, 2011, at 16:03 , asma...@fc.up.pt wrote:
 QUOTE:
 
 DCF77 marks seconds by reducing carrier power
 for an interval beginning on the second.
 
 UNQUOTE
 
 This is not a good (Heaviside step) time marker.

For what it is worth, the amplitude modulated on-time
second pulse isn't the only time marker DCF77 transmits,
it also transmits a phase modulated 512 bit pseudo-random
sequence in the remainder of the second.  See

http://en.wikipedia.org/wiki/DCF77

or, for the best technical description I've seen, google

Zeit- und Normalfrequenzverbreitung mit DCF77

The phase modulation is much, much better than the
amplitude modulation for a pile of reasons.

 Is it possible to decode that signal by any very
 simple on-my-shelf gear, as for instance a PC
 sound card,to recover a good seconds time mark?
 
 Do you know some software to perform the above task?

I don't yet.  I've been playing with DCF77 with a high-zoot
RFspace DSP receiver located in Toronto, however, and have
found that even though the waterfall display shows only noise
at 77.5 kHz, so there's no chance of measuring the amplitude
modulated time marker, I can usually detect the phase modulation
sequence in the noise for quite a few hours per day using the
brute force approach of convolution with an 800 ms long
filter matched to the sequence.  When I find the time to
actually finish this I want to begin recording propagation
delays when the signal is detectable.
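
That brute-force detection can be sketched in a few lines (a toy
illustration with a synthetic baseband signal, not my receiver code):

```python
# Matched-filter detection: correlate a noisy signal against a known
# +/-1 chip sequence and take the lag with the largest correlation.
import random

random.seed(1)
chips = [random.choice((-1, 1)) for _ in range(512)]   # known sequence
offset = 137                                           # where it is hidden
signal = [0.0] * 1024
for i, c in enumerate(chips):
    signal[offset + i] += c
signal = [s + random.gauss(0, 0.3) for s in signal]    # add noise

def correlate(sig, ref, lag):
    return sum(sig[lag + i] * r for i, r in enumerate(ref))

best = max(range(len(signal) - len(chips) + 1),
           key=lambda lag: correlate(signal, chips, lag))
print(best)   # should land on the embedded offset
```

The correlation gain of the 512-chip sequence is what makes the marker
detectable even when the signal itself is invisible on a waterfall display.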

If you end up wanting to build your own detector I'd highly
recommend chasing the phase modulated sequence rather than
the amplitude modulated second marker.  BPC on 68.5 kHz transmits
a similar phase modulated code (according to a PTTI paper they
presented) which there may be some hope of detecting on the
North America west coast.  While they don't publish the details,
I've been planning to take the receiver and an antenna with me
when I next go to Hong Kong and see if I can record enough
good quality signal to eventually reverse engineer the
encoding out of it.

Dennis Ferguson



Re: [time-nuts] The future of UTC

2011-07-18 Thread Dennis Ferguson

On 18 Jul 2011, at 05:23 , Tony Finch wrote:

 Jose Camara camar...@quantacorp.com wrote:
 
 I think before adding to the fire of UTC1, UTC7 etc. why not just abolish
 this silliness called Daylight Savings Time?  If there is any benefit to it,
 just change business operating hours instead.
 
 If you want to know why your suggestion doesn't work, David Prerau has
 collected many many examples. http://www.seizethedaylight.com/

Yet most of the people on the planet live in a place where DST is
not observed now, and that includes people living as far north as 65 degrees
latitude and as far south as 55 degrees.  Should they all be told this doesn't
work?

Dennis Ferguson