Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread Gary E. Miller
Yo Mark!

On Tue, 19 Jul 2016 21:29:24 +
Mark Sims  wrote:

> The NMEA sentences for sending date and time were very poorly thought
> out.  Several different sentences can contain the time (maybe the
> same time in different sentences with a group,  maybe each sentence
> has a different time).   Only the ZDA and RMC sentences have the
> date.  There is no sentence with unified time/date, position, and
> velocity info.

Yup, when you are talking NMEA.  There are a number of proprietary
binary messages that do.  Most of them are badly thought out in
different ways.

In practice, there is often no advantage to the binary over the NMEA
when gpsd is done with it.

RGDS
GARY
---
Gary E. Miller Rellim 109 NW Wilmington Ave., Suite E, Bend, OR 97703
g...@rellim.com  Tel:+1 541 382 8588


___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread Scott Stobbe
Very true. If you're exchanging data for loose timing purposes (wall clock)
over a UART, whether it is binary coded or ASCII coded is immaterial. You
will always introduce at least 1/16 of a bit time of jitter, if not one
full bit time, as the UART syncs to the start-bit edge.
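As a rough sketch of those numbers (a back-of-the-envelope helper; the 16x oversampling factor is the conventional UART receiver design):

```python
def uart_sync_jitter_us(baud: int, oversample: int = 16) -> tuple[float, float]:
    """Return (1/oversample of a bit, one full bit) in microseconds.

    A receiver sampling at 16x the baud rate can align to the start-bit
    edge no better than 1/16 of a bit time; the worst case is a full bit.
    """
    bit_us = 1e6 / baud
    return bit_us / oversample, bit_us

# At 9600 baud: one bit is ~104.2 us, so 1/16 of a bit is ~6.5 us
sixteenth, full_bit = uart_sync_jitter_us(9600)
```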

In full honesty I have never read the official NMEA-0183 specification,
just a sample from an unofficial NMEA0183.pdf:

ZDA – Time & Date: UTC, day, month, year and local time zone
$--ZDA,hhmmss.ss,xx,xx,xxxx,xx,xx*hh

So for a GPS or a timing unit on an NMEA-0183 bus, does it report its best
estimate of the time, to the nearest 10 ms, as of the moment the packet is
sent (since a PPS line isn't shared)?
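A minimal parser for that sentence shape (illustrative only; the field layout follows the sample above, and real code should also verify the *hh checksum):

```python
def parse_zda(sentence: str) -> dict:
    """Parse a $--ZDA sentence into UTC time, date, and local zone offset.

    Expected shape: $--ZDA,hhmmss.ss,dd,mm,yyyy,zh,zm*hh
    Empty zone fields are treated as zero.
    """
    body = sentence.strip().lstrip('$').split('*')[0]   # drop '$' and checksum
    f = body.split(',')
    t = f[1]
    return {
        'hour': int(t[0:2]), 'minute': int(t[2:4]), 'second': float(t[4:]),
        'day': int(f[2]), 'month': int(f[3]), 'year': int(f[4]),
        'zone_h': int(f[5]) if f[5] else 0,
        'zone_m': int(f[6]) if f[6] else 0,
    }

z = parse_zda("$GPZDA,201530.00,04,07,2002,00,00*60")
# → hour 20, minute 15, second 30.0, day 4, month 7, year 2002
```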

On Tue, Jul 19, 2016 at 1:17 PM, Chris Albertson 
wrote:

> On Mon, Jul 18, 2016 at 10:44 AM, Scott Stobbe 
> wrote:
> > I am happy to hear your issue was resolved. What I meant to say is the
> > problem could also be mitigated using the UART's flow control; this could
> > be done by the original GPS designers or by an end user if the CTS line
> > is pinned out.
>
> The original GPS designers were sending NMEA-0183 data out to devices
> that accept and use NMEA data.   GPS was not the first to work with
> NMEA.  Lots of other instruments also output NMEA.   I doubt they ever
> would have envisioned people using NMEA data for precision timing.
> Typically if you want to do precision timing you'd use a GPS that
> outputs some binary data format.  NMEA-0183 does not have flow control.
>
> --
>
> Chris Albertson
> Redondo Beach, California


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread Chris Albertson
On Mon, Jul 18, 2016 at 10:44 AM, Scott Stobbe  wrote:
> I am happy to hear your issue was resolved. What I meant to say is the
> problem could also be mitigated using the UART's flow control; this could
> be done by the original GPS designers or by an end user if the CTS line is
> pinned out.

The original GPS designers were sending NMEA-0183 data out to devices
that accept and use NMEA data.   GPS was not the first to work with
NMEA.  Lots of other instruments also output NMEA.   I doubt they ever
would have envisioned people using NMEA data for precision timing.
Typically if you want to do precision timing you'd use a GPS that
outputs some binary data format.  NMEA-0183 does not have flow control.

-- 

Chris Albertson
Redondo Beach, California


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread David
On Tue, 19 Jul 2016 06:16:16 -0700, you wrote:

>On 7/18/16 9:44 PM, David wrote:
>> The aged 16550 has various timeouts so an interrupt is triggered with
>> a partially full buffer even if it is below the interrupt threshold.
>> For implementations which do not do that, I assume they intend for the
>> UART to be polled regularly.
>>
>exactly... you have some sort of blocking read that waits either for an 
>interrupt or for time to expire

Oh, from the application program interface?  Ya, that would be a
problem if it lacks a non-blocking read.  The UART itself has a status
flag which says whether there is data available to be read, but if you
cannot access that, then you have to wait for the UART's interrupt
timeout, assuming it has one.

I seem to recall this issue coming up long ago in connection with
dodgy 16550 implementations where data was getting stuck below the
interrupt threshold but I never encountered it myself.  For the lower
level programming I have done, it was never an issue since I had
direct access to the hardware and could check the flags anytime I
wanted.


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread Martin Burnicki
John Ackermann N8UR wrote:
> Long ago I measured the impact of the linux low_latency flag on a 16550 UART. 
>  I don't know where that data is sitting now, but I remember that it made a 
> significant difference.
>
>> On Jul 18, 2016, at 9:59 PM, Hal Murray  wrote:
>>
>>
>> jim...@earthlink.net said:
>>> except that virtually every UART in use today has some sort of buffering
>>> (whether a FIFO or double buffering) between the CPU interface and the  bits
>>> on the wire, which completely desynchronizes the bits on the wire  from the
>>> CPU interface.
>>
>> The idea was to reduce the CPU load processing interrupts by batching things 
>> up.
>>
>> Some of those chips generate an interrupt when they see a return or line-feed 
>> character.
>>
>> Most of them have an option to disable that batching.  On Linux the 
>> setserial 
>> command has a low_latency option.  I haven't measured the difference.  It 
>> would be a fun experiment.

AFAIK the low_latency flag just sets the UART's FIFO threshold to 1,
i.e. the UART generates an IRQ as soon as the 1st character comes in. If you
don't set this flag then the FIFO threshold is set to a higher value.

A *very* quick search on the Linux source code seems to indicate the
default threshold is 16 in current kernels, but if I remember correctly
then it was 4 or 8 in earlier kernel versions.

If you need to timestamp the 1st character of the serial time string
then things are easy. For example, for Meinberg time strings the
on-time character is the 1st character, STX (0x02), and subsequent
characters are sent without gaps. So it doesn't matter much whether you get an
IRQ after the 1st character and compensate for 1 character only, or the
IRQ occurs after the 8th character and you compensate for 8 characters.
But of course you need to know the current FIFO threshold.

I think if you need to timestamp e.g. the CR or LF at the end of a
string, which may even have variable length, then the timing can
vary depending on the actual string length.

E.g., with a FIFO threshold of 16 the first IRQ is generated when 16
characters have been received, but if the whole string is e.g. only 30
characters then only 14 characters follow after the first part, and the
FIFO threshold (16) is never reached by that single string. I'm not sure
if the UART then generates an IRQ anyway after some kind of timeout, but
this seems to make exact timing quite a bit more tricky.
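The compensation arithmetic is simple once the threshold is known; a minimal sketch (assuming 8N1 framing, i.e. 10 bits on the wire per character):

```python
def fifo_irq_delay_ms(baud: int, threshold: int, bits_per_char: int = 10) -> float:
    """Delay, in ms, from the first character's start bit hitting the wire
    to the IRQ that fires once `threshold` characters have arrived
    back-to-back (8N1 framing: start + 8 data + stop = 10 bits)."""
    return threshold * bits_per_char * 1000.0 / baud

# At 9600 baud: threshold 1 adds ~1.04 ms, threshold 8 ~8.3 ms, and
# threshold 16 ~16.7 ms of fixed, compensatable offset before the IRQ.
```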

Martin



Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread jimlux

On 7/18/16 9:44 PM, David wrote:

The aged 16550 has various timeouts so an interrupt is triggered with
a partially full buffer even if it is below the interrupt threshold.
For implementations which do not do that, I assume they intend for the
UART to be polled regularly.

exactly... you have some sort of blocking read that waits either for an 
interrupt or for time to expire





Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread John Ackermann N8UR
Long ago I measured the impact of the linux low_latency flag on a 16550 UART.  
I don't know where that data is sitting now, but I remember that it made a 
significant difference.

> On Jul 18, 2016, at 9:59 PM, Hal Murray  wrote:
> 
> 
> jim...@earthlink.net said:
>> except that virtually every UART in use today has some sort of buffering
>> (whether a FIFO or double buffering) between the CPU interface and the  bits
>> on the wire, which completely desynchronizes the bits on the wire  from the
>> CPU interface.
> 
> The idea was to reduce the CPU load processing interrupts by batching things 
> up.
> 
> Some of those chips generate an interrupt when they see a return or line-feed 
> character.
> 
> Most of them have an option to disable that batching.  On Linux the setserial 
> command has a low_latency option.  I haven't measured the difference.  It 
> would be a fun experiment.
> 
> 
> -- 
> These are my opinions.  I hate spam.
> 
> 
> 



Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-19 Thread David J Taylor

From: Scott Stobbe

I am happy to hear your issue was resolved. What I meant to say is the
problem could also be mitigated using the UART's flow control; this could
be done by the original GPS designers or by an end user if the CTS line is
pinned out: gate the UART with a conservative delay, say 500 ms from the
time mark or PPS signal. The serial string would just sit in the transmit
buffer until the fixed delay expires and the UART starts transmitting.
==

It was a Garmin problem, and they released updated firmware after some 
badgering.


Again, it's nothing to do with the serial communication, but with CPU loading.

Many GPS devices can only just send the default information at default speed 
(typically 4800/9600), so the UART would need quite a lot of buffering for 
such a scheme to work.  For any serious use, the PPS line is the way to go.


Cheers,
David
--
SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-tay...@blueyonder.co.uk
Twitter: @gm8arv 




Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread David
The aged 16550 has various timeouts so an interrupt is triggered with
a partially full buffer even if it is below the interrupt threshold.
For implementations which do not do that, I assume they intend for the
UART to be polled regularly.

On Mon, 18 Jul 2016 23:42:34 -0400, you wrote:

>I can't speak for linux, but I have been bitten by FIFO watermark
>interrupts on micros before. If you set an interrupt for a 3/4 full FIFO,
>the last one or two characters will sit in the receive buffer and never
>trigger the RX interrupt. For a command -> response device which doesn't
>have a constant data-stream, It isn't until a PC application resends the
>command sequence that enough characters are in the FIFO to trigger the
>watermark interrupt. The easy fix was to interrupt on any single character
>being available, but would have been nice to have a timeout interrupt on a
>partially full FIFO.


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Scott Stobbe
Bob, thanks for the nudge towards Ublox. USB was a bit of blue-sky thinking.

I took a look at the LEA-M8F module, which by default includes a VCTCXO.
It will also take care of disciplining a (VC)OCXO provided a DAC is included.
What was interesting to me is that you can "exchange" time with it; copied
from the HW Integration Manual
<https://www.u-blox.com/sites/default/files/products/documents/LEA-M8F_HIM_%28UBX-1434%29.pdf>:

*1.6.3 FREQ_PHASE_IN0 / EXINT0, FREQ_PHASE_IN1 / EXTINT1 These two
frequency/phase inputs are provided for connecting an external source of
phase (pulse stream) or frequency reference into the module. The pulse
stream can be derived from a frequency reference or external
synchronization source. The module will measure and report the phase or
frequency offset of this input with respect to the current synchronization
source and optionally steer the related oscillator to bring the externally
derived pulses into alignment*


On Mon, Jul 18, 2016 at 7:28 PM, Bob Stewart <b...@evoria.net> wrote:

> Interestingly enough, the Ublox LEA series of timing receivers has a USB
> port which you can connect directly to a USB cable.  Of course you want to
> use an ESD device, etc.
> Bob
>  -
> AE6RV.com
>
> GFS GPSDO list:
> groups.yahoo.com/neo/groups/GFS-GPSDOs/info
>
>   From: jimlux <jim...@earthlink.net>
>  To: time-nuts@febo.com
>  Sent: Monday, July 18, 2016 5:46 PM
>  Subject: Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)
>
> On 7/18/16 1:44 PM, Scott Stobbe wrote:
> > Well, I suppose in the case of USB, the host hardware (consumer PC) is
> not
> > going to have any special hardware. But, if a gps receiver implements a
> USB
> > interface, in addition to standard NMEA data, it could also report the
> > phase and frequency error of your USB clock (since it has to recover it
> > anyways to get the usb data).
>
> The USB interface timing is going to be buried deep, deep inside some
> microcontroller or ASIC. Imagine an FTDI part, for instance.  I can't
> imagine a GPS mfr caring enough about this to spend any money on trying
> to figure out how to do it.
>


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Scott Stobbe
I can't speak for linux, but I have been bitten by FIFO watermark
interrupts on micros before. If you set an interrupt for a 3/4 full FIFO,
the last one or two characters will sit in the receive buffer and never
trigger the RX interrupt. For a command -> response device which doesn't
have a constant data-stream, It isn't until a PC application resends the
command sequence that enough characters are in the FIFO to trigger the
watermark interrupt. The easy fix was to interrupt on any single character
being available, but would have been nice to have a timeout interrupt on a
partially full FIFO.
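The failure mode above can be sketched with a toy FIFO model (no particular UART; a made-up 16-deep FIFO with a watermark of 12 and no character timeout):

```python
class ToyFifo:
    """A toy 16-deep RX FIFO whose only interrupt source is the watermark.

    With no character-timeout interrupt, a short response strands its
    characters below the watermark and the RX interrupt never fires.
    """
    def __init__(self, depth: int = 16, watermark: int = 12):
        self.buf: list[int] = []
        self.depth, self.watermark = depth, watermark
        self.irqs = 0

    def receive(self, data: bytes) -> None:
        for b in data:
            if len(self.buf) < self.depth:
                self.buf.append(b)
            if len(self.buf) >= self.watermark:
                self.irqs += 1
                self.buf.clear()        # ISR drains the whole FIFO

fifo = ToyFifo()
fifo.receive(b"OK\r\n")      # a 4-character response: below the watermark
stranded = len(fifo.buf)     # 4 characters stuck; fifo.irqs is still 0
```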

On Mon, Jul 18, 2016 at 9:59 PM, Hal Murray  wrote:

>
> jim...@earthlink.net said:
> > except that virtually every UART in use today has some sort of buffering
> > (whether a FIFO or double buffering) between the CPU interface and the
> bits
> > on the wire, which completely desynchronizes the bits on the wire  from
> the
> > CPU interface.
>
> The idea was to reduce the CPU load processing interrupts by batching
> things
> up.
>
> Some of those chips generate an interrupt when they see a return or
> line-feed
> character.
>
> Most of them have an option to disable that batching.  On Linux the
> setserial
> command has a low_latency option.  I haven't measured the difference.  It
> would be a fun experiment.
>
>
> --
> These are my opinions.  I hate spam.
>
>
>


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Hal Murray

jim...@earthlink.net said:
> except that virtually every UART in use today has some sort of buffering
> (whether a FIFO or double buffering) between the CPU interface and the  bits
> on the wire, which completely desynchronizes the bits on the wire  from the
> CPU interface. 

The idea was to reduce the CPU load processing interrupts by batching things 
up.

Some of those chips generate an interrupt when they see a return or line-feed 
character.

Most of them have an option to disable that batching.  On Linux the setserial 
command has a low_latency option.  I haven't measured the difference.  It 
would be a fun experiment.
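For reference, the flag Hal mentions can also be toggled from user space with the same ioctl that setserial uses; a Linux-only sketch (ioctl numbers and struct layout are the generic Linux ones, not verified on every architecture, and on recent kernels the flag may be a no-op):

```python
import array
import fcntl

# Linux serial ioctls and the ASYNC_LOW_LATENCY bit (include/uapi/linux/serial.h)
TIOCGSERIAL = 0x541E
TIOCSSERIAL = 0x541F
ASYNC_LOW_LATENCY = 0x2000

def enable_low_latency(fd: int) -> None:
    """Set ASYNC_LOW_LATENCY on an already-open serial port fd.

    struct serial_struct begins with five ints -- type, line, port, irq,
    flags -- so 'flags' is index 4 when the struct is viewed as an int array.
    """
    ss = array.array('i', [0] * 64)          # oversized buffer for the struct
    fcntl.ioctl(fd, TIOCGSERIAL, ss, True)   # read current settings
    ss[4] |= ASYNC_LOW_LATENCY
    fcntl.ioctl(fd, TIOCSSERIAL, ss, True)   # write them back
```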


-- 
These are my opinions.  I hate spam.





Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Bob Stewart
Interestingly enough, the Ublox LEA series of timing receivers has a USB port 
which you can connect directly to a USB cable.  Of course you want to use an 
ESD device, etc.
Bob
 -
AE6RV.com

GFS GPSDO list:
groups.yahoo.com/neo/groups/GFS-GPSDOs/info

  From: jimlux <jim...@earthlink.net>
 To: time-nuts@febo.com 
 Sent: Monday, July 18, 2016 5:46 PM
 Subject: Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)
   
On 7/18/16 1:44 PM, Scott Stobbe wrote:
> Well, I suppose in the case of USB, the host hardware (consumer PC) is not
> going to have any special hardware. But, if a gps receiver implements a USB
> interface, in addition to standard NMEA data, it could also report the
> phase and frequency error of your USB clock (since it has to recover it
> anyways to get the usb data).

The USB interface timing is going to be buried deep, deep inside some 
microcontroller or ASIC. Imagine an FTDI part, for instance.  I can't 
imagine a GPS mfr caring enough about this to spend any money on trying 
to figure out how to do it.



Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread jimlux

On 7/18/16 1:44 PM, Scott Stobbe wrote:

Well, I suppose in the case of USB, the host hardware (consumer PC) is not
going to have any special hardware. But, if a gps receiver implements a USB
interface, in addition to standard NMEA data, it could also report the
phase and frequency error of your USB clock (since it has to recover it
anyways to get the usb data).


The USB interface timing is going to be buried deep, deep inside some 
microcontroller or ASIC. Imagine an FTDI part, for instance.  I can't 
imagine a GPS mfr caring enough about this to spend any money on trying 
to figure out how to do it.




Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread jimlux

On 7/18/16 12:35 PM, David wrote:

On Mon, 18 Jul 2016 11:43:32 -0700, you wrote:


except that virtually every UART in use today has some sort of buffering
(whether a FIFO or double buffering) between the CPU interface and the
bits on the wire, which completely desynchronizes the bits on the wire
from the CPU interface.

Determinism in UART timing between the CPU bus interface and the "bits
on the wire" has never been something that is specified.  You can go
back to venerable parts like the 8251, and there's no spec in the data
sheet.
( there's a tCR specified as 16 tCY for the read setup time from CTS*,
DSR* to READ* assert.  And tSRX (2 usec min) and tHRX (2 usec min) for
the setup and hold of the internal sampling pulse relative to RxD. And
20 tCY as a max from center of stop bit to RxRDY, and then whatever the
delay is from the internal RxRDY to the bus read)


Long ago I remember seeing a circuit design or application note using
an 8250 or similar where the UART start bit was gated so that the
leading edge could be used for precision synchronization.


And it probably depended on idiosyncratic behavior of the 8250 and 
fooling with the transmit clock input to the chip.   That is, a part 
that claimed "8250 emulation" may or may not work the same.  Sort of 
like Printer ports on IBM PCs.. they'd all work with a unidirectional 
Centronics printer, some would work as a bidirectional port, some wouldn't.


As soon as you get to parts that have the baudrate generator internally 
or which are highly integrated multiprotocol chips (like the Zilog do 
everything dual serial port) it gets much trickier.


I had a terrible time a couple years ago getting a synchronous RS422 
interface (1 pair with clock at symbol rate + 1 pair with data) that 
would easily interface to a PC.  Most of the "synchronous RS422" 
interfaces out there use one of the multiprotocol chips which support 
BiSync, HDLC, etc. and they try to find sync characters or stuff flags, 
etc. but not very many support "raw synchronous"




Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Scott Stobbe
Well, I suppose in the case of USB, the host hardware (consumer PC) is not
going to have any special hardware. But, if a gps receiver implements a USB
interface, in addition to standard NMEA data, it could also report the
phase and frequency error of your USB clock (since it has to recover it
anyways to get the usb data).

I don't have the answer here, but USB-audio ICs suffer similar problems in
clock distribution. The gist of it seems to be locking onto the host's SOF
packets; see "Programmable Clock Generation and Synchronization for USB
Audio Systems".

On Mon, Jul 18, 2016 at 2:43 PM, jimlux  wrote:

> On 7/18/16 8:51 AM, Scott Stobbe wrote:
>
>> I suppose it is one of those cases where, the GPS designers decided you
>> shouldn't ever use the serial data for sub-second timing, and consequently
>> spent no effort on serial latency and jitter.
>>
>> Most UARTs I have come across have been synthesized with a 16x baud clock
>> and included flow control. It would not have been too much effort to spec
>> latency as some mu ±100 ns and jitter of ±1/(16*baud).
>>
>> For 9600 baud, the jitter on the start bit would be ±6.5 us.
>>
>> If CTS was resampled at 1 full bit time (9600 baud), the jitter would
>> be ±104 us.
>>
>>
>
> except that virtually every UART in use today has some sort of buffering
> (whether a FIFO or double buffering) between the CPU interface and the bits
> on the wire, which completely desynchronizes the bits on the wire from the
> CPU interface.
>
> Determinism in UART timing between the CPU bus interface and the "bits on
> the wire" has never been something that is specified.  You can go back to
> venerable parts like the 8251, and there's no spec in the data sheet.
> ( there's a tCR specified as 16 tCY for the read setup time from CTS*,
> DSR* to READ* assert.  And tSRX (2 usec min) and tHRX (2 usec min) for the
> setup and hold of the internal sampling pulse relative to RxD. And 20 tCY
> as a max from center of stop bit to RxRDY, and then whatever the delay is
> from the internal RxRDY to the bus read)
>
>
> There's "what we observed in a running circuit" or "what we inferred from
> knowing the internal design".
>
>
> Since a huge number of serial ports these days are implemented with a USB
> interface, the timing uncertainty is even greater, because you're dealing
> with the 8kHz frame timing on USB.
>
>
> This is why PTP compatible interfaces added time tagging to the PHY layer.
>
>
>
>


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread David
On Mon, 18 Jul 2016 11:43:32 -0700, you wrote:

>except that virtually every UART in use today has some sort of buffering 
>(whether a FIFO or double buffering) between the CPU interface and the 
>bits on the wire, which completely desynchronizes the bits on the wire 
>from the CPU interface.
>
>Determinism in UART timing between the CPU bus interface and the "bits 
>on the wire" has never been something that is specified.  You can go 
>back to venerable parts like the 8251, and there's no spec in the data 
>sheet.
>( there's a tCR specified as 16 tCY for the read setup time from CTS*, 
>DSR* to READ* assert.  And tSRX (2 usec min) and tHRX (2 usec min) for 
>the setup and hold of the internal sampling pulse relative to RxD. And 
>20 tCY as a max from center of stop bit to RxRDY, and then whatever the 
>delay is from the internal RxRDY to the bus read)

Long ago I remember seeing a circuit design or application note using
an 8250 or similar where the UART start bit was gated so that the
leading edge could be used for precision synchronization.


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Scott Stobbe
I am happy to hear your issue was resolved. What I meant to say is the
problem could also be mitigated using the UART's flow control; this could
be done by the original GPS designers or by an end user if the CTS line is
pinned out: gate the UART with a conservative delay, say 500 ms from the
time mark or PPS signal. The serial string would just sit in the transmit
buffer until the fixed delay expires and the UART starts transmitting.

On Mon, Jul 18, 2016 at 12:19 PM, David J Taylor <
david-tay...@blueyonder.co.uk> wrote:

> I suppose it is one of those cases where, the GPS designers decided you
> shouldn't ever use the serial data for sub-second timing, and consequently
> spent no effort on serial latency and jitter.
>
> Most UARTs I have come across have been synthesized with a 16x baud clock
> and included flow control. It would not have been too much effort to spec
> latency as some mu ±100 ns and jitter of ±1/(16*baud).
>
> For 9600 baud, the jitter on the start bit would be ±6.5 us.
>
> > If CTS was resampled at 1 full bit time (9600 baud), the jitter would
> be ±104 us.
> ==
>
> Scott,
>
> You're right about the design priorities (and we have had to take Garmin
> to task on this, but they did fix the problem), but it's not the UART which
> is the major problem, but that the tiny CPU inside is taking a variable
> amount of time to have the serial data ready.  We're talking tens, possibly
> hundreds of milliseconds peak-to-peak jitter.
>
> Cheers,
> David
> --
> SatSignal Software - Quality software written to your requirements
> Web: http://www.satsignal.eu
> Email: david-tay...@blueyonder.co.uk
> Twitter: @gm8arv


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Bob Camp
Hi

> On Jul 18, 2016, at 12:19 PM, David J Taylor  
> wrote:
> 
> I suppose it is one of those cases where, the GPS designers decided you
> shouldn't ever use the serial data for sub-second timing, and consequently
> spent no effort on serial latency and jitter.
> 
> Most UARTs I have come across have been synthesized with a 16x baud clock
> and included flow control. It would not have been too much effort to spec
> latency as some mu ±100 ns and jitter of ±1/(16*baud).
> 
> For 9600 baud, the jitter on the start bit would be ±6.5 us.
> 
> If CTS was resampled at 1 full bit time (9600 baud), the jitter would
> be ±104 us.
> ==
> 
> Scott,
> 
> You're right about the design priorities (and we have had to take Garmin to 
> task on this, but they did fix the problem), but it's not the UART which is 
> the major problem, but that the tiny CPU inside is taking a variable amount 
> of time to have the serial data ready.  We're talking tens, possibly hundreds 
> of milliseconds peak-to-peak jitter.


….. but …. 

It’s been a long time since 9600 baud was a fast baud rate. It is pretty
common these days to run at 115k baud on something like this. Indeed, a
number of GPS modules will only run at that speed or faster if you want the
full feature set to work. Most modern modules will run much faster than 115k
if you want them to. The simple fact that they need the higher baud rate to
get all the data out forces a better serial I/O approach in a modern module.

In order for sawtooth correction to work, the relation of the serial message
to the PPS needs to be pretty well defined. It is either talking about the
*next* PPS edge or about the *prior* PPS edge. If it is ambiguous relative
to the PPS, you cannot be sure what it is relating to.

If the module has a PPS out and has sawtooth correction (or uses the same
code base as one that does), the serial timing string is not going to be all
over the place. They are no longer running itty-bitty CPUs in these things.
ARMs running at >= 400 MHz are the typical approach these days. Running out
of clock cycles to get it all done went away at least 5 years ago, and more
like 10 years for the “usual suspects” that you see in timing applications.

Can you still find a 20 or 30 year old module on eBay that has issues? Sure
you can. It’s not what I would call a modern part, even if it is being sold
as “new in box”. Can you find modules that simply do not keep time at all?
Sure you can. That’s not the serial port’s fault; it’s the fact that that
specific module is broken. Don’t use that one, move on.
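The edge ambiguity is easy to quantify with a toy model: a sawtooth value applied to its matching PPS edge cancels exactly, but applied to the wrong (adjacent) edge it adds noise instead (illustrative simulation; the 24 ns granularity is a made-up oscillator period):

```python
import random

def residual_rms_ns(pairs, shift: int = 0) -> float:
    """RMS residual after applying each sawtooth value to the edge `shift`
    pulses away; shift=0 pairs the correction with its own edge."""
    n = len(pairs)
    res = [pairs[i][0] + pairs[(i + shift) % n][1] for i in range(n)]
    return (sum(r * r for r in res) / n) ** 0.5

rng = random.Random(42)
# raw PPS quantization error in [0, 24) ns; the message reports its negative
pairs = [(e, -e) for e in (rng.uniform(0.0, 24.0) for _ in range(1000))]

matched = residual_rms_ns(pairs, 0)      # correction cancels exactly: 0.0
mismatched = residual_rms_ns(pairs, 1)   # roughly 10 ns RMS of added noise
```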

Bob

> 
> Cheers,
> David
> -- 
> SatSignal Software - Quality software written to your requirements
> Web: http://www.satsignal.eu
> Email: david-tay...@blueyonder.co.uk
> Twitter: @gm8arv 



Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread jimlux

On 7/18/16 8:51 AM, Scott Stobbe wrote:

I suppose it is one of those cases where the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mean value mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled at 1 full bit time (9600 baud), the jitter would
be ±104 us.




except that virtually every UART in use today has some sort of buffering 
(whether a FIFO or double buffering) between the CPU interface and the 
bits on the wire, which completely desynchronizes the bits on the wire 
from the CPU interface.


Determinism in UART timing between the CPU bus interface and the "bits 
on the wire" has never been something that is specified.  You can go 
back to venerable parts like the 8251, and there's no spec in the data 
sheet.
(there's a tCR specified as 16 tCY for the read setup time from CTS*, 
DSR* to READ* assert.  And tSRX (2 usec min) and tHRX (2 usec min) for 
the setup and hold of the internal sampling pulse relative to RxD. And 
20 tCY as a max from center of stop bit to RxRDY, and then whatever the 
delay is from the internal RxRDY to the bus read)



Any numbers you do get are either "what we observed in a running 
circuit" or "what we inferred from knowing the internal design".



Since a huge number of serial ports these days are implemented with a 
USB interface, the timing uncertainty is even greater, because you're 
dealing with the 8kHz frame timing on USB.



This is why PTP compatible interfaces added time tagging to the PHY layer.





Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread David J Taylor

I suppose it is one of those cases where the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mean value mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled at 1 full bit time (9600 baud), the jitter would
be ±104 us.
==

Scott,

You're right about the design priorities (and we have had to take Garmin 
to task on this, but they did fix the problem). It's not the UART that is 
the major problem; it's that the tiny CPU inside takes a variable amount 
of time to have the serial data ready.  We're talking tens, possibly 
hundreds of milliseconds of peak-to-peak jitter.


Cheers,
David
--
SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-tay...@blueyonder.co.uk
Twitter: @gm8arv 




Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Scott Stobbe
I suppose it is one of those cases where the GPS designers decided you
shouldn't ever use the serial data for sub-second timing, and consequently
spent no effort on serial latency and jitter.

Most UARTs I have come across have been synthesized with a 16x baud clock
and included flow control. It would not have been too much effort to spec
latency as some mean value mu ±100 ns and jitter of ±1/(16*baud).

For 9600 baud, the jitter on the start bit would be ±6.5 us.

If CTS was resampled at 1 full bit time (9600 baud), the jitter would
be ±104 us.
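
Those bounds fall straight out of the baud clock. A quick sketch of the
arithmetic (sampling uncertainty from the N-x oversampling clock only,
ignoring any FIFO or host-side buffering):

```python
def start_bit_jitter_us(baud, oversample=16):
    """Worst-case start-bit sampling uncertainty, in microseconds,
    for a UART that samples with an `oversample`-x baud clock."""
    return 1e6 / (oversample * baud)

print(start_bit_jitter_us(9600))     # ~6.5 us with a 16x clock
print(start_bit_jitter_us(9600, 1))  # ~104 us at one full bit time
```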

On Sat, Jul 16, 2016 at 3:13 PM, Mark Sims  wrote:

> I just added some code to Lady Heather to record and plot the time that
> the timing message arrived from the receiver (well, actually the time that
> the screen update routine was called,  maybe a few microseconds
> difference).I am using my existing GetMsec() routine which on Windoze
> actually has around a 16 msec granularity.  The Linux version uses the
> Linux nanosecond clock (divided down to msec resolution).  I just started
> testing it on a Ublox 8M in NMEA and binary message mode...  really
> surprising results to come shortly...
>
>


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread Martin Burnicki
Mark, Chris,

Chris Albertson wrote:
> Can't you take care of this in the build system?  I never go near
> Windows, the last version I used was Win 95.  But on other systems I
> always use something like the GNU Autotools, CMake, or whatever, and
> part of the process is to check for the availability of each system
> call and library and then the source is built using what's on that
> specific machine.   I'd guess that there is something like this in
> Windows.  Is GNU Autoconf ported to Windows?  If so then use
> QueryPerformanceCounter() if it is available.  It seems much cleaner
> to take care of this kind of thing in the build process

The QueryPerformanceCounter() (QPC) call is available on all Windows
versions since Windows NT. I'm not sure if it was supported on Windows
9x, though; the Windows 9x versions were more like DOS with a graphical
user interface.

QPC is implemented in the Windows Hardware Abstraction Layer (HAL). At
least Windows versions around XP were shipped with different versions of
the HAL DLL, and the Windows installer determined during installation
which version to use. The different versions used different timers on
the particular PC, and depending on the timer which was actually used
(TSC, HPET, PMTIMER, ...) the QPC call worked with different clock
frequencies and thus provided different resolution.

When Windows XP was current, the CPU types then shipping from both Intel
and AMD had problems with the TSC: the TSC clock frequency could change
when the CPU clock frequency changed due to power saving, and TSCs
might not have been synchronized across different cores in the same
physical CPU.

This is why you could force Windows XP to always use the PMTIMER, which
is part of the ACPI support chipset; if I remember correctly, SP3 for
Windows XP did this automatically to avoid problems with the TSC.
You can use the QueryPerformanceFrequency() call to determine the clock
frequency of the timer used for QPC, and the frequency typically tells
you which timer/counter circuit of the PC it actually is.
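
As a sketch of that last point (the mapping is a heuristic, and the
values are typical rather than guaranteed: the ACPI PM timer runs at
3.579545 MHz, HPET commonly at 14.318180 MHz, and recent Windows reports
a fixed 10 MHz for a TSC-backed QPC):

```python
def guess_qpc_source(qpf_hz):
    """Guess which hardware timer backs QPC from the value that
    QueryPerformanceFrequency() would return.  Heuristic only; the
    frequencies below are common conventions, not guarantees."""
    if qpf_hz == 3_579_545:
        return "ACPI PM timer"
    if qpf_hz == 14_318_180:
        return "HPET"
    if qpf_hz == 10_000_000:
        return "TSC (modern Windows reports a fixed 10 MHz)"
    return "TSC or other platform counter"
```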

One important point is that the TSC can be read very much faster than
the other timers/counters, since it's just reading a CPU register,
while the other circuits are part of the chipset and need to be accessed
via a peripheral bus.

Modern Windows versions determine much more reliably whether the TSC can
be used without problems, and use it if appropriate.

Modern Windows versions (Windows 8 and newer) also provide some new API
calls which return the system time with higher resolution/precision than
the original API calls.

For example, the original API call GetSystemTimeAsFileTime() only had a
coarse resolution of 0.5 to ~16 ms, depending on the Windows version
and some conditions. Now there's a new API call,
GetSystemTimePreciseAsFileTime(), which always provides 100 ns
precision/resolution. The same applies to some other calls for which
"Precise" variants are now available.

A common practice is to check at runtime if a "Precise" call is
supported by the OS version under which the application is currently
running.

For example, at program startup try to import the symbol
GetSystemTimePreciseAsFileTime and set up a function pointer with it. If
the symbol can't be imported (e.g. when running on Windows XP), set the
pointer to GetSystemTimeAsFileTime, and get system time stamps only via
that pointer. That way you have a single executable which benefits
from the "Precise" call if available, and falls back to the standard
call if it's not available.
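
The same bind-once-at-startup pattern, sketched here in Python rather
than against the Windows API (time.time_ns() stands in for the "Precise"
call; it only exists on Python 3.7+, so older interpreters take the
fallback branch):

```python
import time

# Prefer the higher-resolution call when the runtime provides it, and
# fall back to the coarser one otherwise.  Resolved once at startup,
# in the spirit of the GetProcAddress technique described above.
if hasattr(time, "time_ns"):
    now_ns = time.time_ns
else:
    now_ns = lambda: int(time.time() * 1e9)

stamp = now_ns()  # every caller uses the same resolved function
```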

This page
https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408%28v=vs.85%29.aspx

provides a good overview of the available functions.

Martin



Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-18 Thread David J Taylor
You can also use the QueryPerformanceCounter and related functions for 
better precision.


From: Mark Sims

Heather's gotta work with XP (and maybe Win98)...  too many people 
(including me) run it on old trashy laptops, so no fancy pants new fangled 
Windoze calls allowed...


In the past I've avoided the use of QueryPerformanceCounter due to potential 
issues with AMD processors, multi-core processors and multi-processor 
systems,  inaccurate/invalid reported CPU clock frequency (TSC tick count 
divisor) values,  variable clock rate systems, etc.   I'm now back to using 
it, but have added an option for switching back to GetTickCount() and its 
16 msec granularity.  I'm getting very good results so far.

___

Mark,

You can easily use the new functions if they are available simply by asking 
Kernel32.dll whether it knows about them.  If not, use the old function, if 
so, use the new.  The result from old and new is identical in format, just 
better in precision in the newer.




==
var
  FKernel32: THandle;
  FPreciseFT: procedure (var lpSystemTimeAsFileTime: TFileTime); stdcall;

begin
  // kernel32.dll is always mapped, so this just bumps its reference
  // count and the function pointer stays valid after FreeLibrary.
  FKernel32 := LoadLibrary ('kernel32.dll');
  if FKernel32 <> 0 then
    begin
    FPreciseFT := GetProcAddress (FKernel32,
      'GetSystemTimePreciseAsFileTime');
    if @FPreciseFT = nil then
      FPreciseFT := GetProcAddress (FKernel32, 'GetSystemTimeAsFileTime');
    FreeLibrary (FKernel32);
    end;
end.
==

Cheers,
--
SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-tay...@blueyonder.co.uk
Twitter: @gm8arv 




Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-17 Thread Chris Albertson
Can't you take care of this in the build system?  I never go near
Windows, the last version I used was Win 95.  But on other systems I
always use something like the GNU Autotools, CMake, or whatever, and
part of the process is to check for the availability of each system
call and library and then the source is built using what's on that
specific machine.   I'd guess that there is something like this in
Windows.  Is GNU Autoconf ported to Windows?  If so then use
QueryPerformanceCounter() if it is available.  It seems much cleaner
to take care of this kind of thing in the build process.

On Sun, Jul 17, 2016 at 2:23 PM, Mark Sims  wrote:
> Heather's gotta work with XP (and maybe Win98)...  too many people (including 
> me) run it on old trashy laptops, so no fancy pants new fangled Windoze calls 
> allowed...
>
> In the past I've avoided the use of QueryPerformanceCounter due to potential 
> issues with AMD processors, multi-core processors and multi-processor 
> systems,  inaccurate/invalid reported CPU clock frequency (TSC tick count 
> divisor) values,  variable clock rate systems, etc.   I'm now back to using 
> it, but have added an option for switching back to GetTickCount() and its 16 
> msec granularity.  I'm getting very good results so far.
> ---
>> You can also use the QueryPerformanceCounter and related functions for better 
>> precision.



-- 

Chris Albertson
Redondo Beach, California


Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-17 Thread David J Taylor

From: Mark Sims

I just added some code to Lady Heather to record and plot the time that the 
timing message arrived from the receiver (well, actually the time that the 
screen update routine was called,  maybe a few microseconds difference). 
I am using my existing GetMsec() routine which on Windoze actually has 
around a 16 msec granularity.  The Linux version uses the Linux nanosecond 
clock (divided down to msec resolution).  I just started testing it on a 
Ublox 8M in NMEA and binary message mode...  really surprising results to 
come shortly...

___

Mark,

Thanks for those updates.

For Windows lower than 8, turn on the high-resolution timer and you can get 
millisecond level (0.977 ms IIRC).  You can also use the 
QueryPerformanceCounter and related functions for better precision.


 https://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx

However for current Windows (8, 8.1, 10) the situation is much better, as you 
can get 100 ns precision using the new GetSystemTimePreciseAsFileTime call:


 https://msdn.microsoft.com/en-us/library/windows/desktop/hh706895(v=vs.85).aspx

I wrote up a little more here:

 http://www.satsignal.eu/ntp/TSCtime.html

based on:

 http://www.lochan.org/2005/keith-cl/useful/win32time.html

I look forward to your results.

Cheers,
David
--
SatSignal Software - Quality software written to your requirements
Web: http://www.satsignal.eu
Email: david-tay...@blueyonder.co.uk
Twitter: @gm8arv 




Re: [time-nuts] GPS message jitter (was GPS for Nixie Clock)

2016-07-16 Thread Tom Van Baak
Hi Mark,

As one example of what you'll see, scroll down to the NMEA Latency/Jitter plot 
at:
http://leapsecond.com/pages/MG1613S/

In that 900 sample (15 minutes) run, the mean latency was 350.2 ms with a 
standard deviation (jitter) of 10.7 ms. I'll dig out some other data I may 
have. It will be quite different depending on receiver make/model.

For this plot I made TIC measurements between the leading edge of the 1PPS and 
the leading edge of the start bit of the first byte of the first NMEA sentence.

BTW, on Windows use QueryPerformanceCounter if you want granularity-free 
millisecond or even microsecond time interval resolution.

/tvb

- Original Message - 
From: "Mark Sims" 
To: 
Sent: Saturday, July 16, 2016 12:13 PM
Subject: [time-nuts] GPS message jitter (was GPS for Nixie Clock)


>I just added some code to Lady Heather to record and plot the time that the 
>timing message arrived from the receiver (well, actually the time that the 
>screen update routine was called,  maybe a few microseconds difference).  I 
>am using my existing GetMsec() routine which on Windoze actually has around a 
>16 msec granularity.  The Linux version uses the Linux nanosecond clock 
>(divided down to msec resolution).  I just started testing it on a Ublox 8M in 
>NMEA and binary message mode...  really surprising results to come shortly...
> 