On 9/23/11 1:45 PM, Poul-Henning Kamp wrote:
In message <4e7cdeb0.8070...@earthlink.net>, Jim Lux writes:

Actually, the really annoying one is where I have a good clock that's
stable, but I need to keep adjusting "time" to match someone else's
terrible clock.  Most clock disciplining/time propagation models assume
your bad clock is following a better clock.

That is exactly what happens when you put an OCXO or Rb in a computer
and run NTPD against a server across the internet :-)


I still have a hard time drawing a boundary about this next level up,
and maybe I'm misunderstanding you, so let me think out loud for
a moment:


It's pretty obvious that you can build a suitably general mathematical
model that will cover anything you can expect to encounter:

A polynomial of a dozen degrees will catch any PLL-like regulation
pretty well; add a Fourier series for periodic terms like your
temperature variations, and finally chop it into segments to
correctly deal with discontinuities from power failures or
upsets.

But isn't that so general that it becomes meaningless ?
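For concreteness, a hypothetical sketch of that general form: a polynomial plus a short Fourier series, evaluated per segment, with segments catching the discontinuities. All of the numbers and names here are illustrative, not part of any existing standard.

```python
import math

def eval_segment(count, seg):
    """Evaluate one segment of a general clock model:
    time = polynomial(count) + sum of sinusoidal terms.
    seg = (start, stop, poly_coeffs, periodic_terms); poly_coeffs are
    low-order-first, each periodic term is (amplitude, ang_freq, phase)."""
    start, stop, poly, periodic = seg
    dt = count - start  # evaluate relative to segment start for conditioning
    t = sum(c * dt**i for i, c in enumerate(poly))
    t += sum(a * math.sin(w * dt + p) for a, w, p in periodic)
    return t

def clock_to_time(count, segments):
    """Pick the segment whose [start, stop) interval covers the raw
    count; segment boundaries model power failures and upsets."""
    for seg in segments:
        if seg[0] <= count < seg[1]:
            return eval_segment(count, seg)
    raise ValueError("no model segment covers this clock count")

# Hypothetical model: 10 s offset, rate 1 + 1e-6, one diurnal
# thermal-like term, one segment valid for counts [0, 1e6).
segments = [(0.0, 1e6,
             [10.0, 1.0 + 1e-6],
             [(0.001, 2 * math.pi / 86400, 0.0)])]
```

The open question in the text — how many such terms are meaningful — is exactly what this sketch cannot answer by itself.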


Maybe, but not necessarily. And if you were to establish such a general form for converting timecount (clock) into "time", what would be a reasonable number of terms to limit it to?

Maybe I can find my way through by considering the discontinuity problem. At some level, one likes time to be "continuous" (i.e. continuous through some order of derivative). You'd also like to be able to compare two sets of data (derived from different clocks, but converted to a common time scale), so the clock-to-time transformation should make that possible at some level of granularity and continuity.

Likewise, you'd like to be able to schedule an event to occur at two places (with different underlying clocks) at some time in the future, so the transformation from time to "clock value when X needs to happen" should be possible. Again, discontinuities would raise problems (the daylight saving time problem of having two 1:45 AMs or no 2:30 AM).

So, it's not necessarily that one needs an arbitrary number of polynomial terms, but maybe a way to seamlessly blend segments with smaller numbers of terms (the cubic spline idea), and then some consistent method for describing it.
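One hypothetical way to realize that blending: over a short overlap window, cross-fade two low-order segment models with a smoothstep weight, which keeps the converted time continuous in value and slope at the joint (the names and numbers below are illustrative only):

```python
def smoothstep(x):
    """C1-continuous ramp from 0 to 1 over [0, 1]; clamped outside."""
    x = min(max(x, 0.0), 1.0)
    return x * x * (3.0 - 2.0 * x)

def blended_time(count, seg_a, seg_b, blend_start, blend_len):
    """Blend two linear clock models (offset, rate), i.e.
    time = offset + rate * count, across the window
    [blend_start, blend_start + blend_len], so the clock-to-time
    conversion has no step at the segment boundary."""
    ta = seg_a[0] + seg_a[1] * count
    tb = seg_b[0] + seg_b[1] * count
    w = smoothstep((count - blend_start) / blend_len)
    return (1.0 - w) * ta + w * tb

# Hypothetical segments: before the blend (offset=0, rate=1),
# after it (offset=0.5 ms, rate=1 - 1e-6); blend over counts 100..200.
old_seg = (0.0, 1.0)
new_seg = (5e-4, 1.0 - 1e-6)
```

A cubic spline proper would interpolate knot values directly; this cross-fade is the cheaper cousin that only needs the two neighboring low-order models.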





Determining two or three dozen Finagle constants doesn't sound like
anything close to "real-time" to me, and it all hinges crucially
on the mathematical model being expressive enough.

Exactly.. I think the uncertainty in those high order terms might be meaningless.

But maybe one could think in terms of a hierarchical scheme..

A high-level measurer/predictor that cranks out the "current low order polynomial model" based on whatever it decides to use (e.g. temperature, phase of the moon, rainfall).

The scope of the "time problem" is in defining how one converts from raw clock (counts of an oscillator) to time (with a scale and epoch), but not how one might come up with the parameters for that conversion (that's in the "clock modeling" domain).

Likewise.. a synchronization scheme (e.g. NTP) is really an estimation problem, based on measurements and observations, and producing the "transformation". The mechanics of how one comes up with the parameters is out of scope for the architecture, just that such a function can exist.





Something like the SOHO unthaw would be a really tough
challenge to model I think.

The opposite approach is to accept that clock-modelling is not the
standardized operation, but that representing the data to feed into
the clock-modelling software should be a standard format, to
facilitate model reuse.

Exactly. The data feeding into the clock modeling process should be "raw clock" and time (e.g. if you get time hacks from an outside source, then to match them against your clock you either need to convert clock into the external time scale, or convert the external time scale into your internal clock scale).

And (as you indicated below) a whole raft of other speculative inputs to the clock modeling (out of scope for the architecture..)

The output would be some revised description of how to convert "clock" into "time".
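A minimal sketch of that loop, assuming the simplest possible case: matched pairs of (raw count, external time hack) go in, and an ordinary least-squares line (offset + rate) comes out as the revised clock-to-time description. The data values are made up for illustration.

```python
def fit_clock_model(counts, times):
    """Least-squares fit of time = offset + rate * count from
    matched (raw clock count, external time hack) pairs."""
    n = len(counts)
    mc = sum(counts) / n
    mt = sum(times) / n
    var = sum((c - mc) ** 2 for c in counts)
    cov = sum((c - mc) * (t - mt) for c, t in zip(counts, times))
    rate = cov / var            # estimated clock rate vs. external scale
    offset = mt - rate * mc     # estimated offset at count zero
    return offset, rate

# Hypothetical time hacks from an outside source, matched against
# raw counts; this clock runs 100 ppm fast with a 5 s offset.
counts = [0.0, 10.0, 20.0, 30.0]
times = [5.0, 15.001, 25.002, 35.003]
offset, rate = fit_clock_model(counts, times)
```

In practice the estimator would weight the pairs by their uncertainties and fit higher-order terms, but the shape of the interface — measurements in, conversion parameters out — is the point.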




Some of that data is pretty obvious:
        Time series of clock offset estimates:
                When
                Which other clock
                Uncertainty of other clock
                Measured Offset
                Uncertainty of Measured Offset
        Craft orbital params
                XYZT, model gets to figure out what is nearby ?
                or
                Parametric model (in orbit about, ascending node...)
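The first of those — the time series of clock offset estimates — could be captured in a hypothetical standard record, one per measurement (field names and example values are mine, not from any existing format):

```python
from dataclasses import dataclass

@dataclass
class ClockOffsetMeasurement:
    """One entry in the time series of clock offset estimates."""
    when: float                # local raw clock count at measurement
    other_clock: str           # identifier of the reference clock
    other_uncertainty: float   # reference clock's stated uncertainty (s)
    offset: float              # measured offset, local minus reference (s)
    offset_uncertainty: float  # uncertainty of the measurement itself (s)

# Hypothetical measurement against a DSN ground station clock.
m = ClockOffsetMeasurement(when=123456.0,
                           other_clock="DSN-34m",
                           other_uncertainty=1e-8,
                           offset=-2.5e-6,
                           offset_uncertainty=5e-9)
```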

And then it gets nasty:
        Vehicle Thermal balance model
                a function of:
                Vehicle configuration
                Vehicle orientation
                Nearby objects (sun, planets, moon)
                Wavelength

        Clock model:
                a function of:
                vehicle temperature,
                bus voltage
                gravity
                magnetic fields from craft
                vibration (micrometeorites, think: Hipparcos)
                clock age
                random clock internal events

And the list probably goes on and on, until we come to individual
component failure effects.


I see most of those (if not all) being outside the scope of the time architecture. They're in the domain of the clock modeler, rather than the consumer of "time".



Missing in this picture is the organizational boundaries:
The mission data comes from one place, and the clock model
or clock parameters are probably delivered by the manufacturer
of the specific device?

That's on top of everything else, of course. But say we wanted the clock manufacturer (or the equipment mfr who's embedding the clock) to expose a programming interface that lets me get their current estimated parameters (which could either be a fixed constant, or maybe some fancy calibrated scheme). How should that API return the data?

The mission folks are fairly standardized: they work in "time" (usually seconds since some reference time: mission elapsed time or TAI or UTC or something like that)

Right now, there's a whole "time correlation" process which presumes very stupid spacecraft that report a raw timecount, and all the conversions to "time" are done on the ground, in a way that is unique for each spacecraft or mission (yes, there are commonalities, because hardware designs don't change all that rapidly). Things are referenced to a particular bit time in the telecommand or telemetry messages, with prior measurements on the ground of the latency from "RF signal received at antenna" to "spacecraft computer latches internal time counter".

The good folks at the Deep Space Network then figure out the difference between "time signal left spacecraft" and "earth received time" (or vice versa: Earth Transmitted Time and "time signal arrived at spacecraft").

From that, we can schedule an event to occur at some specific time (e.g. you want your trajectory correction maneuver to start the entry, descent, and landing process at Mars at a precise time), and the spacecraft, historically, just does that by scheduling it in "spacecraft RTC value". That is, all the hard stuff is done on the ground, and the spacecraft is pretty stupid.

But as we move towards constellations of spacecraft with LONG light time to earth, that whole time correlation process needs to be done autonomously. So the process of converting "local count" to "time in some universally agreed scale" and back has to be done locally.

And since I build radios (and software for radios), and radios are where we get all that timing information, it makes sense to incorporate it into the radio's infrastructure.

A similar model already exists for GPS. A GPS receiver has to turn "internal clock" along with "received signal phase/time" into "GPS time". GPS is a substantially more complex problem (i.e. the fairly complex time computation in GPS has to roll in the platform dynamics with the GPS satellite dynamics, with the pseudo range measurements, etc.)

I'm looking for a simpler thing: A simple consistent way to describe the transformation from timecount (clock) to something like UTC.






How many of these parameters you need to include will of course
depend on the exact vehicle and mission requirements.  There is a
heck of a difference between a commercial geo-stationary comms
satellite and Gravity Probe B and Gaia.


Exactly.
So what I'd like to do is say something along the lines of:

Conversion from clock to time shall be done with an N-order polynomial, with a defined start and stop time between which that conversion is effective.
The coefficients of the polynomial shall be specified in a TBD format.
A (platform specific) process shall define the values of the polynomial terms and the start/stop times. The platform provider shall define N, which shall not be less than TBD-Min nor greater than TBD-Max. All platforms shall accept time conversion specifications with a number of terms between TBD-Min and TBD-Max.
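In code, that requirement might look like the following sketch, with placeholder values standing in for the TBD-Min and TBD-Max the text leaves open:

```python
# Placeholder bounds standing in for TBD-Min / TBD-Max in the text.
N_MIN, N_MAX = 1, 6

class TimeConversionSpec:
    """Clock-to-time conversion: an N-order polynomial plus the
    start/stop clock counts between which it is effective."""
    def __init__(self, coeffs, start, stop):
        n = len(coeffs) - 1  # polynomial order
        if not (N_MIN <= n <= N_MAX):
            raise ValueError(f"order {n} outside [{N_MIN}, {N_MAX}]")
        if not start < stop:
            raise ValueError("start must precede stop")
        self.coeffs, self.start, self.stop = coeffs, start, stop

    def convert(self, count):
        """Apply the polynomial; valid only inside [start, stop)."""
        if not (self.start <= count < self.stop):
            raise ValueError("count outside effective interval")
        dt = count - self.start
        return sum(c * dt**i for i, c in enumerate(self.coeffs))

# Hypothetical spec: zero offset, rate 1 + 3e-7, one day of validity
# at a 10 MHz count rate.
spec = TimeConversionSpec([0.0, 1.0 + 3e-7], start=0, stop=86400 * 10**7)
```

The serialization of the coefficients (the "TBD format") stays out of this sketch on purpose, since that's exactly the part the standard would have to pin down.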







One can always say "put it in XML and hope for the best" but
that's not much of a standard, is it ?

A pox on "put it in XML".. As far as I'm concerned that's no better than saying "put it in an ASCII text file". XML just specifies the syntax for a parameter/value pair, rather than saying "Parameter1=value" on each line in a file. And yes, XML does provide a convenient way to do hierarchies.




                


_______________________________________________
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
