On 2/9/12 8:42 AM, Magnus Danielson wrote:
On 02/09/2012 04:10 PM, Jim Lux wrote:
Interesting point you make here. The rising ADEV at 100-1000 second-ish
tau in a system that should be better is a classic sign (at least around
here) that temperature effects are showing up.
I regularly see the building AC at 900-1000 s for instance.
However, how could one remove that effect from the raw data? And isn't
the measurement really of the "system", which includes the
environmental effects?
ADEV and friends are there to handle random sources, whereas this is a
systematic source.
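To make the point concrete, here is a minimal sketch (plain Python; the function, amplitude, and modulation period are my own, not from the thread) of a non-overlapping ADEV estimate applied to a purely systematic input. A noiseless oscillator with a sinusoidal environmental modulation, like the ~1000 s building-AC cycle mentioned above, still shows a bump in ADEV near half the modulation period:

```python
import math

def adev(y, tau0, m):
    """Non-overlapping Allan deviation from fractional-frequency
    samples y taken at interval tau0, for averaging factor m."""
    # Average the data in blocks of m samples: y_bar at tau = m*tau0.
    n = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    # AVAR(tau) = (1/2) * < (ybar[k+1] - ybar[k])^2 >
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(n - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# Noiseless oscillator, pure sinusoidal frequency modulation with a
# 1000 s period (an invented stand-in for an environmental effect).
period = 1000.0
tau0 = 1.0
y = [1e-11 * math.sin(2 * math.pi * k * tau0 / period) for k in range(10000)]

# The systematic modulation dominates ADEV near tau ~ period/2,
# even though there is no random noise at all in the data.
print(adev(y, tau0, 1), adev(y, tau0, 500))
```

The bump is why a rising ADEV at 100-1000 s tau is read as a temperature signature: ADEV does not distinguish the systematic modulation from genuine random frequency noise at those taus.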
But I would contend that unless you can externally measure that
disturbance and remove it, it's a fundamental part of the frequency
variation. Since you don't have control over it, or necessarily have
accurate information about it, it's not something that could be
"calibrated out".
Say, for instance, I had an oscillator that was mounted in such a way
that it rotated slowly once every hour. There would be a periodic
variation in frequency, which could be accurately modeled and removed,
so that wouldn't necessarily count in ADEV.
But temperature is random (although band limited), and so, for
measurements over the 1000 second time scale, it's impossible to tell if
the change was due to temperature or due to the underlying oscillator.
I suppose you could run your widget in a temperature controlled chamber,
get those numbers. Then run it in a less controlled benchtop
environment, and get those numbers, and claim that the difference is
environmental.
But at some point, what you're interested in is the performance of the
system in the environment in which it will be used. If you need good
ADEV performance at the 1000 second tau, then you need an oven, a vacuum
bottle, or a better design that's less environment sensitive.
You could also build active predictors of the systematic effect to
lower it.
Yes. That's basically no different than controlling the environment.
If the transfer function of environment to output is well known, and you
know the environment, you could legitimately "remove" it from the
measured output.
By doing proper logging of key environmental effects, you can build a
model for how the dominant variations systematically affect the signal;
removing that from the measurements gives you a better random jitter
measurement.
Ah, but there's the rub. Can you actually do that with acceptable
performance?
I know that you can measure the temperature of a crystal and fairly
accurately calibrate out the frequency change due to temperature (to the
point where frequency can be used to measure temperature). So now, your
ADEV on the "calibrated" output would depend on the temperature
measurement accuracy. Essentially what you have done is reduced the
tempco of the system.
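A sketch of that calibration idea (plain Python; the tempco value, noise level, and variable names are invented for illustration, not measured): fit a linear frequency-vs-temperature model from logged data and subtract it. The residual is then limited by the noise floor and the temperature measurement accuracy, which is the "reduced tempco" in practice:

```python
import math
import random

random.seed(1)

# Simulated log: temperature excursions (K) on a ~900 s cycle, and
# fractional frequency with an assumed linear tempco of 1e-10/K plus
# white noise at 1e-12 (both numbers are illustrative).
tempco = 1e-10
temps = [2.0 * math.sin(2 * math.pi * k / 900.0) for k in range(5000)]
noise = [random.gauss(0.0, 1e-12) for _ in range(5000)]
y = [tempco * T + n for T, n in zip(temps, noise)]

# Least-squares estimate of the tempco from the logged data
# (zero-mean temperature assumed): k_hat = sum(T*y) / sum(T*T)
k_hat = sum(T * v for T, v in zip(temps, y)) / sum(T * T for T in temps)

# "Calibrated" output: remove the modeled systematic part.
resid = [v - k_hat * T for T, v in zip(temps, y)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# The residual scatter drops to near the noise floor; what remains is
# set by measurement noise and model error, not the raw tempco.
print(rms(y), rms(resid))
```

An ADEV computed on `resid` instead of `y` would no longer show the 900 s bump, at the cost of trusting the temperature log and the linearity assumption.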
Frequency drift of an oscillator is one such systematic effect. If it
were linear, processing it with ADEV would cause a sqrt(2) scale error.
Also, it would not give you a good prediction, since aging usually
follows an A*ln(B*t+1) curve, which doesn't match that assumption, so
you will only get first-degree compensation of it with HDEV-style
measures.
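To see why first-degree compensation falls short (plain Python; the A and B values are invented for illustration): the slope of a logarithmic aging curve keeps decreasing, so the best linear drift fit, which is all a first-degree correction removes, leaves a substantial residual:

```python
import math

# Assumed logarithmic aging of fractional frequency: y(t) = A*ln(B*t + 1)
# with illustrative constants A and B.
A, B = 1e-10, 1e-3
t = [float(k) for k in range(1, 100001, 10)]
y = [A * math.log(B * tk + 1.0) for tk in t]

# Best linear (first-degree) drift fit y ~ a + b*t by least squares:
n = len(t)
tm = sum(t) / n
ym = sum(y) / n
b = sum((tk - tm) * (yk - ym) for tk, yk in zip(t, y)) \
    / sum((tk - tm) ** 2 for tk in t)
a = ym - b * tm

# Residual after linear drift removal: nonzero because the log curve's
# slope A*B/(B*t+1) keeps decreasing, so no single drift rate fits.
resid = [yk - (a + b * tk) for tk, yk in zip(t, y)]
print(max(abs(r) for r in resid), max(abs(v) for v in y))
```

The leftover curvature is what still shows up after HDEV-style drift rejection, which is why fitting the log model itself gives a better prediction than any linear drift estimate.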
Yes, and I think that for variations that are easily and *accurately*
modeled this is appropriate, however, your next sentence says it all.
Temperature variations are tricky, to say the least.
When you have random and systematic effects, separate them and estimate
them separately and then build a combined prediction from these models.
Random jitter and deterministic jitter are two such aspects. The same
applies at longer taus as well.
Cheers,
Magnus
_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.