On 02/10/2012 12:51 AM, Jim Lux wrote:
On 2/9/12 8:42 AM, Magnus Danielson wrote:
On 02/09/2012 04:10 PM, Jim Lux wrote:


Interesting point you make here. The rising ADEV at 100-1000 second-ish
tau in a system that should be better is a classic sign (at least around
here) that temperature effects are showing up.

I regularly see the building AC at 900-1000 s for instance.

However, how could one remove that effect from the raw data? And isn't it
the measurement of the "system", which includes the environmental
effects?

ADEV and friends are there to handle random sources, whereas this is a
systematic source.
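For concreteness, a minimal sketch of what ADEV actually computes from phase data, here fed with white phase noise, the kind of purely random source it is designed for (the data and parameters are illustrative, not from the thread):

```python
import numpy as np

def adev(x, tau0, m):
    """Overlapping Allan deviation from phase data x (seconds),
    sampled every tau0 seconds, at averaging factor m."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second difference at lag m
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2))

# White phase noise: a purely random source, which is what ADEV is
# built to characterize (it falls off roughly as 1/tau here).
rng = np.random.default_rng(0)
x = 1e-9 * rng.standard_normal(10000)   # phase samples in seconds
print(adev(x, 1.0, 1), adev(x, 1.0, 10))
```

A deterministic disturbance, such as a slow temperature cycle, goes straight through this statistic and shows up as a bump that the noise model does not explain.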

But I would contend that unless you can externally measure that
disturbance and remove it, it's a fundamental part of the frequency
variation. Since you don't have control over it, or necessarily have
accurate information about it, it's not something that could be
"calibrated out".

There is a big difference between what is inherent in the device and what environment we put it in. A temperature dependence is a systematic effect of the device.

Say, for instance, I had an oscillator that was mounted in such a way
that it rotated slowly once every hour. There would be a periodic
variation in frequency, which could be accurately modeled and removed,
so that wouldn't necessarily count in ADEV.

That is indeed a systematic effect in that environment of that device.
Once you have characterized the effect, you can then predict the behaviour to some precision if you now turn it quicker, more often, or not at all. The random effects would then be re-introduced on top of that predicted systematic effect and you would see the aggregate effect.

But temperature is random (although band limited), and so, for
measurements over the 1000 second time scale, it's impossible to tell if
the change was due to temperature or due to the underlying oscillator.

Temperature is not necessarily random; I often find systematic, albeit somewhat irregular, patterns in my temperature readings, and also in frequency or phase plots when I measure over longer times on less-than-ideal oscillators. They dominate over any noise contributions, and for those taus I care less about the noise and more about the systematic effects. This is one of the reasons we use MTIE in addition to TDEV for telecommunication measurements; it makes good engineering sense to use it instead.
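MTIE captures exactly this: the worst peak-to-peak time error over sliding observation windows, which a slow systematic dominates even when the random noise is small. A minimal sketch with synthetic data (names and numbers are illustrative):

```python
import numpy as np

def mtie(x, n):
    """Maximum Time Interval Error: worst peak-to-peak phase excursion
    over every sliding window of n+1 samples of phase data x."""
    x = np.asarray(x, dtype=float)
    worst = 0.0
    for i in range(len(x) - n):
        w = x[i:i + n + 1]
        worst = max(worst, w.max() - w.min())
    return worst

# A slow temperature-like ramp dominates MTIE even when the random
# noise is small -- the systematic-over-noise situation described above.
rng = np.random.default_rng(1)
x = 1e-10 * np.arange(1000) + 1e-9 * rng.standard_normal(1000)
print(mtie(x, 10), mtie(x, 100))
```

The ramp's contribution grows with the window length, whereas the noise contribution quickly saturates, so MTIE at long observation intervals is a direct read on the systematic wander.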

A couple of long-term systematic effects on temperature relate to the sun. The diurnal wander, i.e. the 24 h period wander, is due to the rotation of the Earth and the effect of the sun. This is not random, but rather a very regular systematic effect.

Another similar effect is the seasonal effect: over a year the temperature shifts along with the orientation of the Earth's axis in relation to the sun. This is also a systematic effect.

As you put equipment into a building or other temperature-controlled environment, the bang-bang regulation method is often seen, and the cycling of temperature up and down is not very steady, but still regular enough, and again a systematic effect on the oscillator.

If temperature were random, we would see -80 C and +100 C more often in our back yard than we do.

To some degree we can control temperature, we can predict temperature and deal with it. We can handle it as an engineering concept and do steering loops etc. It's pretty systematic to me.
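Since the diurnal component has a known period, it can be fitted and removed by plain least squares, one way of "handling it as an engineering concept". A sketch on synthetic data (all values illustrative):

```python
import numpy as np

# Synthetic fractional-frequency data with a 24 h diurnal component
# plus white noise, sampled every 60 s for three days.
rng = np.random.default_rng(2)
t = np.arange(0.0, 3 * 86400.0, 60.0)
y = 2e-11 * np.sin(2 * np.pi * t / 86400.0) + 5e-12 * rng.standard_normal(t.size)

# Least-squares fit of offset + sin + cos at the known 24 h period,
# then subtraction of the fitted systematic.
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / 86400.0),
                     np.cos(2 * np.pi * t / 86400.0)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coef

print(np.std(y), np.std(residual))   # spread shrinks once the
                                     # diurnal systematic is removed
```

What remains in the residual is closer to the random part of the signal, which is the part ADEV is suited to describe.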

I suppose you could run your widget in a temperature controlled chamber,
get those numbers. Then run it in a less controlled benchtop
environment, and get those numbers, and claim that the difference is
environmental.

But at some point, what you're interested in is the performance of the
system in the environment in which it will be used. If you need good
ADEV performance at 1000 second tau, then you need an oven, a vacuum
bottle, or a better design that's less environment-sensitive.

You could also build active predictors of the systematic effect to
lower its contribution.

Yes. That's basically no different than controlling the environment. If
the transfer function of environment to output is well known, and you
know the environment, you could legitimately "remove" it from the
measured output.

Model wise it's not a huge difference, but practically it may be.


By doing proper logging of key environmental effects, building a model
for how the dominant variations systematically affect the signal, and
then removing that from the measurements, you get a better random
jitter measurement.

Ah, but there's the rub. Can you actually do that with acceptable
performance?

I know that you can measure the temperature of a crystal and fairly
accurately calibrate out the frequency change due to temperature (to the
point where frequency can be used to measure temperature). So now, your
ADEV on the "calibrated" output would depend on the temperature
measurement accuracy. Essentially what you have done is reduced the
tempco of the system.
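That point can be shown numerically: fit frequency against the *measured* temperature and subtract the fit; the residual is then limited by the temperature sensor's accuracy, i.e. the effective tempco has been reduced rather than eliminated. A hedged sketch with synthetic data (the quadratic tempco and all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
true_temp = 25.0 + 3.0 * np.sin(np.linspace(0.0, 20.0, 2000))   # deg C
meas_temp = true_temp + 0.1 * rng.standard_normal(2000)         # sensor noise
y = (1e-9 * (true_temp - 25.0) ** 2                             # quadratic tempco
     + 2e-10 * (true_temp - 25.0)
     + 1e-10 * rng.standard_normal(2000))                       # oscillator noise

coef = np.polyfit(meas_temp, y, 2)        # fit quadratic tempco model
y_cal = y - np.polyval(coef, meas_temp)   # "calibrated" output

print(np.std(y), np.std(y_cal))           # spread shrinks after calibration
```

The calibrated residual is set by the sensor noise propagated through the tempco slope, plus the oscillator's own noise, which is exactly why the ADEV of the calibrated output depends on the temperature measurement accuracy.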

You never get your model and measurement perfect. Not even your ADEV. You may get good enough confidence for what you need to do.

I trust that you ask for a certain temperature spec on your oscillators, and that you are expected to make the environment suitable enough, etc.

Really, what I am saying is that trying to lump everything into a single number is just not very useful when doing engineering, especially with these devices we use called crystal oscillators. There are many subtleties which one needs to learn. Tossing everything into a single ADEV diagram isn't going to help the learning curve.

Don't get me wrong, ADEV is a great tool, it's just not a great tool for all the effects we see for an oscillator.


Frequency drift of an oscillator is one such systematic effect. If it
were linear, processing it with ADEV would cause a sqrt(2) scale error.
Also, it would not give you a good prediction, since drift usually
follows an A*ln(B*t+1) curve, which doesn't match that assumption, so
you will only get first-degree compensation of it with HDEV-style
measures.
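The sqrt(2) factor for linear drift can be checked numerically: a pure drift y(t) = c*t produces ADEV(tau) = c*tau/sqrt(2), not the naive c*tau, because ADEV takes first differences of adjacent tau-averages. A minimal sketch (names and numbers illustrative):

```python
import numpy as np

def adev_from_freq(y, m):
    """Non-overlapping Allan deviation from fractional-frequency
    samples y, at averaging factor m (in units of the sample interval)."""
    yb = np.mean(y[:len(y) // m * m].reshape(-1, m), axis=1)  # tau-averages
    return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

c = 1e-12                      # linear drift rate, per second
tau0 = 1.0                     # sample interval, s
t = np.arange(100000) * tau0
y = c * t                      # pure linear frequency drift, no noise
m = 100
tau = m * tau0
print(adev_from_freq(y, m), c * tau / np.sqrt(2))   # the two agree
```

Adjacent tau-averages of a linear drift differ by exactly c*tau, and the 1/2 inside the Allan variance turns that into c*tau/sqrt(2), hence the scale error when the drift is read off an ADEV plot directly.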

Yes, and I think that for variations that are easily and *accurately*
modeled this is appropriate, however, your next sentence says it all.

Temperature variations are tricky, to say the least.

When you have random and systematic effects, separate them and estimate
them separately and then build a combined prediction from these models.

Random jitter and deterministic jitter are two such aspects. Same
applies at longer taus as well.

True, but the underlying story is really that you seem to put too much confidence in ADEV and ADEV alone. ADEV is a great tool, especially for handling some of the 1/f noise variants which normal engineering RMS can't handle. It's a specialized branch to meet special needs. But it remains only one of several tools. Learn what the tool is good at and what it is not so good at, then add another tool with another set of strengths and weaknesses. Then add tools until you get a good enough picture of what the hell the device does, so you can make meaningful predictions about its performance in the environments you are about to put it in, so your end task achieves its goal, often on a time and budget constraint, and often with a device you didn't want to use for this task.

Cheers,
Magnus

_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
