On 10/5/13 8:30 AM, Magnus Danielson wrote:
Hi Jim,

On 10/05/2013 04:52 PM, Jim Lux wrote:
On 10/5/13 3:44 AM, Magnus Danielson wrote:
And I want to do it quickly on a slow processor.
... and a simple high-pass filter won't do it for you?

I suppose so... I'll have to fool with that.
A high-pass filter would definitely take out the DC and linear terms,
but I'm not so sure about the exponential transient.

(OTOH, as I look at more of the data, maybe I just lop off the first
chunk of data with some empirical knowledge)
The transient will make it through the filter, but if you can discard the
first batch of samples, a simple one- or two-pole high-pass filter will
reduce the linear trend to essentially nothing for very little processing
cost.
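As an illustration of the one/two-pole idea (a sketch only; the alpha value and ramp data are made up, not from the thread): a single-pole IIR high-pass turns a linear ramp into a constant, and a second pole in cascade takes that constant out as well.

```python
import numpy as np

def one_pole_highpass(x, alpha):
    """Single-pole IIR high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
    One multiply and two adds per sample -- cheap on a slow processor."""
    y = np.empty(len(x))
    y[0] = 0.0  # assume no pre-history before the first sample
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

# A pure linear trend: one pole leaves a constant, two poles leave ~nothing.
ramp = np.arange(2000, dtype=float)
once = one_pole_highpass(ramp, 0.95)     # settles to alpha/(1-alpha) = 19
twice = one_pole_highpass(once, 0.95)    # settles to essentially zero
```

Note the settling: the filter's own transient is why the first bunch of samples has to be wasted.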
Can you store all samples in memory and iterate over them?

Yes... it can be acausal.
OK. Good.
It's sort of like looking at frequency or ADEV data, removing the
aging, and trying to see the fluctuations due to room temperature
variation.

Ah. Then after the initial transient, all you want to do is estimate the
drift term and remove it from your data. That's not too hard to do with
a simple least-squares algorithm.
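A minimal sketch of that least-squares drift removal (the drift rate and noise level here are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(5000, dtype=float)             # sample index (synthetic data)
y = 2e-3 * t + rng.normal(0.0, 0.1, t.size)  # linear drift + white noise

# Least-squares fit of offset + drift, then subtract the fitted model.
slope, offset = np.polyfit(t, y, 1)          # polyfit returns highest degree first
residual = y - (offset + slope * t)          # drift-free data
```

With only a first-degree polynomial this is a closed-form fit, so it stays cheap even on a slow processor.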

yes..

 I've also made variants of ADEV
processing that accumulate values such that the linear drift can be
taken out of the ADEV without re-iterating, but in that case the
Hadamard does it too. When doing a least-squares fit you get frequency and
drift parameters and can then get a reduced sample set for ADEV and
friends to chew on.
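One way to realize that reduced-sample-set idea (a sketch, not the accumulator variant described above; the drift rate, noise model, and tau are assumptions for illustration): fit frequency offset and drift by least squares, then hand the residuals to a plain non-overlapping ADEV.

```python
import numpy as np

def allan_dev(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples,
    averaged over bins of m samples (i.e. tau = m * tau0)."""
    nb = len(y) // m
    binned = y[: nb * m].reshape(nb, m).mean(axis=1)
    d = np.diff(binned)
    return np.sqrt(0.5 * np.mean(d * d))

rng = np.random.default_rng(7)
t = np.arange(10000, dtype=float)
y = 5e-4 * t + rng.normal(0.0, 1.0, t.size)  # drift + white FM (made-up)

slope, offset = np.polyfit(t, y, 1)          # frequency and drift parameters
y_reduced = y - (offset + slope * t)         # reduced sample set

raw = allan_dev(y, m=1000)                   # drift dominates at long tau
reduced = allan_dev(y_reduced, m=1000)       # noise floor only
```

At long tau the drift term swamps the raw ADEV, while the detrended set shows only the noise — which is exactly why the drift has to come out first.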

You only need to estimate the exponential decay if your samples are
precious and you need to recover those early samples.


Exactly...

It's a fairly simple model.

I'll have to look at a bunch of data sets and decide if the exponential part is something I can just chop off. In the previous system, the time in the exponential part was a significant fraction of the total data set, but in this one it seems to be a lot faster. Eyeballing it, out of the 2000 samples in the plot, the transient has died out by about sample 200. Since the typical data epoch is tens of thousands of samples, losing the first 200 is no big deal.
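One cheap way to automate that chop-by-eye (a sketch with made-up numbers — the time constant, the 5000-sample tail window, and the 6-sigma threshold are all assumptions, not anything measured from the actual data): let the tail of the record define "settled," then drop everything up to the last sample that strays too far from the tail statistics.

```python
import numpy as np

rng = np.random.default_rng(3)
n = np.arange(20000, dtype=float)
# Synthetic record: exponential warm-up transient (tau ~ 40 samples) + noise
x = 4.0 * np.exp(-n / 40.0) + rng.normal(0.0, 0.05, n.size)

# The quiet tail of the epoch defines the settled mean and scatter.
tail = x[-5000:]
outliers = np.flatnonzero(np.abs(x - tail.mean()) > 6.0 * tail.std())
cut = outliers[-1] + 1 if outliers.size else 0

x_clean = x[cut:]   # transient chopped off; the rest goes to analysis
```

Since the epochs run to tens of thousands of samples, being sloppy about exactly where the cut lands costs almost nothing.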



That's why I was loath to leap in and turn the full power of Matlab's nonlinear fits loose on it.


_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.