May I suggest turning the 24-hour reset period into a parameter?
On Fri, Jan 13, 2017 at 8:45 PM, Tom Van Baak wrote:
> Mark, Ole,
>
> Yes, averaging can both enhance precision but also destroy information. In
> many cases too much data is a bad thing. [...]
I think their advice was to limit the ADEV calculation for a given tau to 300
bins. The standard error on estimating the standard deviation is roughly ±5%
for 200 samples, so, loosely speaking, in the neighborhood of 100-300 bins the
resulting ADEV will have an rms uncertainty of roughly 5%. So limiting
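That ±5% figure matches the usual rule of thumb that a deviation estimated from N roughly independent samples has an rms fractional uncertainty of about 1/sqrt(2(N-1)). A quick sketch (Python, illustrative only; the independence assumption holds for white noise, less so for flicker or random walk):

```python
import math

def adev_fractional_uncertainty(n):
    """Rule-of-thumb rms fractional uncertainty of a deviation
    estimate built from n roughly independent samples:
    1 / sqrt(2 * (n - 1))."""
    return 1.0 / math.sqrt(2 * (n - 1))

for n in (100, 200, 300):
    pct = 100 * adev_fractional_uncertainty(n)
    print(f"{n:3d} samples -> ~{pct:.1f}% rms uncertainty")
# 100 samples -> ~7.1%, 200 -> ~5.0%, 300 -> ~4.1%
```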
Mark, Ole,
Yes, averaging can both enhance precision but also destroy information. In many
cases too much data is a bad thing. The solution is to add another dimension to
the plot. Stable32 does this with DAVAR (dynamic Allan variance). TimeLab has a
multi-"trace" feature. Both of these break [...]
Hi
That’s the way I read what they are saying. More or less: keep the number of
samples above 100 but below 300.
Bob
> On Jan 13, 2017, at 12:30 PM, Ole Petter Rønningen
> wrote:
>
> That IS interesting.. [...]
That IS interesting. It reads to me that the advice is to keep a "moving 300
pt ADEV" when continuously monitoring a (pair of) frequency sources at, e.g., a
VLBI site - the reason for limiting it to 300 pts being that much more than
that is likely to average out potential issues.
Does that make sense?
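One way to read that "moving 300 pt ADEV" idea: recompute ADEV over a sliding block of phase samples, so a developing systematic shows up as a trend across blocks instead of being averaged into a single number. A minimal sketch (Python; the overlapping estimator and the window/step values here are my own illustrative choices, not anything specified in the paper):

```python
import math

def adev(phase, tau0, m=1):
    """Overlapping Allan deviation from phase samples (seconds) at
    averaging factor m, where tau0 is the basic sample interval."""
    tau = m * tau0
    # Second differences of phase give frequency fluctuations.
    d = [phase[i + 2 * m] - 2 * phase[i + m] + phase[i]
         for i in range(len(phase) - 2 * m)]
    return math.sqrt(sum(v * v for v in d) / (2 * len(d) * tau * tau))

def moving_adev(phase, tau0, window=300, step=50, m=1):
    """ADEV over successive `window`-sample blocks advanced by `step`,
    so a slow systematic appears as a change along the block index
    rather than being buried in one giant data block."""
    return [adev(phase[s:s + window], tau0, m)
            for s in range(0, len(phase) - window + 1, step)]
```

Each ~300-sample block keeps the per-point statistical uncertainty in the few-percent range while still letting slow changes show through.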
You are certainly justified in being cautious about using only an xDEV for
state of health. I don't know what GPS does, for example, to mark SVs as
healthy or unhealthy.
On Fri, Jan 13, 2017 at 1:11 PM, Bob Camp wrote:
> Hi
>
> I do agree with their point that systematics will get [...]
Hi
I do agree with their point that systematics will get buried in giant data
blocks.
What I’m not quite as sure of is the utility of even 300-sample blocks for
spotting systematic issues.
Bob
> On Jan 13, 2017, at 1:08 PM, Scott Stobbe wrote:
>
> I think you might be [...]
I think you might be overthinking their point: if you plan to use an xDEV as a
measure of state of health, don't use years' worth of data. Otherwise it could
be days before the xDEV visibly changes.
On Fri, Jan 13, 2017 at 11:04 AM, Bob Camp wrote:
> Hi
>
> There’s an [...]
Hi
There’s an interesting comment buried down in that paper about limiting ADEV to
< 300 samples per point. Their objective is apparently to better highlight
“systematic errors”. I certainly agree that big datasets will swamp this sort
of thing. I’m not quite
sure that I’d recommend ADEV to