Date: Thu, 27 Apr 2000 17:17:05 -0400
From: Rich Ulrich <[EMAIL PROTECTED]>

> On Wed, 26 Apr 2000 20:43:02 -0400, Greg Heath 
> <[EMAIL PROTECTED]> wrote:
> 
> > Can you help or lead me to the appropriate reference?
> > 
> > I have 526 radar measurements evenly sampled over 26.25 sec (i.e., pulse 
> > repetition frequency = 20 points per second).
> > 
> > mean  =  0.0
> > stdv  =  1.2
> > t0    =  1    sec (1/e decorrelation time from the autocorrelation function)
> > 
> > I want to test the null hypothesis that these correlated measurements 
> > could have been drawn from a zero-mean Gaussian distribution. 
> 
> The usual, simple  alternative would say, mean not-zero.  Since your
> mean is observed to be zero, I guess you can accept that simple null.

I should have mentioned that these are residuals from a complicated 
detrending process, done by others, that forces the mean to zero. 
I have no reason to believe that the detrending process was faulty.
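
For what it's worth, here is a rough sketch (Python/numpy, with 
placeholder AR(1) noise standing in for the actual residuals) of how 
the 1/e decorrelation time and a crude effective sample size could be 
read off the sample autocorrelation function. With t0 = 1 sec at 20 
points/sec, that works out to only about 526/20 ~ 26 roughly 
independent points, which is why I doubt I have enough independent 
measurements:

  import numpy as np

  rng = np.random.default_rng(0)
  fs = 20.0        # pulse repetition frequency, points per second
  n = 526          # number of radar measurements
  t0_true = 1.0    # decorrelation time built into the fake data, sec

  # Placeholder AR(1) noise in place of the real residuals:
  # rho(k) = exp(-k / (t0*fs)), so rho drops to 1/e at lag t0*fs = 20.
  phi = np.exp(-1.0 / (t0_true * fs))
  x = np.zeros(n)
  for i in range(1, n):
      x[i] = phi * x[i - 1] + rng.standard_normal()

  def acf(y):
      # sample autocorrelation function, normalized so acf[0] = 1
      y = y - y.mean()
      c = np.correlate(y, y, mode='full')[len(y) - 1:]
      return c / c[0]

  rho = acf(x)
  lag_1e = int(np.argmax(rho < 1.0 / np.e))  # first lag where rho < 1/e
  t0_hat = lag_1e / fs                       # estimated decorrelation time, sec
  n_eff = n / (t0_hat * fs)                  # crude effective sample size (~26)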

> > However I don't believe I have enough independent measurements.
> > 
> > Will bootstrapping help? i.e., 
>  < snip, example that is probably irrelevant >
> 
> Well, bootstrapping is a real problem when you don't know what to do
> with that serial correlation.  And you don't, do you?

You hit the nail on the head.
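
One workaround I'm aware of is a moving-block bootstrap: resample 
contiguous blocks whose length is comparable to the decorrelation time 
(about 20 samples here), so that the short-range correlation is 
preserved inside each block. A minimal sketch, assuming numpy and a 
residual array x (the 526 measurements):

  import numpy as np

  def block_bootstrap_means(x, block_len, n_boot=1000, seed=None):
      # Moving-block bootstrap of the sample mean for serially
      # correlated data: blocks of length block_len are drawn with
      # replacement and concatenated back to the original length.
      rng = np.random.default_rng(seed)
      x = np.asarray(x, dtype=float)
      n = len(x)
      n_blocks = int(np.ceil(n / block_len))
      starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
      means = np.empty(n_boot)
      for b in range(n_boot):
          pieces = [x[s:s + block_len] for s in starts[b]]
          means[b] = np.concatenate(pieces)[:n].mean()
      return means

  # Usage (block_len ~ decorrelation time * fs = 20 for these data):
  # means = block_bootstrap_means(x, block_len=20, n_boot=2000)
  # ci = np.percentile(means, [2.5, 97.5])  # bootstrap CI for the mean

The block length is a judgment call; anything from one to a few 
decorrelation times seems to be the usual advice.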

> So, why do you care about normality?  

I'm trying to validate a simulation that rests on assumptions and 
approximations about a physics-based phenomenon that is not completely 
understood. A measurement program has been initiated, but it will be 
years (maybe never) before we have enough data to support decisions 
that we have to make in the near term (months). So we have to rely on 
a few measurements and volumes of simulations.

My simulation currently assumes that the residuals are Gaussian. If 
this is a bad assumption, I need to know ASAP to prevent higher-level 
decision makers from making some very costly mistakes.

> If you toss the numbers up and
> do a test as if they were independent, what do you get? -- that gives
> you one limit.  That is, if they look Normal *despite*  the odd values
> that you might have because of serial correlation, then you can accept
> this set as robustly Normal, in regards to whatever you are testing.

Agreed, but results on three sets of data with different variances but 
similar decorrelation times show that the K-S test probabilities vary 
considerably. In a couple of years I will have enough measurements to 
make a more definitive pronouncement. At this point, however, I'm just 
worried about whether the current K-S test probabilities (Numerical 
Recipes algorithm) are valid.

In addition to using correlated measurements, I'm using the variance 
estimated from the same data. Numerical Recipes doesn't say anything 
about how to reduce the degrees of freedom when the parameters of the 
distribution function are estimated rather than known in advance.
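
The usual fix I've seen for the estimated-parameter problem is a 
Lilliefors-type correction: recompute the null distribution of the K-S 
statistic by Monte Carlo, re-estimating the mean and standard 
deviation from each simulated sample. A rough sketch (Python/scipy; 
this addresses only the estimated parameters, not the serial 
correlation):

  import numpy as np
  from scipy import stats

  def lilliefors_pvalue(x, n_sim=2000, seed=None):
      # Monte Carlo p-value for the K-S test of normality when the
      # mean and standard deviation are estimated from the data
      # itself.  Serial correlation is NOT accounted for here.
      rng = np.random.default_rng(seed)
      x = np.asarray(x, dtype=float)
      n = len(x)
      d_obs = stats.kstest(x, 'norm',
                           args=(x.mean(), x.std(ddof=1))).statistic
      d_null = np.empty(n_sim)
      for i in range(n_sim):
          y = rng.standard_normal(n)
          d_null[i] = stats.kstest(y, 'norm',
                                   args=(y.mean(), y.std(ddof=1))).statistic
      return float(np.mean(d_null >= d_obs))  # calibrated p-value

The calibrated p-value is generally smaller than the one the standard 
K-S probability function reports for the same statistic, since 
estimating the parameters makes the fit look better than it really is.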

Thanks,

Greg

Gregory E. Heath     [EMAIL PROTECTED]      The views expressed here are
M.I.T. Lincoln Lab   (781) 981-2815        not necessarily shared by
Lexington, MA        (781) 981-0908(FAX)   M.I.T./LL or its sponsors
02420-9185, USA

