Thanks Teal,

>Not actually true; the degree of difference between groups/cases/whatever that 
>you'll need to get a statistically significant result (be it for p=.05 or 
>p=.01 or whatever) will depend on the sample size, and on the characteristics 
>of the sample and the population you're drawing the sample from. There is in 
>fact a whole sub-topic of stats that is about working out what size sample you 
>need for a given situation in order to be able to plausibly see any real 
>differences between groups, should there be a real difference to be found.

When comparing accident rates, you are not sampling per se, but estimating the 
occurrence of an event in a selected population (i.e. descriptive statistics).
Also, for 'rare' events such as accidents, the estimated mean rate and standard 
deviation can vary so much from year to year that statistical testing becomes 
problematic.
OK, you could record accident rates annually for, say, 10 years (or look at 
historical data) to estimate a mean and median accident rate, as well as a 
standard deviation, and then do comparative statistical testing.
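To make the year-to-year variability concrete, here is a small sketch (standard-library Python only; the mean of 4 accidents per year is an assumed, purely illustrative figure) that simulates 10 years of Poisson-distributed accident counts and shows how noisy the estimated mean and standard deviation are:

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
TRUE_RATE = 4.0  # assumed mean accidents per year, purely illustrative

# Ten years of observed annual accident counts.
counts = [poisson_sample(TRUE_RATE, rng) for _ in range(10)]

mean = statistics.mean(counts)
sd = statistics.stdev(counts)
print("counts:", counts)
print(f"estimated mean = {mean:.2f}, sd = {sd:.2f}")
# For a Poisson count the sd equals sqrt(mean), so with a mean of 4
# the coefficient of variation is about 50% -- individual years bounce around a lot.
```

With only ten observations the estimated mean can easily land well away from the true rate, which is why comparative testing on short accident records is problematic.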

Even working out which metric to use can be controversial, i.e. accident rate 
per km flown? per hour flown? per number of flights?
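A toy comparison (the figures for the two unnamed operations are invented purely for illustration) shows how the choice of denominator can even reverse a ranking:

```python
# Hypothetical exposure data for two operations -- invented numbers,
# chosen only to show that the metric changes the comparison.
ops = {
    "A": {"accidents": 3, "km": 1_500_000, "hours": 10_000, "flights": 6_000},
    "B": {"accidents": 2, "km": 800_000,  "hours": 8_000,  "flights": 5_000},
}

def rate(op, denom, per):
    """Accidents per `per` units of the chosen exposure measure."""
    return op["accidents"] / op[denom] * per

for name, op in ops.items():
    print(name,
          f"per 100,000 km: {rate(op, 'km', 100_000):.2f}",
          f"per 10,000 h: {rate(op, 'hours', 10_000):.2f}",
          f"per 1,000 flights: {rate(op, 'flights', 1_000):.2f}")

# Per km flown, B looks worse (0.25 vs 0.20 per 100,000 km);
# per hour flown, A looks worse (3.0 vs 2.5 per 10,000 h).
```

So two reasonable metrics can put the two operations in opposite order, which is exactly the controversy.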

You are talking about working out the power of a test. Statistical power is, as 
you say, a whole field of applied maths that keeps statisticians employed! ;-)
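As a sketch of what such a power calculation involves, the following Monte Carlo estimates the chance of detecting a doubled accident rate over ten years of records, using the exact conditional binomial test for comparing two Poisson means (the rates of 2 and 4 accidents per year are assumed, illustrative values):

```python
import math
import random

def poisson_sample(lam, rng):
    """One Poisson(lam) draw via Knuth's method; fine for small means."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def two_poisson_p_value(t1, t2):
    """Conditional test: under H0 (equal rates, equal exposure),
    t1 given the total t1 + t2 is Binomial(n, 0.5). Two-sided exact p-value."""
    n = t1 + t2
    if n == 0:
        return 1.0
    pmf = [math.comb(n, i) * 0.5 ** n for i in range(n + 1)]
    return min(1.0, sum(p for p in pmf if p <= pmf[t1] + 1e-12))

def power(lam1, lam2, years=10, alpha=0.05, sims=500, seed=1):
    """Fraction of simulated 10-year records where the test rejects H0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        t1 = sum(poisson_sample(lam1, rng) for _ in range(years))
        t2 = sum(poisson_sample(lam2, rng) for _ in range(years))
        if two_poisson_p_value(t1, t2) < alpha:
            hits += 1
    return hits / sims

p_detect = power(2.0, 4.0)   # doubled rate
p_false = power(2.0, 2.0)    # equal rates: should reject ~alpha of the time
print("power, 2 vs 4 accidents/yr:", p_detect)
print("false-positive rate, 2 vs 2:", p_false)
```

Even with a doubled rate and ten years of data, the test misses the difference a fair fraction of the time, which is the sample-size problem in a nutshell.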

Accident rates may be approximated by Poisson distributions because of their 
'rare' nature.
A binomial distribution may also be used; however, the Poisson distribution is 
the limiting case of the binomial when the number of trials is large and the 
per-trial probability is small, so for rare events the two give essentially 
the same answers.
One could also make fewer assumptions about the underlying distribution of 
accident rates and use non-parametric or more robust methods.
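The limiting relationship is easy to check numerically: with many 'trials' (flights) and a small per-flight accident probability, the Binomial(n, p) and Poisson(np) pmfs are almost indistinguishable (n = 10,000 and p = 0.0004 below are assumed, illustrative values):

```python
import math

n, p = 10_000, 0.0004      # many flights, rare event per flight
lam = n * p                # = 4 expected accidents

def binom_pmf(k):
    """Exact Binomial(n, p) probability of k accidents."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k):
    """Poisson(lam) probability of k accidents."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

max_diff = max(abs(binom_pmf(k) - poisson_pmf(k)) for k in range(21))
print(f"largest pmf difference over k = 0..20: {max_diff:.2e}")
```

The largest pointwise difference is tiny compared with the probabilities themselves, which is why the Poisson model is the usual choice for rare-event rates.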



_______________________________________________
Aus-soaring mailing list
[email protected]
http://lists.base64.com.au/listinfo/aus-soaring
