On 10/03/2016 5:05 PM, Texler, Michael wrote:
It is always difficult to compare accident rates for 'rare' events due to the 
wide 95% confidence intervals.
http://www.evanmiller.org/ab-testing/poisson-means.html
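To see how wide those intervals get, here is a minimal sketch (assuming Python with scipy; the exact Garwood interval, built from chi-squared quantiles, is one standard way to get a Poisson confidence interval for a small observed count):

```python
from scipy.stats import chi2

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson mean,
    given an observed count k."""
    alpha = 1 - conf
    # Lower bound is 0 when no events were observed.
    lower = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

# With only 2 observed accidents the 95% CI spans roughly 0.24 to 7.2,
# a ~30-fold range; with 100 events it narrows to roughly 81 to 122.
ci_small = poisson_ci(2)
ci_large = poisson_ci(100)
```

With a handful of events the interval covers more than an order of magnitude either side of the point estimate, which is exactly why comparisons of rare-event rates are so hard.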

As mentioned by others, there often needs to be an order of magnitude 
difference (i.e. a tenfold increase or decrease of an accident rate) to 
demonstrate statistical significance at the 95% level (this also means that 
there is a 5% chance of accepting a chance variation as being significant).

Not actually true; the size of difference between groups/cases/whatever that you'll need to get a statistically significant result (be it for p=.05 or p=.01 or whatever) depends on the sample size, and on the characteristics of the sample and of the population you're drawing it from. There is in fact a whole sub-topic of stats - power analysis - devoted to working out what size sample you need in a given situation to have a decent chance of seeing a real difference between groups, should one exist.
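The sample-size dependence is easy to demonstrate by simulation. A minimal sketch (assuming Python with numpy/scipy; the exact conditional test used here - the first count is Binomial(total, 0.5) under the null of equal rates and equal exposures - is one standard way to compare two Poisson rates):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(42)

def detects_difference(rate1, rate2, exposure, alpha=0.05):
    """Simulate two Poisson counts and test them with the exact
    conditional (binomial) test for equal rates."""
    k1 = rng.poisson(rate1 * exposure)
    k2 = rng.poisson(rate2 * exposure)
    total = k1 + k2
    if total == 0:
        return False
    return binomtest(k1, total, p=0.5).pvalue < alpha

def power(rate1, rate2, exposure, n_sims=500):
    """Fraction of simulated studies that reach significance."""
    hits = sum(detects_difference(rate1, rate2, exposure)
               for _ in range(n_sims))
    return hits / n_sims

# The same 2x rate difference (1 vs 2 events per unit of exposure):
# easily missed with little exposure, reliably found with plenty.
powers = {e: power(1.0, 2.0, e) for e in (2, 10, 50)}
```

The point: it is not the tenfold difference per se that matters, but how much data you have - the same twofold difference goes from nearly undetectable to near-certain detection as exposure grows.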


It is not lies and damned statistics, but a 5% chance that the result is in 
error (using commonly accepted practice).

See:
https://en.wikipedia.org/wiki/Poisson_distribution

Comments from statisticians welcome.

I'm an experimental psychologist - not an actual full-time statistician, but I do play one on TV (if you know what I mean).


Teal
_______________________________________________
Aus-soaring mailing list
[email protected]
http://lists.base64.com.au/listinfo/aus-soaring
