Hi

Thanks for the link, Rick.  I essentially did the sampling distribution for my 
example below using SPSS, drawing 25,000 samples.  That is where the ~20 
percent significant figure came from.  But what the site does not do is 
aggregate the p values across however many samples are used (25 in my 
hypothetical example).  Even though a low percentage of the individual tests 
might be significant, the aggregate p can be highly significant using Fisher's 
procedure or some other approach, as in my simulation and as in the single 
large-sample test.

Mike P correctly points out how a simulation differs from reality, but perhaps 
misses my point.  Imagine, for example, that we are interested in fMRI results 
for some rare condition versus the general population.  I don't know what fMRI 
research costs in the US or other countries, but it can be very expensive in 
Canada.  Might a researcher be able to manage 10 subjects, but not 90 or 250?  
Or if the condition is particularly rare, how long would it take to recruit 10, 
90, 250, or whatever number of participants?  I would like to see a venue for 
researchers who can manage only 10 participants, because of funding and/or time 
constraints, to publish their results in a way that would later allow those 
results to be aggregated with other similarly restricted studies.  There are, 
however, dangers (e.g., exaggerated reports in the media), as I noted.

Take care
Jim


James M. Clark
Professor & Chair of Psychology
[email protected]
Room 4L41A
204-786-9757
204-774-4134 Fax
Dept of Psychology, U of Winnipeg
515 Portage Ave, Winnipeg, MB
R3B 0R4  CANADA


>>> "[email protected]" <[email protected]> 11-Apr-13 5:45 PM >>>
This interactive calculator might be useful for determining the percentage of 
times a significant result would occur with repeated sampling of the same 
population vs. one huge single sample. It allows you to draw 5000 samples of 
size ten and compare it to one sample of size 50000.

http://onlinestatbook.com/stat_sim/repeated_measures/index.html 

Rick

Dr. Rick Froman, Chair
Division of Humanities and Social Sciences 
Professor of Psychology 
Box 3519
John Brown University 
2000 W. University Siloam Springs, AR  72761 
[email protected] 
(479) 524-7295
http://bit.ly/DrFroman 


-----Original Message-----
From: Jim Clark [mailto:[email protected]] 
Sent: Thursday, April 11, 2013 3:33 PM
To: Teaching in the Psychological Sciences (TIPS)
Subject: Re: [tips] Why Neuroscience Research Sucks

Hi

I wondered what the difference is between x replications of y observations each 
versus a single study of x*y observations.  Logically, it seems they should 
produce equivalent statistical results.  So I generated 25 samples of 10 
observations from a population with mu = 53 and sigma = 10 and tested each 
sample against the null that mu = 50.  About 20% of the ts were significant 
(i.e., low power?).  I used Fisher's method to combine the p values, and the 
result was p = .000122, highly significant.  There are other ways to combine p 
values that produce lower aggregate p values than Fisher's method, but I 
haven't tried to program them yet.

Then I simply treated the 250 observations as a single sample, which produced a 
p value of .000021, much lower than Fisher's (but of unknown relationship to 
other methods of aggregating ps).
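For anyone who wants to try this outside SPSS, the simulation can be sketched 
in Python with NumPy and SciPy.  The seed is arbitrary, so the proportion 
significant and the exact p values will differ somewhat from the figures 
reported above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

k, n = 25, 10  # 25 replications of n = 10, as in the example
samples = rng.normal(loc=53, scale=10, size=(k, n))

# One-sample t test of each replication against the null mu = 50
pvals = np.array([stats.ttest_1samp(s, 50).pvalue for s in samples])
prop_sig = np.mean(pvals < .05)

# Fisher's method: -2 * sum(ln p) ~ chi-square with 2k df under the null
chi2_stat = -2 * np.sum(np.log(pvals))
fisher_p = stats.chi2.sf(chi2_stat, df=2 * k)
# SciPy also implements this directly:
# stats.combine_pvalues(pvals, method='fisher')

# Pooling all 250 observations into one high-power test
pooled_p = stats.ttest_1samp(samples.ravel(), 50).pvalue

print(f"significant individually: {prop_sig:.0%}")
print(f"Fisher combined p: {fisher_p:.6f}")
print(f"pooled-sample p:   {pooled_p:.6f}")
```

Note that stats.combine_pvalues also offers other combining methods (e.g., 
Stouffer's), which could be compared against Fisher's in the same way.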

Qualitatively, then, a collection of low-power studies produces a significant 
result, as does a high-power test on exactly the same data.  And logically I'm 
not able to see a substantive difference between the two scenarios.  So perhaps 
multiple modest replications do provide an alternative to insisting on 
sufficient power (expensive?) in individual studies, although the dangers would 
be inappropriate or premature conclusions from the early studies, or failure to 
carry out and/or publish replications.

Take care
Jim


James M. Clark
Professor & Chair of Psychology
[email protected] 
Room 4L41A
204-786-9757
204-774-4134 Fax
Dept of Psychology, U of Winnipeg
515 Portage Ave, Winnipeg, MB
R3B 0R4  CANADA


>>> Michael Palij <[email protected]> 10-Apr-13 7:20 AM >>>
A paper published in Nature Reviews Neuroscience reports a meta-analysis of 
neuroscience research studies and, in keeping with long-standing problems with 
experimental designs used by people who perhaps don't know what they're doing 
(e.g., failing to appreciate the role of statistical power), reports finding 
(a) low levels of statistical power (around .20),
(b) exaggerated effect sizes, and (c) a lack of reproducibility.
But don't take my word for it; here is a link to the research article:
http://www.nature.com/nrn/journal/vaop/ncurrent/full/nrn3475.html 

NOTE: you'll need to use your institution's library to access the article.

There are popular media articles that focus on this paper, which may be useful 
in classes such as critical thinking and maybe even neuroscience; see:
http://www.guardian.co.uk/science/sifting-the-evidence/2013/apr/10/unreliable-neuroscience-power-matters
 

Jack Cohen pointed out some of these problems back in his 1962 review and 
updated them in subsequent publications; see:
http://classes.deonandan.com/hss4303/2010/cohen%201992%20sample%20size.pdf 

Of course, this is a problem of researcher education, the politics of funding 
research and publishing, and perhaps sociological factors, such as trying to 
appear more "scientific" -- focusing on the brain is, after all, more 
"scientific" than focusing on just behavior or the mind.

-Mike Palij
New York University
[email protected] 

---
You are currently subscribed to tips as: [email protected].
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13251.645f86b5cec4da0a56ffea7a891720c9&n=T&l=tips&o=24913
 
or send a blank email to 
leave-24913-13251.645f86b5cec4da0a56ffea7a89172...@fsulist.frostburg.edu 

