Dear Don,

There are no simple answers to this question.  Firstly, always be totally
transparent about the set of questions/contrasts you're investigating when
you write up your results.  But when it comes to deciding over what set of
results to control multiple testing, I don't think you need to naively
correct for every question in a paper.  For example, if you look at sex
differences and then you look at age effects, I wouldn't correct, as there
is a literature on sex differences and a separate one on ageing.  But if
there is a natural set of questions that you are implicitly or explicitly
looking at together, then you should correct.  For example, if you did an
ICA dual regression to get (say) 8 spatial maps of the main RSNs, and then
tested for sex differences over those 8 maps and reported all of them, you
probably should do a correction for those 8 comparisons.

About different labs: if each lab is working independently, they're surely
going to make slightly different choices about the analysis, and it will
then be a confidence-building result if they all get the same/similar
results.  But if you're considering the thought experiment where 250 labs
each publish one paper on one variable among the 250+ behavioral/demographic
measures in the HCP data, I would say that requires a 'science-wide'
correction applied by the reader of the 250 papers.

You can use Bonferroni, changing a 0.05 threshold to 0.05/8 = 0.00625, but
alternatively you can use PALM, which offers a sharper (less conservative)
correction using Tippett's method to correct for the 8 tests.
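To make the Bonferroni arithmetic concrete, here is a minimal sketch in
plain Python (the p-values are made up for illustration; this is not PALM
output, just the standard Bonferroni rule applied to 8 tests):

```python
# Bonferroni correction for m = 8 tests at family-wise alpha = 0.05.
m = 8
alpha = 0.05
threshold = alpha / m  # 0.05 / 8 = 0.00625

# Hypothetical per-map p-values for the 8 RSN sex-difference tests.
pvals = [0.001, 0.020, 0.004, 0.300, 0.0005, 0.060, 0.008, 0.150]

# A test survives Bonferroni if its p-value is below alpha/m;
# equivalently, report the adjusted p-value min(1, m * p).
significant = [p < threshold for p in pvals]
adjusted = [min(1.0, m * p) for p in pvals]

for p, adj, sig in zip(pvals, adjusted, significant):
    print(f"p = {p:.4f}  adjusted = {adj:.4f}  significant: {sig}")
```

Note that a p-value of 0.008 would pass an uncorrected 0.05 threshold but
fails the corrected 0.00625 threshold, which is exactly the kind of result
the correction is meant to catch.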

Hope this helps.

-Tom


On Tue, Sep 13, 2016 at 2:00 PM, Krieger, Donald N. <krieg...@upmc.edu>
wrote:

> Dear List,
>
>
>
> When a lab analyzes their own data, they control for the degradation in
> confidence due to multiple comparisons.
>
> But how does that work when you have many labs analyzing the same data?
>
>
>
> At the one end, several labs could do exactly the same analysis and get
> the same results.
>
> At the other end, several labs could run entirely different tests, each
> controlling for the comparisons they do, and reporting their results with
> the confidence levels they compute under the assumption that those are the
> only tests.
>
> But since the total number of tests under these circumstances is the sum
> for all the labs, isn’t that the number of comparisons for which each lab
> must control?
>
>
>
> I hope I’ve expressed this clearly enough.
>
> I admit to being confused by the question.
>
> What do you think?
>
>
>
> Best - Don
>
>
>
> _______________________________________________
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>



-- 
__________________________________________________________
Thomas Nichols, PhD
Professor, Head of Neuroimaging Statistics
Department of Statistics & Warwick Manufacturing Group
University of Warwick, Coventry  CV4 7AL, United Kingdom

Web: http://warwick.ac.uk/tenichols
Email: t.e.nich...@warwick.ac.uk
Tel, Stats: +44 24761 51086, WMG: +44 24761 50752
Fx,  +44 24 7652 4532
