Re: [HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-05 Thread Stephen Smith
Hi - I think your main two choices are whether to run FIX on each 5-minute run 
separately, or to preprocess and concatenate each pair of scans from each 
session and run FIX on each of the four paired datasets.  You could try FIX both 
ways on a few subjects and decide which works better.
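
(For concreteness, a minimal sketch of the "preprocess and concatenate each pair" 
option, assuming Python with nibabel and numpy; the filenames are illustrative, 
and the real ICA+FIX setup expects its own directory layout, so treat this only 
as an outline of the temporal concatenation step.)

# Hedged sketch: temporally concatenate a preprocessed AP/PA pair before one FIX run.
# Filenames are illustrative, not the actual HCP pipeline layout.
import nibabel as nib
import numpy as np

ap = nib.load("rfMRI_REST1_AP_preproc.nii.gz")   # assumed preprocessed 4D run
pa = nib.load("rfMRI_REST1_PA_preproc.nii.gz")

ap_data = ap.get_fdata()
pa_data = pa.get_fdata()

# Demean each run over time before joining, so the join does not add a step change.
ap_data -= ap_data.mean(axis=3, keepdims=True)
pa_data -= pa_data.mean(axis=3, keepdims=True)

merged = np.concatenate([ap_data, pa_data], axis=3)
nib.save(nib.Nifti1Image(merged, ap.affine, ap.header), "rfMRI_REST1_APPA_demeaned.nii.gz")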

Cheers.



> On 5 Oct 2017, at 22:53, Sang-Young Kim  wrote:
> 
> Dear Experts:
> 
> We have acquired an rfMRI dataset with A-P and P-A phase encoding directions; 
> the data were acquired in eight 5-minute runs split across four imaging 
> sessions. We have processed the data using the HCP pipelines (e.g., 
> PreFreeSurfer, FreeSurfer, PostFreeSurfer, fMRIVolume, fMRISurface and 
> ICA+FIX), so we have results for each run of rfMRI data. 
> 
> I’m just curious about the recommended way to combine the individual runs 
> (e.g., rfMRI_REST1_AP, rfMRI_REST1_PA, …, rfMRI_REST4_AP, rfMRI_REST4_PA). 
> 
> Can we just temporally concatenate the runs before running ICA+FIX?
> Or should we run group ICA on the runs after each has been processed with ICA+FIX?
> What is the optimal way to combine analyses across runs? 
> 
> Any insights would be greatly appreciated. 
> 
> Thanks. 
> 
> Sang-Young Kim
> ***
> Postdoctoral Research Fellow
> Department of Radiology at University of Pittsburgh
> email: sykim...@gmail.com
> ***  
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk    http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] netmats prediction of fluid intelligence

2017-10-05 Thread Harms, Michael

In the context of the long resting state runs that we have available, I would 
argue that throwing in additional possible confounds is the appropriate thing 
to do.  Are you suggesting that sex, age, age^2, sex*age, sex*age^2, brain 
size, head size, and average motion shouldn’t all be included?
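
(For concreteness, a minimal sketch of regressing such a confound set out of a 
behavioural measure before prediction, assuming plain numpy arrays; variable names 
are illustrative and this is not the actual MegaTrawl code.)

# Hedged sketch: residualize a measure against the confounds listed above.
import numpy as np

def residualize(y, confounds):
    """Return y with the best-fitting linear combination of the confound columns removed."""
    X = np.column_stack([np.ones(len(y)), confounds])   # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# confounds: subjects x columns (sex, age, age^2, sex*age, sex*age^2, brain size,
# head size, average motion); gF: subjects -- both assumed to be already assembled.
# gF_resid = residualize(gF, confounds)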

Regardless, r = 0.21 (without confounds in the MegaTrawl) is a long way from 
the r = 0.5 prediction in Finn et al.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110    Email: mha...@wustl.edu

From:  on behalf of Thomas Yeo 

Date: Thursday, October 5, 2017 at 10:01 PM
To: "Glasser, Matthew" 
Cc: "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] netmats prediction of fluid intelligence

Certainly one difference is that HCP (i.e., Steve) tends to take the more 
conservative approach of regressing out a *lot* of potential confounds, which tends 
to result in lower prediction values. You can see that without confound 
regression, Steve's prediction is r = 0.21, versus 0.06 with it.

Regards,
Thomas

On Fri, Oct 6, 2017 at 1:44 AM, Glasser, Matthew wrote:
Perhaps there is an issue related to data clean up or alignment of brain areas 
across subjects.  The Finn study does not appear to have followed the 
recommended approach to either.

Peace,

Matt.

From:  on behalf of Benjamin Garzon
Date: Thursday, October 5, 2017 at 1:39 PM
To: "hcp-users@humanconnectome.org"
Subject: [HCP-Users] netmats prediction of fluid intelligence

Dear HCP experts,

I'm trying to reconcile the MegaTrawl prediction of fluid intelligence 
(PMAT24_A_CR)

https://db.humanconnectome.org/megatrawl/3T_HCP820_MSMAll_d200_ts2/megatrawl_1/sm203/index.html

(which shows r = 0.06 between predicted and measured scores)

with the Finn 2015 study

https://www.nature.com/neuro/journal/v18/n11/full/nn.4135.html

claiming an r = 0.5 correlation between predicted and measured scores. In the 
article they used a subset of the HCP data (126 subjects), but the measure of 
fluid intelligence is the same one. What can explain the considerable 
difference? As far as I can see, the article did not address confounding, but 
even in that case r = 0.21 for the MegaTrawl, which is still far from 0.5. And 
that is despite the model used in the article being much simpler than the 
MegaTrawl elastic-net regressor.

I've been trying to predict fluid intelligence in an independent sample with 
300 subjects and a netmats + confounds model does not perform better than a 
confounds-only model, more in agreement with the MegaTrawl results.
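
(For reference, a minimal sketch of that comparison, confounds-only versus 
netmats plus confounds, assuming scikit-learn and illustrative arrays; this is 
not the MegaTrawl or Finn et al. code.)

# Hedged sketch: cross-validated prediction of a fluid-intelligence score from
# confounds only versus netmats + confounds. Arrays are illustrative placeholders.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import KFold, cross_val_predict

def cv_correlation(X, y):
    """Correlation between measured scores and out-of-fold elastic-net predictions."""
    model = ElasticNetCV(cv=5, max_iter=10000)
    folds = KFold(n_splits=10, shuffle=True, random_state=0)
    predicted = cross_val_predict(model, X, y, cv=folds)
    return np.corrcoef(y, predicted)[0, 1]

# netmats: subjects x edges, confounds: subjects x covariates, gF: subjects (all assumed given)
# r_conf = cv_correlation(confounds, gF)
# r_full = cv_correlation(np.column_stack([confounds, netmats]), gF)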

In the Smith 2015 paper

http://www.nature.com/neuro/journal/v18/n11/full/nn.4125.html

the mode of covariation found with the netmats data correlates with fluid 
intelligence at r = 0.38.

Should I conclude from the Megatrawl analysis (as well as from my own) that the 
single measure of fluid intelligence is not reliable enough to be predicted 
based on connectome data, or am I missing something from the Finn paper?

I would be happy to read people's thoughts about this topic, in view of the 
disparate results in the literature.

Best regards,

Benjamín Garzón, PhD
Department of Neurobiology, Care Sciences and Society
Aging Research Center | 113 30 Stockholm | Gävlegatan 16
benjamin.gar...@ki.se | www.ki-su-arc.se
__
Karolinska Institutet – a medical university




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



Re: [HCP-Users] netmats prediction of fluid intelligence

2017-10-05 Thread Thomas Yeo
Certainly one difference is that HCP (i.e., Steve) tends to take the more
conservative approach of regressing out a *lot* of potential confounds, which
tends to result in lower prediction values. You can see that without
confound regression, Steve's prediction is r = 0.21, versus 0.06 with it.

Regards,
Thomas

On Fri, Oct 6, 2017 at 1:44 AM, Glasser, Matthew  wrote:

> Perhaps there is an issue related to data clean up or alignment of brain
> areas across subjects.  The Finn study does not appear to have followed the
> recommended approach to either.
>
> Peace,
>
> Matt.
>
> From:  on behalf of Benjamin
> Garzon 
> Date: Thursday, October 5, 2017 at 1:39 PM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] netmats prediction of fluid intelligence
>
> Dear HCP experts,
>
> I'm trying to reconcile the MegaTrawl prediction of fluid intelligence
> (PMAT24_A_CR)
>
> https://db.humanconnectome.org/megatrawl/3T_HCP820_
> MSMAll_d200_ts2/megatrawl_1/sm203/index.html
>
> (which shows r = 0.06 between predicted and measured scores)
>
> with the Finn 2015 study
>
> https://www.nature.com/neuro/journal/v18/n11/full/nn.4135.html
>
> claiming an r = 0.5 correlation between predicted and measured scores. In
> the article they used a subset of the HCP data (126 subjects), but the
> measure of fluid intelligence is the same one. What can explain the
> considerable difference? As far as I can see, the article did not address
> confounding, but even in that case r = 0.21 for the MegaTrawl, which is still
> far from 0.5. And that is despite the model used in the article being
> much simpler than the MegaTrawl elastic-net regressor.
>
> I've been trying to predict fluid intelligence in an independent sample
> with 300 subjects and a netmats + confounds model does not perform better
> than a confounds-only model, more in agreement with the MegaTrawl results.
>
> In the Smith 2015 paper
>
> http://www.nature.com/neuro/journal/v18/n11/full/nn.4125.html
>
> the mode of covariation found with the netmats data correlates with fluid
> intelligence at r = 0.38.
>
> Should I conclude from the Megatrawl analysis (as well as from my own)
> that the single measure of fluid intelligence is not reliable enough to be
> predicted based on connectome data, or am I missing something from the Finn
> paper?
>
> I would be happy to read people's thoughts about this topic, in view of
> the disparate results in the literature.
>
> Best regards,
>
> Benjamín Garzón, PhD
> Department of Neurobiology, Care Sciences and Society
> Aging Research Center | 113 30 Stockholm | Gävlegatan 16
> benjamin.gar...@ki.se | www.ki-su-arc.se
> __
> Karolinska Institutet – a medical university
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-05 Thread Sang-Young Kim
Dear Experts:

We have acquired an rfMRI dataset with A-P and P-A phase encoding directions; 
the data were acquired in eight 5-minute runs split across four imaging 
sessions. We have processed the data using the HCP pipelines (e.g., PreFreeSurfer, 
FreeSurfer, PostFreeSurfer, fMRIVolume, fMRISurface and ICA+FIX), so we have 
results for each run of rfMRI data. 

I’m just curious about the recommended way to combine the individual runs 
(e.g., rfMRI_REST1_AP, rfMRI_REST1_PA, …, rfMRI_REST4_AP, rfMRI_REST4_PA). 

Can we just temporally concatenate the runs before running ICA+FIX?
Or should we run group ICA on the runs after each has been processed with ICA+FIX?
What is the optimal way to combine analyses across runs? 

Any insights would be greatly appreciated. 

Thanks. 

Sang-Young Kim
***
Postdoctoral Research Fellow
Department of Radiology at University of Pittsburgh
email: sykim...@gmail.com
***  


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Mean and variance normalization

2017-10-05 Thread Glasser, Matthew
I’ll note that when I do variance normalization, I do it with just the 
unstructured noise map which is available in the HCP release (though it will 
need to be resampled to MSMAll space if you are using the MSMAll data).
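
(A minimal sketch of that operation on a dense timeseries, assuming Python with 
nibabel's CIFTI-2 reader; the filenames are illustrative and the MSMAll 
resampling step mentioned above is not shown.)

# Hedged sketch: divide a dense timeseries by an unstructured-noise (variance
# normalization) map. Filenames are illustrative.
import nibabel as nib
import numpy as np

dt = nib.load("rfMRI_REST1_LR_Atlas_clean.dtseries.nii")   # time x grayordinates
vn = nib.load("rfMRI_REST1_LR_Atlas_vn.dscalar.nii")       # 1 x grayordinates

data = dt.get_fdata()
vn_map = vn.get_fdata()[0]

normalized = data / np.maximum(vn_map, 1e-8)   # guard against zeros in the map

out = nib.Cifti2Image(normalized, header=dt.header, nifti_header=dt.nifti_header)
nib.save(out, "rfMRI_REST1_LR_Atlas_clean_vn.dtseries.nii")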

Peace,

Matt.

From:  on behalf of "Harms, Michael"
Date: Thursday, October 5, 2017 at 4:48 PM
To: hercp, HUMAN CONNECTOME
Subject: Re: [HCP-Users] Mean and variance normalization


I’m not sure what you are looking for, beyond what is in the FAQ.
For a given voxel/grayordinate/parcel, if
M = mean_over_time
S = std_over_time

and X(t) is your time-series

then demeaning is just: X(t) - M
and variance normalization is: (X(t) - M)/S

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110    Email: mha...@wustl.edu

From: hercp
Date: Thursday, October 5, 2017 at 3:39 PM
To: "Harms, Michael"
Subject: Re: [HCP-Users] Mean and variance normalization

Hi Michael,

Thanks for the input.  Are you aware of any utility that does the demeaning, 
variance normalization and concatenation?  Jenn Elam suggested FAQ #3 on the 
HCP-Users FAQ.  Would that also be your preference?  Are there references or 
mathematical definitions for these operations (other than the obvious ones), so 
that I can do a mathematical comparison between the two approaches?

Heracles Panagiotides, PhD


From: Harms, Michael
Sent: Thursday, October 05, 2017 12:13 PM
To: Glasser, Matthew ; hercp ; HUMAN CONNECTOME
Subject: Re: [HCP-Users] Mean and variance normalization


Re (2) (expanding on Matt’s response): Demeaning and variance normalizing a 
parcellated timeseries (or equivalently the time series for a single ROI), and 
then concatenating those, is not the same as demeaning and variance normalizing 
the dense time series, concatenating those, and then parcellating.  That’s not 
to say that the former isn’t sensible, but it *is* a different operation.  If 
it was me, before I adopted the former, I’d run some analyses both ways, and 
compare the results to see if there are any appreciable differences.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110    Email: mha...@wustl.edu

From:  on behalf of "Glasser, Matthew"
Date: Thursday, October 5, 2017 at 1:19 PM
To: hercp, HUMAN CONNECTOME
Subject: Re: [HCP-Users] Mean and variance normalization


  1.  Yes
  2.  I haven’t tried it on parcellated timeseries, but suspect that would be 
fine too.
Matt.

From:  on behalf of hercp
Date: Thursday, October 5, 2017 at 2:12 PM
To: HUMAN CONNECTOME
Subject: [HCP-Users] Mean and variance normalization

I am extracting time series from regions of interest.  Matt Glasser suggested 
that I mean/variance-correct and concatenate the RL and LR phase encoded time 
series.  I still have a couple of questions.

1.  Is the concatenation over time?  If so, doesn’t this introduce temporal 
discontinuity?  Am I understanding the concatenation correctly?

2.  Would the outcome be equivalent whether I do the preprocessing to the 
original rfMRI file  OR to the ROI extracted time series and why?

Thank you in advance for any suggestions you may offer.

Heracles Panagiotides, PhD

Heracles Panagiotides, PhD



From: Elam, Jennifer
Sent: Wednesday, October 04, 2017 2:50 PM
To: hercp
Subject: Re: [HCP-Users] Fw: rfMRI data files


Hi Heracles,

From what I've heard from others who have more experience, I would do the 
preprocessing and concatenating prior to extracting the ROI vector. The big 
caveat is that I don't actually do any processing, etc. myself as my expertise 
is in a different field. So, if you want to be sure you should ask your 
question on the list and get a real expert 

Re: [HCP-Users] Mean and variance normalization

2017-10-05 Thread Harms, Michael

I’m not sure what you are looking for, beyond what is in the FAQ.
For a given voxel/grayordinate/parcel, if
M = mean_over_time
S = std_over_time

and X(t) is your time-series

then demeaning is just: X(t) - M
and variance normalization is: (X(t) - M)/S
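
(For example, a minimal sketch of those two operations followed by temporal 
concatenation of the RL and LR runs, assuming the ROI time series are plain 
numpy vectors; variable names are illustrative.)

# Hedged sketch of the formulas above, applied to ROI time-series vectors.
import numpy as np

def demean(x):
    return x - x.mean()

def variance_normalize(x):
    return (x - x.mean()) / x.std()

# x_rl, x_lr = ...   (illustrative 1-D ROI time series from the RL and LR runs)
# Normalize each run separately, then concatenate over time:
# combined = np.concatenate([variance_normalize(x_rl), variance_normalize(x_lr)])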

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110    Email: mha...@wustl.edu

From: hercp 
Date: Thursday, October 5, 2017 at 3:39 PM
To: "Harms, Michael" 
Subject: Re: [HCP-Users] Mean and variance normalization

Hi Michael,

Thanks for the input.  Are you aware of any utility that does the demeaning, 
variance normalization and concatenation?  Jenn Elam suggested FAQ #3 on the 
HCP-Users FAQ.  Would that also be your preference?  Are there references or 
mathematical definitions for these operations (other than the obvious ones), so 
that I can do a mathematical comparison between the two approaches?

Heracles Panagiotides, PhD


From: Harms, Michael
Sent: Thursday, October 05, 2017 12:13 PM
To: Glasser, Matthew ; hercp ; HUMAN CONNECTOME
Subject: Re: [HCP-Users] Mean and variance normalization


Re (2) (expanding on Matt’s response): Demeaning and variance normalizing a 
parcellated timeseries (or equivalently the time series for a single ROI), and 
then concatenating those, is not the same as demeaning and variance normalizing 
the dense time series, concatenating those, and then parcellating.  That’s not 
to say that the former isn’t sensible, but it *is* a different operation.  If 
it was me, before I adopted the former, I’d run some analyses both ways, and 
compare the results to see if there are any appreciable differences.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110    Email: mha...@wustl.edu

From:  on behalf of "Glasser, Matthew" 

Date: Thursday, October 5, 2017 at 1:19 PM
To: hercp , HUMAN CONNECTOME 
Subject: Re: [HCP-Users] Mean and variance normalization


  1.  Yes
  2.  I haven’t tried it on parcellated timeseries, but suspect that would be 
fine too.
Matt.

From:  on behalf of hercp 
Date: Thursday, October 5, 2017 at 2:12 PM
To: HUMAN CONNECTOME 
Subject: [HCP-Users] Mean and variance normalization

I am extracting time series from regions of interest.  Matt Glasser suggested 
that I mean/variance-correct and concatenate the RL and LR phase encoded time 
series.  I still have a couple of questions.

1.  Is the concatenation over time?  If so, doesn’t this introduce temporal 
discontinuity?  Am I understanding the concatenation correctly?

2.  Would the outcome be equivalent whether I do the preprocessing to the 
original rfMRI file  OR to the ROI extracted time series and why?

Thank you in advance for any suggestions you may offer.

Heracles Panagiotides, PhD

Heracles Panagiotides, PhD



From: Elam, Jennifer
Sent: Wednesday, October 04, 2017 2:50 PM
To: hercp
Subject: Re: [HCP-Users] Fw: rfMRI data files


Hi Heracles,

From what I've heard from others who have more experience, I would do the 
preprocessing and concatenating prior to extracting the ROI vector. The big 
caveat is that I don't actually do any processing, etc. myself as my expertise 
is in a different field. So, if you want to be sure you should ask your 
question on the list and get a real expert answer there.



Cheers,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hercp 
Sent: Wednesday, October 4, 2017 4:12:39 PM
To: Elam, Jennifer
Subject: Re: [HCP-Users] Fw: rfMRI data files

Hi Jenn,

Thank you so much for the reply and for pointing me to FAQ #3.

I am wondering if we are talking about the same thing when referring to “time 
series”.   Perhaps, if I tell you what I have done so far, my question will be 
clearer:  I have defined an ROI and extracted a time series from the original 
data file; the time series is a single vector corresponding to the mean level 
of activity of that ROI.  So, do I apply the mean and variance normalization 
to this vector and then concatenate the 

Re: [HCP-Users] Mean and variance normalization

2017-10-05 Thread Harms, Michael

Re (2) (expanding on Matt’s response): Demeaning and variance normalizing a 
parcellated timeseries (or equivalently the time series for a single ROI), and 
then concatenating those, is not the same as demeaning and variance normalizing 
the dense time series, concatenating those, and then parcellating.  That’s not 
to say that the former isn’t sensible, but it *is* a different operation.  If 
it was me, before I adopted the former, I’d run some analyses both ways, and 
compare the results to see if there are any appreciable differences.
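
(As a toy illustration of why the order matters, a minimal numpy sketch with 
random data standing in for a dense timeseries and a single-parcel "parcellation".)

# Hedged toy example: demean/variance-normalize then parcellate (average vertices)
# versus parcellate then demean/variance-normalize give different time series.
import numpy as np

rng = np.random.default_rng(0)
scales = rng.uniform(1, 10, size=(1, 50))                    # per-vertex noise levels
dense = rng.normal(loc=100.0, scale=scales, size=(200, 50))  # time x vertices

def vn(x):
    return (x - x.mean(axis=0, keepdims=True)) / x.std(axis=0, keepdims=True)

a = vn(dense).mean(axis=1)    # normalize each vertex over time, then average into one parcel
b = vn(dense.mean(axis=1, keepdims=True))[:, 0]   # average first, then normalize the parcel

print(np.allclose(a, b))      # False in general: high-variance vertices dominate b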

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110    Email: mha...@wustl.edu

From:  on behalf of "Glasser, Matthew" 

Date: Thursday, October 5, 2017 at 1:19 PM
To: hercp , HUMAN CONNECTOME 
Subject: Re: [HCP-Users] Mean and variance normalization


  1.  Yes
  2.  I haven’t tried it on parcellated timeseries, but suspect that would be 
fine too.
Matt.

From:  on behalf of hercp
Date: Thursday, October 5, 2017 at 2:12 PM
To: HUMAN CONNECTOME
Subject: [HCP-Users] Mean and variance normalization

I am extracting time series from regions of interest.  Matt Glasser suggested 
that I mean/variance-correct and concatenate the RL and LR phase encoded time 
series.  I still have a couple of questions.

1.  Is the concatenation over time?  If so, doesn’t this introduce temporal 
discontinuity?  Am I understanding the concatenation correctly?

2.  Would the outcome be equivalent whether I do the preprocessing to the 
original rfMRI file  OR to the ROI extracted time series and why?

Thank you in advance for any suggestions you may offer.

Heracles Panagiotides, PhD

Heracles Panagiotides, PhD


From: Elam, Jennifer
Sent: Wednesday, October 04, 2017 2:50 PM
To: hercp
Subject: Re: [HCP-Users] Fw: rfMRI data files


Hi Heracles,

From what I've heard from others who have more experience, I would do the 
preprocessing and concatenating prior to extracting the ROI vector. The big 
caveat is that I don't actually do any processing, etc. myself as my expertise 
is in a different field. So, if you want to be sure you should ask your 
question on the list and get a real expert answer there.



Cheers,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hercp
Sent: Wednesday, October 4, 2017 4:12:39 PM
To: Elam, Jennifer
Subject: Re: [HCP-Users] Fw: rfMRI data files

Hi Jenn,

Thank you so much for the reply and for pointing me to FAQ #3.

I am wondering if we are talking about the same thing when referring to “time 
series”.   Perhaps, if I tell you what I have done so far, my question will be 
clearer:  I have defined an ROI and extracted a time series from the original 
data file; the time series is a single vector corresponding to the mean level 
of activity of that ROI.  So, do I apply the mean and variance normalization 
to this vector and then concatenate the vectors, or do I do all this 
(preprocessing and concatenating) prior to extracting the ROI vector time 
series?   (As a side note, I can do all this vector preprocessing in Matlab.)  
Would these two approaches be equivalent?

Thank you very much for being so helpful,
Heracles Panagiotides, PhD


From: Elam, Jennifer
Sent: Wednesday, October 04, 2017 1:30 PM
To: hercp ; HUMAN CONNECTOME
Subject: Re: [HCP-Users] Fw: rfMRI data files


Hi Heracles,

FAQ #3 on the HCP-Users FAQ might help you do what Matt is suggesting.



Best,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hcp-users-boun...@humanconnectome.org on behalf of hercp
Sent: Wednesday, October 4, 2017 1:32:38 PM
To: HUMAN CONNECTOME
Subject: [HCP-Users] Fw: rfMRI data files

Does 

Re: [HCP-Users] Mean and variance normalization

2017-10-05 Thread Glasser, Matthew
  1.  Yes
  2.  I haven’t tried it on parcellated timeseries, but suspect that would be 
fine too.

Matt.

From:  on behalf of hercp
Date: Thursday, October 5, 2017 at 2:12 PM
To: HUMAN CONNECTOME
Subject: [HCP-Users] Mean and variance normalization

I am extracting time series from regions of interest.  Matt Glasser suggested 
that I mean/variance-correct and concatenate the RL and LR phase encoded time 
series.  I still have a couple of questions.

1.  Is the concatenation over time?  If so, doesn’t this introduce temporal 
discontinuity?  Am I understanding the concatenation correctly?

2.  Would the outcome be equivalent whether I do the preprocessing to the 
original rfMRI file  OR to the ROI extracted time series and why?

Thank you in advance for any suggestions you may offer.

Heracles Panagiotides, PhD


Heracles Panagiotides, PhD



From: Elam, Jennifer
Sent: Wednesday, October 04, 2017 2:50 PM
To: hercp
Subject: Re: [HCP-Users] Fw: rfMRI data files


Hi Heracles,

From what I've heard from others who have more experience, I would do the 
preprocessing and concatenating prior to extracting the ROI vector. The big 
caveat is that I don't actually do any processing, etc. myself as my expertise 
is in a different field. So, if you want to be sure you should ask your 
question on the list and get a real expert answer there.



Cheers,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hercp
Sent: Wednesday, October 4, 2017 4:12:39 PM
To: Elam, Jennifer
Subject: Re: [HCP-Users] Fw: rfMRI data files

Hi Jenn,

Thank you so much for the reply and for pointing me to FAQ #3.

I am wondering if we are talking about the same thing when referring to “time 
series”.   Perhaps, if I tell you what I have done so far, my question will be 
clearer:  I have defined an ROI and extracted a time series from the original 
data file; the time series is a single vector corresponding to the mean level 
of activity of that ROI.  So, do I apply the mean and variance normalization 
to this vector and then concatenate the vectors, or do I do all this 
(preprocessing and concatenating) prior to extracting the ROI vector time 
series?   (As a side note, I can do all this vector preprocessing in Matlab.)  
Would these two approaches be equivalent?

Thank you very much for being so helpful,
Heracles Panagiotides, PhD



From: Elam, Jennifer
Sent: Wednesday, October 04, 2017 1:30 PM
To: hercp ; HUMAN CONNECTOME
Subject: Re: [HCP-Users] Fw: rfMRI data files


Hi Heracles,

FAQ #3 on the HCP-Users FAQ might help you do what Matt is suggesting.



Best,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hcp-users-boun...@humanconnectome.org on behalf of hercp
Sent: Wednesday, October 4, 2017 1:32:38 PM
To: HUMAN CONNECTOME
Subject: [HCP-Users] Fw: rfMRI data files

Does anyone know how the concatenation (see discussion below) of the ROI 
extracted time series needs to happen?  Do I simply concatenate the time series 
as a temporal sequence, rfMRI_REST2_LR followed by rfMRI_REST2_RL ?

Thanks again for the kind help.
Heracles Panagiotides, PhD



From: hercp
Sent: Tuesday, October 03, 2017 5:15 AM
To: Glasser, Matthew
Subject: Re: [HCP-Users] rfMRI data files

Thanks for the kind reply, Matt.  Let me make sure that I understand the 
process.  I should extract the time series for my regions of interest from each 
phase-encoded file, then demean and variance normalize the time series before 
concatenating them?  Am I way off?  Sorry about the naïve questions, but I am 
a couple of decades behind in MR work.  [Smile]

Thanks,
Heracles Panagiotides, PhD



From: Glasser, Matthew
Sent: Monday, October 02, 2017 5:11 PM
To: hercp ; HUMAN CONNECTOME
Subject: Re: [HCP-Users] rfMRI data files

Yes you ideally would analyze all of the resting state fMRI runs per subject.  
They have different phase encoding directions, so you should always analyze an 
equal amount of each.  Be sure to demean and perhaps variance normalize prior 
to concatenating.

Peace.

[HCP-Users] Mean and variance normalization

2017-10-05 Thread hercp
I am extracting time series from regions of interest.  Matt Glasser suggested 
that I mean/variance-correct and concatenate the RL and LR phase encoded time 
series.  I still have a couple of questions.

1.  Is the concatenation over time?  If so, doesn’t this introduce temporal 
discontinuity?  Am I understanding the concatenation correctly?

2.  Would the outcome be equivalent whether I do the preprocessing to the 
original rfMRI file  OR to the ROI extracted time series and why? 

Thank you in advance for any suggestions you may offer.

Heracles Panagiotides, PhD



Heracles Panagiotides, PhD




From: Elam, Jennifer 
Sent: Wednesday, October 04, 2017 2:50 PM
To: hercp 
Subject: Re: [HCP-Users] Fw: rfMRI data files

Hi Heracles,

From what I've heard from others who have more experience, I would do the 
preprocessing and concatenating prior to extracting the ROI vector. The big 
caveat is that I don't actually do any processing, etc. myself as my expertise 
is in a different field. So, if you want to be sure you should ask your 
question on the list and get a real expert answer there.



Cheers,

Jenn




Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org






From: hercp 
Sent: Wednesday, October 4, 2017 4:12:39 PM
To: Elam, Jennifer
Subject: Re: [HCP-Users] Fw: rfMRI data files 

Hi Jenn,

Thank you so much for the reply and for pointing me to FAQ #3.

I am wondering if we are talking about the same thing when referring to “time 
series”.   Perhaps, if I tell you what I have done so far, my question will be 
clearer:  I have defined an ROI and extracted a time series from the original 
data file; the time series is a single vector corresponding to the mean level 
of activity of that ROI.  So, do I apply the mean and variance normalization 
to this vector and then concatenate the vectors, or do I do all this 
(preprocessing and concatenating) prior to extracting the ROI vector time 
series?   (As a side note, I can do all this vector preprocessing in Matlab.)  
Would these two approaches be equivalent?

Thank you very much for being so helpful,
Heracles Panagiotides, PhD




From: Elam, Jennifer 
Sent: Wednesday, October 04, 2017 1:30 PM
To: hercp ; HUMAN CONNECTOME 
Subject: Re: [HCP-Users] Fw: rfMRI data files

Hi Heracles,

FAQ #3 on the HCP-Users FAQ might help you do what Matt is suggesting.



Best,

Jenn




Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org






From: hcp-users-boun...@humanconnectome.org 
 on behalf of hercp 
Sent: Wednesday, October 4, 2017 1:32:38 PM
To: HUMAN CONNECTOME
Subject: [HCP-Users] Fw: rfMRI data files 

Does anyone know how the concatenation (see discussion below) of the ROI 
extracted time series needs to happen?  Do I simply concatenate the time series 
as a temporal sequence, rfMRI_REST2_LR followed by rfMRI_REST2_RL ?   

Thanks again for the kind help.
Heracles Panagiotides, PhD




From: hercp 
Sent: Tuesday, October 03, 2017 5:15 AM
To: Glasser, Matthew 
Subject: Re: [HCP-Users] rfMRI data files

Thanks for the kind reply, Matt.  Let me make sure that I understand the 
process.  I should extract the time series for my regions of interest from each 
phase-encoded file, then demean and variance normalize the time series before 
concatenating them?  Am I way off?  Sorry about the naïve questions, but I am 
a couple of decades behind in MR work.  

Thanks,
Heracles Panagiotides, PhD




From: Glasser, Matthew 
Sent: Monday, October 02, 2017 5:11 PM
To: hercp ; HUMAN CONNECTOME 
Subject: Re: [HCP-Users] rfMRI data files

Yes you ideally would analyze all of the resting state fMRI runs per subject.  
They have different phase encoding directions, so you should always analyze an 
equal amount of each.  Be sure to demean and perhaps variance normalize prior 
to concatenating.

Peace.

Matt.

From:  on behalf of hercp 
Date: Tuesday, October 3, 2017 at 9:09 AM
To: HUMAN CONNECTOME 
Subject: [HCP-Users] rfMRI data files


Pardon my ignorance, but could someone give me a brief explanation of the 
difference between the rfMRI_REST2_LR and rfMRI_REST2_RL rfMRI data.  Should I 
be using both of them?

Thanks,
Heracles Panagiotides, PhD



___
HCP-Users mailing list
HCP-Users@humanconnectome.org

Re: [HCP-Users] rfMRI data files

2017-10-05 Thread Glasser, Matthew
Basically you can think of it as an SNR bias.

Peace,

Matt.

From:  on behalf of "Elam, Jennifer"
Date: Wednesday, October 4, 2017 at 4:17 PM
To: Romuald Janik, "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] rfMRI data files


Hi Romuald,

The phase encoding direction will affect areas of signal dropout in the images. 
This is discussed in the Smith et al. 2013 paper on Resting State fMRI in 
HCP, so you might start there.


Best,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



From: hcp-users-boun...@humanconnectome.org on behalf of Romuald Janik
Sent: Wednesday, October 4, 2017 2:57:22 PM
To: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] rfMRI data files

Hi,

Just to follow up on your answer and understand better: does the encoding 
direction introduce some bias (i.e., some non-neural component) into the 
signal? If it is necessary to analyze an equal amount of each direction, does 
that mean that some effect cancels out? Is it known what that effect is?
Is there some place where one can read up on this? The (simple) descriptions of 
BOLD fMRI that I encountered never went into such detail.

Best wishes,
Romuald



--

Message: 2
Date: Tue, 3 Oct 2017 00:11:35 +
From: "Glasser, Matthew" >
Subject: Re: [HCP-Users] rfMRI data files
To: hercp >, HUMAN CONNECTOME
>
Message-ID: 
>
Content-Type: text/plain; charset="us-ascii"

Yes you ideally would analyze all of the resting state fMRI runs per subject.  
They have different phase encoding directions, so you should always analyze an 
equal amount of each.  Be sure to demean and perhaps variance normalize prior 
to concatenating.

Peace.

Matt.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Distance between surface ROIs in MMP

2017-10-05 Thread Glasser, Matthew
Indeed I think we would need to know what you needed the distance for to know 
how best to compute it.  For things like MR artifacts, a 3D distance might be 
most appropriate.  For something like smoothing, a geodesic distance would be 
appropriate.  For something neurobiological, the tractography distance might be 
most appropriate.

Peace,

Matt.

From:  on behalf of Timothy Coalson
Date: Tuesday, October 3, 2017 at 6:30 PM
To: "Gopalakrishnan, Karthik"
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] Distance between surface ROIs in MMP

Since ROIs are not points, distance between them becomes a trickier question.  
Since areas are connected through white matter rather than gray matter, that 
also implies that the easy ways to calculate distance may not be all that 
biologically relevant.  This would point to using tractography to find 
distances.  So, I don't think there is an easy answer, sorry.

If you want to compute distance along the gray matter anyway, a possibility is 
to find the center of gravity of each ROI, translate them back to surface 
vertices (the centers will not actually be on the surface anymore, so you may 
want to double check them), and then find geodesic distances between those 
points (you can use -surface-geodesic-distance, running it once per area - you 
can then get the values from the other vertices near the centers to build the 
all-to-all matrix a row at a time).  Note, however, that this will not give you 
a distance to areas in the other hemisphere.
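
(For the centre-of-gravity step, a minimal sketch assuming Python with nibabel, a 
GIFTI midthickness surface, and a per-vertex ROI mask; the filenames are 
illustrative, and the vertex index it prints is what you would then pass to 
-surface-geodesic-distance.)

# Hedged sketch: find the surface vertex nearest to an ROI's centre of gravity.
import nibabel as nib
import numpy as np

surf = nib.load("L.midthickness.32k_fs_LR.surf.gii")        # illustrative filenames
coords = surf.darrays[0].data                               # vertices x 3 coordinates
roi = nib.load("L.area_roi.func.gii").darrays[0].data > 0   # boolean per-vertex mask

centroid = coords[roi].mean(axis=0)    # centre of gravity (generally off the surface)
vertex = int(np.argmin(np.linalg.norm(coords - centroid, axis=1)))  # snap to nearest vertex

print(vertex)   # use this vertex index with wb_command -surface-geodesic-distance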

Tim


On Tue, Oct 3, 2017 at 5:06 PM, Gopalakrishnan, Karthik wrote:
Hi,

I’m working with the Glasser multi-modal parcellation and I’d like to know if 
there is some prevalent notion of distance between any two surface ROIs in the 
parcellation? If there is, could you please tell me how I could obtain it or 
point me to a source?

Thanks a lot!

Regards,
Karthik

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Surfaces, coordinates and beginner questions

2017-10-05 Thread Glasser, Matthew
Right we will recommend using the areal classifier to find these areas rather 
than the group parcellation once the areal classifier is available.

Peace,

Matt.

From:  on behalf of Timothy Coalson
Date: Wednesday, October 4, 2017 at 4:36 PM
To: Romuald Janik
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] Surfaces, coordinates and beginner questions

This is not something that QC can catch, it requires extensive processing to 
discover this.  The individual subject parcellations that are generated during 
the MMP 1.0 process do reflect these differences, so these will be able to be 
used instead of the group label map when using HCP data.

I believe these issues affect relatively few areas and are somewhat rare, for 
instance 55b (and therefore also some of its neighbors) is one of the most 
affected areas, however 89% of subjects had the typical pattern for 55b.  See 
"Individuals with atypical areal patterns" supplemental information in the MMP 
paper:

https://www.ncbi.nlm.nih.gov/pubmed/27437579

Tim


On Wed, Oct 4, 2017 at 1:38 PM, Romuald Janik wrote:
Thanks for the detailed explanations!

I have just one follow up question regarding this point:

On Tue, Oct 3, 2017 at 11:57 PM, Timothy Coalson wrote:


Some techniques like hyperalignment use this "correspondence" concept 
aggressively, and allow you to have discontinuities (this area is on the 
"wrong" side of this other area in this subject - we have evidence that some 
subjects do actually have areas shifted or split), however MSMAll doesn't allow 
this (the constraints it imposes to prevent this make the problem easier to 
solve, and possibly more robust to noise).  When we talk about "spherical 
distortion", we are talking about artifacts of this "correspondence-finding" 
process, there is still no anatomical distortion caused to individual data.



Such splitting or shifting of areas would, in principle, make comparisons with 
other subjects (like averaging statistics, etc.) problematic; using the 
MMP 1.0 parcellation for these subjects would also be wrong for these areas. Are 
these subjects indicated in some way (e.g., by QC_Issue codes A or B), or are 
these problems more subtle?
Once again many thanks,
Romuald




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users