Re: [HCP-Users] Phase Encoding left-to-right and right-to-left

2015-11-25 Thread Harms, Michael







To Steve's point and the issue of memory: a critical distinction is whether you intend to work with dense connectomes or parcellated connectomes. For parcellated connectomes, both Steve and I have found a small reproducibility advantage if you compute a parcellated "netmat" for each resting-state run, convert those using r-to-z, and then average across the 4 resting-state runs for a subject (if you want a single parcellated netmat per subject as output). In fact, that is how the netmats distributed as part of the "PTN" release from ConnectomeDB were themselves created.


In the context of dense connectomes, generating a dense connectome per run is a different sort of beast. You can do it (I've done it) using -cifti-correlation and then average with -cifti-average. But to my knowledge, no one has examined whether that approach also yields a small reproducibility advantage for dense connectomes, which is why I think that most subject-specific dense connectomes have probably been created via the 'concat' approach outlined on the wiki.
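
The per-run r-to-z averaging described above can be sketched numerically as follows (a minimal illustration with placeholder random timeseries and a made-up parcel count, not HCP data or the actual PTN code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_parcels, n_timepoints, n_runs = 10, 1200, 4

z_sum = np.zeros((n_parcels, n_parcels))
for run in range(n_runs):
    ts = rng.standard_normal((n_parcels, n_timepoints))  # placeholder parcel timeseries
    r = np.corrcoef(ts)                                  # per-run parcellated "netmat"
    np.fill_diagonal(r, 0.0)                             # avoid arctanh(1) = inf on the diagonal
    z_sum += np.arctanh(r)                               # Fisher r-to-z transform

z_mean = z_sum / n_runs    # subject-level netmat in z units
r_mean = np.tanh(z_mean)   # map back to correlation units if desired
print(r_mean.shape)
```

Averaging in z units rather than r units is the standard way to combine correlation estimates, since the z transform makes their sampling distribution approximately normal.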


cheers,
-MH




-- 
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. 
Tel: 314-747-6173
St. Louis, MO  63110 
Email: mha...@wustl.edu







From: Timothy Coalson 
Date: Tuesday, November 24, 2015 4:23 PM
To: Greg Burgess 
Cc: "Elam, Jennifer" , "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left





If you are using -cifti-correlation, there is a -mem-limit option for this purpose, so there isn't a required minimum amount of memory (even in MATLAB, where everything has to be in memory, the 90k x 4.8k timeseries input pales in comparison to the 90k x 90k output). If you are doing everything in MATLAB, then averaging two 90k x 90k dconns will require more memory than any reasonable concatenated correlation.
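
The size comparison above can be made concrete with a little arithmetic (a sketch assuming round numbers of 90,000 grayordinates, 4,800 concatenated time points, and 8-byte double-precision values, as MATLAB uses by default):

```python
n_grayordinates = 90_000
n_timepoints = 4_800
bytes_per_value = 8  # double precision

# Input timeseries vs. full dense connectome output
timeseries_gb = n_grayordinates * n_timepoints * bytes_per_value / 1e9
dconn_gb = n_grayordinates ** 2 * bytes_per_value / 1e9

print(f"90k x 4.8k timeseries: {timeseries_gb:.1f} GB")  # ~3.5 GB
print(f"90k x 90k dconn:       {dconn_gb:.1f} GB")       # ~64.8 GB
# Averaging two dconns in memory needs at least two such matrices resident at once.
```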


The -cifti-average command should use almost no memory regardless of file size, as long as you don't overwrite one of the inputs with the output.


Tim




On Tue, Nov 24, 2015 at 2:23 PM, Greg Burgess 
 wrote:

I suggested below that Joelle could average Fisher’s z-transformed correlation coefficients (derived from each run within-subject), or treat the multiple runs as within-subjects repeated measures.

The idea was that computing correlations between timeseries with 4800 time points will take four times as much RAM as using only 1200 time points. For folks with limited RAM, averaging the correlation estimates may be a more feasible option.

--Greg


Greg Burgess, Ph.D.
Staff Scientist, Human Connectome Project
Washington University School of Medicine
Department of Anatomy and Neurobiology
Phone: 314-362-7864
Email: gburg...@wustl.edu



> On Nov 24, 2015, at 1:08 PM, Stephen Smith  wrote:
>
> I think maybe we need to be explicit about exactly what we're talking about averaging?
> Cheers.
>
> 
> Stephen M. Smith,  Professor of Biomedical Engineering
> Head of Analysis,   Oxford University FMRIB Centre
>
> FMRIB, JR Hospital, Headington,
> Oxford. OX3 9 DU, UK
> +44 (0) 1865 222726  (fax 222717)
> st...@fmrib.ox.ac.uk
> http://www.fmrib.ox.ac.uk/~steve
> --
>
>> On 24 Nov 2015, at 19:03, Greg Burgess  wrote:
>>
>> It’s less RAM-intensive since you only need to load one timeseries at a time.
>>
>> --Greg
>>
>> 
>> Greg Burgess, Ph.D.
>> Staff Scientist, Human Connectome Project
>> Washington University School of Medicine
>> Department of Anatomy and Neurobiology
>> Phone: 314-362-7864
>> Email: gburg...@wustl.edu
>>
>>> On Nov 24, 2015, at 12:29 PM, Glasser, Matthew  wrote:
>>>
>>> I'm not sure what benefit you'd get from averaging the FCs across runs within a subject.  That just sounds more computationally intensive.
>>>
>>> Peace,
>>>
>>> Matt.
>>>
>>>
>>> From: Joelle Zimmermann 
>>> Sent: Tuesday, November 24, 2015 11:55 AM
>>> To: Glasser, Matthew
>>> Cc: Greg Burgess; Elam, Jennifer; 
hcp-users@humanconnectome.org
>>> Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left
>>>
>>> Hi Matt,
>>>
Glad you point that out, because I was previously looking at the Resting State fMRI 1 Preprocessed package, but the Resting State fMRI FIX-Denoised (Compact) package is readily available. So I guess all I'll need to do is demean and variance normalize, and/or average the two FCs.
>>>
>>> Thanks,
>>> Joelle
>>>
>>> On Tue, Nov 24, 2015 at 12:19 PM, Glasser, Matthew  

Re: [HCP-Users] Qdec questions

2015-11-25 Thread Harms, Michael





I don't know if this is it, but off the top of my head: how exactly did you set up your SUBJECTS_DIR?  I assume that you are trying to aggregate stats into a table across a number of subjects after having downloaded the "Structural Extended" packages?  The location of the FreeSurfer data in the HCP file structure (under each subject's T1w/ directory) isn't directly compatible with the expectation of FS tools that the subject-level FS data will all reside within a common SUBJECTS_DIR.  To get around that, you can make a single directory containing symlinks to each subject's FreeSurfer directory.


If that's not the issue, then yes, the FS list is probably your best location for further assistance.


cheers,
-MH




-- 
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. 
Tel: 314-747-6173
St. Louis, MO  63110 
Email: mha...@wustl.edu







From: , Matthew 
Date: Wednesday, November 25, 2015 3:59 PM
To: levi solomyak , "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Qdec questions





You might want to cross-post this to the FreeSurfer list unless there is someone on the HCP list familiar with qdec, assuming that you’ve downloaded the structural extended packages.


Peace,


Matt.




From:  on behalf of levi solomyak 
Date: Wednesday, November 25, 2015 at 12:45 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Qdec questions





Dear experts,


I've been attempting to generate a stats data table in qdec, but each time I get an "IndexError: list index out of range" error. This occurs when qdec tries to parse the .stats file.


I've made sure that the files all line up exactly with the qdec table and that SUBJECTS_DIR is properly set, so I'm not sure how to fix this problem. Any help would be greatly appreciated!


Thank you,
LS 




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users





 



The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended
 recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone
 or return mail.




Re: [HCP-Users] phase encoding and group ICA

2015-11-25 Thread Harms, Michael







As a brief follow-up to this: as you can see from Steve's pasted code, the order of the time series in the PTN is based solely on the file name in ConnectomeDB.  Importantly, nothing in the REST{1,2}_{LR,RL} naming tells you whether the LR run came first, or vice versa.  All you know for sure is that the REST1 runs came before REST2.  (Most typically the REST2 session was a day later, but a minority of subjects have multiple days between their REST1 and REST2 sessions.)  While there is a prototypical temporal ordering of the runs, which holds for the vast majority of subjects, on infrequent occasions that ordering might have been violated -- e.g., if a scan had to be reacquired.


So, if you are investigating something that explicitly and critically depends on knowing the exact order in which the LR/RL runs were acquired for each and every subject, then you will have to pull the series acquisition times from the database.  There is a previous post to the list with details on how to do that.  Off the top of my head, I can't remember whether that post included a mechanism to also get the number of days between the REST1 and REST2 sessions.  We've talked about making those sorts of details more readily available, but it hasn't been a priority.


cheers,
-MH




-- 
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. 
Tel: 314-747-6173
St. Louis, MO  63110 
Email: mha...@wustl.edu







From: Stephen Smith 
Date: Wednesday, November 25, 2015 1:18 AM
To: Mary Beth 
Cc: "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] phase encoding and group ICA





Hi - 


Wrt group-ICA, it makes no difference what order the data was fed into the MIGP group-PCA.


Wrt the 4 chunks combined into the node timeseries in the 500-subject PTN release: the order is always the following, so you will need to take into account the information you pasted below if you want to know how this interacts with session orderings.



ff{1}=sprintf('%s/%d/RESOURCES/rfMRI_REST1_LR_FIX/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);
ff{2}=sprintf('%s/%d/RESOURCES/rfMRI_REST1_RL_FIX/rfMRI_REST1_RL/rfMRI_REST1_RL_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);
ff{3}=sprintf('%s/%d/RESOURCES/rfMRI_REST2_LR_FIX/rfMRI_REST2_LR/rfMRI_REST2_LR_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);
ff{4}=sprintf('%s/%d/RESOURCES/rfMRI_REST2_RL_FIX/rfMRI_REST2_RL/rfMRI_REST2_RL_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);



Cheers, Steve.


ps - for the 900 PTN release we will be releasing the code used to generate it.









On 25 Nov 2015, at 01:17, Mary Beth  wrote:


The subject-specific parcel timeseries were provided in a single text file for each model order, with 4800 time points each. I am trying to figure out whether time points 2401-3600 from those text files always correspond to the LR session from Day 2, or whether they correspond to the RL session from Day 2 for the handful of subjects collected before 1 October 2012 and the LR session from Day 2 for everyone else. I just want to make sure that I label everything correctly.
I hope this clarifies my question. Thanks again for your help.
-mb


On Tue, Nov 24, 2015, 19:29 Glasser, Matthew  wrote:



I’m still not following why it matters.


Peace,


Matt.




From: Mary Beth 
Date: Tuesday, November 24, 2015 at 6:18 PM
To: Matt Glasser , "hcp-users@humanconnectome.org"
 
Subject: Re: [HCP-Users] phase encoding and group ICA








I just want to make sure I'm comparing apples to apples when I use the subject-specific timeseries. 


best,
mb



On Tue, Nov 24, 2015 at 7:11 PM Glasser, Matthew  wrote:



Why does the order of the individual subject sessions matter for group ICA or PTNs?


Peace,


Matt.




From:  on behalf of Mary Beth 
Date: Tuesday, November 24, 2015 at 5:59 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] phase encoding and group ICA












Hi all,

According to this page 
http://www.humanconnectome.org/documentation/Q1/data-in-this-release.html, prior to 1 October 2012, the first resting state session of each visit was acquired with RL phase encoding, and the second session was acquired with LR phase encoding (RL/LR).  After
 this date, the first visit continued to be acquired in the RL/LR order, but the second visit was acquired in the opposite order, with the LR acquisition followed by the RL acquisition (LR/RL).


Here's my question: prior to group ICA and the generation of subject-specific sets of node timeseries in the August, 2014 “HCP500-PTN” 

Re: [HCP-Users] Phase Encoding left-to-right and right-to-left

2015-11-25 Thread Harms, Michael







Ok, but I thought we were talking about correlations here?  Another reason for Joelle to be explicit about what she is wanting to do.




-- 
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. 
Tel: 314-747-6173
St. Louis, MO  63110 
Email: mha...@wustl.edu







From: Stephen Smith 
Date: Wednesday, November 25, 2015 11:34 AM
To: "Harms, Michael" 
Cc: Timothy Coalson , Greg Burgess , "Elam, Jennifer" ,
 "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left





Hi Michael - for raw covariances - summing covariances is equivalent to temp concat first.
Cheers
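
Steve's equivalence above can be checked numerically: for raw (non-normalized, second-moment) covariance matrices, summing the per-run matrices gives exactly the matrix computed from the temporally concatenated data. A small sketch with synthetic numbers (placeholder data, not HCP timeseries):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, t1, t2 = 5, 100, 120
run1 = rng.standard_normal((n_nodes, t1))
run2 = rng.standard_normal((n_nodes, t2))

# Raw covariance (no demeaning or normalization): X @ X.T summed over runs...
summed = run1 @ run1.T + run2 @ run2.T

# ...equals the raw covariance of the temporally concatenated data.
concatenated = np.hstack([run1, run2])
direct = concatenated @ concatenated.T

print(np.allclose(summed, direct))  # True
```

This exact equivalence holds only for raw covariances; correlations are normalized per run, which is why averaging r-to-z correlations and correlating the concatenated data give slightly different answers.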








Re: [HCP-Users] CIFTI and NIFTI -parcellating rfMRI

2015-11-25 Thread Timothy Coalson
Because the surface data in our Cifti files is mapped from the volume using
subject-specific surfaces, it avoids mixing CSF and white-matter data into
the cortical signal.  With a volume parcellation, to do something
similar, you would need to parcellate each individual somehow, or use an
overly generous parcellation and mask it with the per-subject cortical
segmentation.

Indeed, putting surface and volume data into one file is the main feature
of the Cifti format (though it also has additional mapping types, notably
being able to represent a parcellated file in a way that is easy to use for
visualization).  Cifti also allows the exclusion of vertices and voxels
that we aren't interested in (medial wall, white matter, CSF, skull, air,
etc), which means that some translation is needed in order to use spatial
relationships between elements.  If you want the gory details, suitable for
those implementing their own reader/writer for cifti in other programming
languages, see the NITRC page:

http://www.nitrc.org/projects/cifti/

The good news is, if you have a parcellation you want to use, and can get
it into a cifti label file (see -volume-label-to-surface-mapping
and -cifti-create-label), then you can easily parcellate a cifti file with
the "wb_command -cifti-parcellate" command.
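
Conceptually, what -cifti-parcellate does to a timeseries file is reduce the data within each label to one timeseries per parcel (by default, the mean). A minimal numpy sketch of that idea, with synthetic data and a made-up 3-parcel label vector (this illustrates the concept, not the actual wb_command implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_grayordinates, n_timepoints = 12, 50
dtseries = rng.standard_normal((n_grayordinates, n_timepoints))

# Hypothetical label vector: a parcel index (0..2) for each grayordinate
labels = np.repeat(np.arange(3), n_grayordinates // 3)

# Mean timeseries per parcel -- the "ptseries" analogue of the dtseries input
ptseries = np.stack([dtseries[labels == k].mean(axis=0)
                     for k in np.unique(labels)])
print(ptseries.shape)  # (3, 50)
```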

Tim


On Wed, Nov 25, 2015 at 2:53 PM, Harms, Michael  wrote:

>
> If you want connectivity matrices based on a parcellation that respects
> the cortical surface, then you'll need to use the CIFTI data, and you'll
> need a parcellation to go along with it that is also based on the surfaces
> (for the cortical grayordinates).  We would definitely suggest that you go
> this route, since much of the HCP effort, from acquisition through
> processing, is designed around surface-based analyses of the cortical data.
>
> Also, you probably want to use the "FIX" cleaned data, which has "*clean*"
> in the file name, rather than just the minimally pre-processed
> .dtseries.nii.
>
> I think the basic file types are well documented in the release
> documentation.
>
> cheers,
> -MH
>
> --
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave. Tel: 314-747-6173
> St. Louis, MO  63110 Email: mha...@wustl.edu
>
> From: Joelle Zimmermann 
> Date: Wednesday, November 25, 2015 2:40 PM
> To: Timothy Coalson 
> Cc: "hcp-users@humanconnectome.org" 
> Subject: Re: [HCP-Users] CIFTI and NIFTI -parcellating rfMRI
>
> Hi Tim,
>
> Thanks for your response. Within the Resting State fMRI 1 preprocessed in
> the 500 Subjects + MEG2, I see the rfMRI_REST1_LR.nii.gz, which I presume
> is the Nifti-1 data. And then I see an rfMRI_REST1_LR_Atlas.dtseries.nii  -
> Is that the dtseries.nii Cifti you refer to? Could you explain a bit more
> about what's special about the Cifti file? I understand that Cifti is able to
> handle surface vertices and subcortical voxels in one file (i.e.,
> grayordinates).
>
> My ultimate goal is to parcellate the brain into cortical regions of
> interest, in order to ultimately look at regional functional connectivity
> matrices. Do I need the dtseries.nii cifti for that? It seems like the
> advantage of that is having surface vertices (i.e. knowing the precise
> shape of the surface of the brain) - which I don't think I need for a
> standard FC matrix...
>
> Thanks,
> Joelle
>
> On Tue, Nov 24, 2015 at 6:16 PM, Timothy Coalson  wrote:
>
>> The .nii.gz files are nifti-1 volumes, while the .dtseries.nii files are
>> Cifti files.  If you need spatial information, converting Cifti to nifti-1
>> is not the way to go about it, instead you could use wb_command
>> -cifti-separate into metric (.func.gii, one per hemisphere) and volume
>> files, or use whatever -cifti-* commands are applicable to what you want to
>> do spatially.
>>
>> Tim
>>
>>
>> On Tue, Nov 24, 2015 at 11:14 AM, Joelle Zimmermann <
>> joelle.t.zimmerm...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I'm working with the 500subjects + MEG2 preprocessed Resting State fMRI
>>> 1 Preprocessed dataset.
>>>
>>> Is the data just standard voxel-wise Nifti (it indeed looks like
>>> Nifti-1), or is it in a Cifti format that I need to convert to Nifti? My
>>> goal is to parcellate this data into ROI's, and I have a script that does
>>> this based on voxel-wise data, but as I understand Cifti is in gray
>>> ordinates.
>>>
>>> Thanks,
>>> Joelle
>>>
>>>
>>
>>

Re: [HCP-Users] CIFTI and NIFTI -parcellating rfMRI

2015-11-25 Thread Joelle Zimmermann
Hi Tim,

Thanks for your response. Within the Resting State fMRI 1 preprocessed in
the 500 Subjects + MEG2, I see the rfMRI_REST1_LR.nii.gz, which I presume
is the Nifti-1 data. And then I see an rfMRI_REST1_LR_Atlas.dtseries.nii  -
Is that the dtseries.nii Cifti you refer to? Could you explain a bit more
about what's special about the Cifti file? I understand that Cifti is able to
handle surface vertices and subcortical voxels in one file (i.e.,
grayordinates).

My ultimate goal is to parcellate the brain into cortical regions of
interest, in order to ultimately look at regional functional connectivity
matrices. Do I need the dtseries.nii cifti for that? It seems like the
advantage of that is having surface vertices (i.e. knowing the precise
shape of the surface of the brain) - which I don't think I need for a
standard FC matrix...

Thanks,
Joelle



Re: [HCP-Users] Qdec questions

2015-11-25 Thread Glasser, Matthew
You might want to cross-post this to the FreeSurfer list unless there is 
someone on the HCP list familiar with qdec, assuming that you've downloaded the 
structural extended packages.

Peace,

Matt.



Re: [HCP-Users] Phase Encoding left-to-right and right-to-left

2015-11-25 Thread Stephen Smith
Hi Michael - for raw covariances - summing covariances is equivalent to temp 
concat first.
Cheers


