Re: [HCP-Users] Workbench suddenly closes itself

2014-04-07 Thread Stephen Smith
Hi - have you tried using the non-rh version? That *might* solve it...
Cheers


On 7 Apr 2014, at 15:44, Mahshid Najafi mhs...@gmail.com wrote:

 Hi,
 I have a problem with Workbench. Sometimes when I try to load a label map or 
 time-series data in Workbench, it suddenly gives the following error and 
 closes itself:
 
 line 16: 15130 Segmentation fault  
 $directory/../exe_rh_linux64/workbench $@
 
 Would you please let me know what possibly causes this problem?
 
 Thanks very much,
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] High-resolution resting state networks available as template?

2014-04-19 Thread Stephen Smith
Indeed - and note that it is quick and easy to feed the group-PCA eigenmaps 
available on ConnectomeDB into MELODIC to get group-ICA maps.
Cheers.



On 18 Apr 2014, at 22:45, Glasser, Matthew glass...@wusm.wustl.edu wrote:

 We haven't released any HCP templates yet (at least as far as I know), but
 there are several issues worth thinking about here:
 
 1) Other templates are blurry mainly because spatial smoothing was applied
 to them and cross-subject alignment was achieved with registration that
 does not align cortical areas on the surface.  That means that going to 7T
 or higher resolutions won't really help you if you are not also avoiding
 smoothing your data and taking care to align it properly across subjects.
 2) Internally, we have seen that it is possible to get considerably less blurred
 group-average RSN maps (or task fMRI maps, etc.) if you do your
 registration on the surface using features that are closely related to
 cortical areas (like myelin, resting state networks, or task fMRI maps),
 rather than cortical folding patterns.  While we do want to release these
 data publicly, most of the analyses are not yet published.  You can get a
 head start and a substantial portion of the benefits by using the CIFTI
 data and not smoothing.
 
 Peace,
 
 Matt.
 
 On 4/18/14, 3:10 PM, TMS-Studie t...@mailbox.tu-dresden.de wrote:
 
 Dear HCPers
 
 on this HCP webpage
 
 https://humanconnectome.org/about/project/pulse-sequences.html
 
 high-resolution resting-state networks are shown (figure 2). As I
 searched the web I only found 2 studies using 7T for rs-fMRI
 investigation which aren't related to the HCP. Could you tell me who
 the author of the (preliminary) study results is?
 
 And might it be possible to get the results as NIFTI templates? Other
 resting-state templates are fairly blurry, so it would be better to
 cross-check one's own results against such high-resolution results.
 
 Thanks,
 Matthias
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 
 
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Creating Group Average Resting State fMRI

2014-05-20 Thread Stephen Smith
Hi - One important point - you should never temporally concatenate without 
first demeaning the individual timeseries.
Cheers.
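For anyone doing this in Matlab, here is a minimal sketch of the demean-then-concatenate step (the run matrices and their names are placeholders, assumed already loaded as timepoints x nodes/voxels):

  % runs: one T_i x N matrix per 15-minute run (hypothetical variable names)
  runs = {run1, run2, run3, run4};
  catTS = [];
  for i = 1:numel(runs)
    r = runs{i};
    r = bsxfun(@minus, r, mean(r, 1));   % demean each column (node/voxel) within the run
    catTS = [catTS; r];                  % then concatenate temporally
  end

The crucial point is that the demeaning happens per run, before the concatenation.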



On 20 May 2014, at 10:14, Ausaf Bari aus...@gmail.com wrote:

 Ok that is something I tried recently actually - to merge all the runs across 
 all the subjects and then do a correlation with fisher-z transform. The 
 results didn't feel right in that they didn't look like typical resting 
 state images. That is why I wanted to double check I was doing it the right 
 way. 
 
 As an alternative could I merge the within subject runs together with -merge 
 and do a within-subject correlation and then average across subjects? Is 
 there any downside to this other than the huge amount of storage space that 
 would be required (32GB per subject)?
 
 -Ausaf
 
 
 On Mon, May 19, 2014 at 10:04 PM, Glasser, Matthew glass...@wusm.wustl.edu 
 wrote:
 The simplest way is to concatenate the timeseries with wb_command 
 -cifti-merge.  You could do this for all runs of all subjects, or all runs 
 for each subject (and then z-transform and average the connectomes across 
 subjects).  There will be a better way in the future.
 
 Peace,
 
 Matt.
 
 From: Timothy Coalson tsc...@mst.edu
 Date: Monday, May 19, 2014 at 10:48 PM
 To: Ausaf Bari aus...@gmail.com
 Cc: hcp-users@humanconnectome.org hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] Creating Group Average Resting State fMRI
 
 Averaging together timepoints does not make sense for resting-state data: since it 
 does not use stimuli, there is no temporal correspondence between runs or 
 across subjects.  You should do the correlation before any averaging.
 
 I'm not sure what method our pipelines use, so I'll let someone else respond 
 as to what the recommended method is (you probably don't need to correlate 
 each file by itself, which would use up lots of disk space).
 
 Tim
 
 
 
 On Mon, May 19, 2014 at 8:22 PM, Ausaf Bari aus...@gmail.com wrote:
 I am trying to create a resting state fMRI average over 10 subjects to view 
 in connectome workbench. For each HCP subject, I found the following 4 files:
 
 rfMRI_REST1_RL_Atlas_hp2000_clean.dtseries.nii
 rfMRI_REST1_LR_Atlas_hp2000_clean.dtseries.nii
 rfMRI_REST2_RL_Atlas_hp2000_clean.dtseries.nii
 rfMRI_REST2_LR_Atlas_hp2000_clean.dtseries.nii
 
 So for 10 subjects, I will have 40 total files. Am I correct to assume that I 
 can use the command line tool wb_command -cifti-average to average over all 
 40 files and then wb_command -cifti-correlation to create a dense 
 connectome?
 
 -Ausaf
 
 
 -- 
 Ausaf A. Bari MD PhD
 Resident Physician
 UCLA Medical Center
 Department of Neurosurgery
 
 Email: aus...@gmail.com
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 
  
 
 
 
 
 
 -- 
 Ausaf A. Bari MD PhD
 Resident Physician
 UCLA Medical Center
 Department of Neurosurgery
 
 Phone: 617-642-1929
 Email: aus...@gmail.com
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Questions about FIX pre-processed rfMRI-data

2014-09-02 Thread Stephen Smith
Hi

On 2 Sep 2014, at 10:28, David Hofmann davidhofma...@gmail.com wrote:

  after taking a look into the HCP related papers, I decided to download the 
  FIX/pre-processed rfMRI-data for 1 subject (100307_3T_rfMRI_REST_fix). I 
  want the ready-to-analyze-pre-processed data for the first session and 
  assume the volumetric data is provided via the fix dataset and the 
  following nifti-files (only first rfMRI-session):
 
  -rfMRI_REST1_LR_hp2000_clean.nii
  -rfMRI_REST1_RL_hp2000_clean.nii
 
 no - these are the grayordinate CIFTI files combining cortical vertices and 
 noncortical voxels.
 rfMRI_REST1_LR_hp2000_clean.nii.gz
 etc.
 
 I'm sorry, I made a mistake with the names of the dataset. I used the fix 
 extended dataset (100307_3T_rfMRI_REST1_fixextended). So, just to be sure, 
 the nifti-files are in:
 
 - 
 ...fixextended\MNINonLinear\Results\rfMRI_REST1_RL\rfMRI_REST1_RL_hp2000_clean.nii.gz
 - 
 ...fixextended\MNINonLinear\Results\rfMRI_REST1_LR\rfMRI_REST1_LR_hp2000_clean.nii.gz
 
 After unzipping the above files:
 
 -rfMRI_REST1_RL_hp2000_clean.nii
 -rfMRI_REST1_LR_hp2000_clean.nii
 
 I assume these must be the correct NIFTI files for the first session.

apologies for the confusion - yes, these are the volumetric files.  
The grayordinate files are differently named - e.g.  
rfMRI_REST1_LR_Atlas_hp2000_clean.dtseries.nii

Btw - there's no need to unzip these volumetric NIFTI files - at least for FSL 
usage.

  in short, I want to handle the data like a pre-processed nii-file ready to 
  extract the voxel/ROI time course data.
 
  since I'm still a little uncertain about how to proceed and if I have the 
  correct dataset the following questions arise:
 
  1. Is it necessary to combine/concatenate LR and RL - data? If so, is 
  there a way to combine them via the workbench?
 
 One way or another you will want to combine them, yes.  In the upcoming 
 release the 4 runs for a subject are already concatenated inside the 
 node-timeseries files.
 
 Until then, I understand I have to normalize (demean) and then combine the 
 LR/RL files, and this can be done, for example, with Matlab by extracting the 
 voxel time courses and concatenating them. Is this the correct way, or is 
 there an easier way using the workbench tool?

Doing that in Matlab is fine, yes.  I suspect that you can do it with the 
Workbench tool, but I'm not so familiar with that.  The Workbench command-line tool 
is more oriented towards CIFTI/grayordinate representations than volumetric ones.

  4. Is there a mask-file for the localization of voxels? I only found a 
  mask-file in the 100307_3T_rfMRI_REST1_preproc dataset in which there 
  seems to be no correction of motion and physiologic artefacts(?)
 
 Not sure what you mean by a mask file...
 
 For example, like the file brainmask_fs.2.nii.gz provided in the 
 preproc dataset, in order to localize the cortical and subcortical voxel 
 coordinates. I can't find such a file provided in the fix extended dataset.

The volumetric datasets are already in 2mm MNI152 standard space, so (e.g.) 
atlases provided with FSL may be helpful depending on exactly what you're 
wanting to do.
All subjects' volumetric datasets should be in the same space.
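As a rough illustration (not an official HCP recipe), once a FIX-cleaned volumetric run and a binary MNI-space ROI mask have been loaded into Matlab (for example with FSL's read_avw or MATLAB's niftiread; the loading step and variable names below are assumptions), a mean ROI timecourse can be extracted like this:

  % vol4d: X x Y x Z x T array of the cleaned volumetric run (2mm MNI152 space)
  % mask:  X x Y x Z binary ROI mask in the same space
  [nx, ny, nz, nt] = size(vol4d);
  vox   = reshape(vol4d, nx*ny*nz, nt);     % voxels x timepoints
  roiTS = mean(vox(mask(:) > 0, :), 1)';    % mean ROI timecourse, T x 1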

Cheers.

---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Dividing node timeseries

2014-09-03 Thread Stephen Smith
Hi

On 3 Sep 2014, at 16:29, David Hofmann davidhofma...@gmail.com wrote:

 Hi at all,
 
 I was wondering if it is possible to split a node time series txt-file 
 containing all sessions (4800 time points) into 4 parts (the first 1200 time points 
 are the first scan, and so on) to obtain the time series of the individual 
 sessions? 

Yes - it's as easy as that.

Cheers
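For example, in Matlab (a minimal sketch; the filename is a placeholder, and the node-timeseries text files are plain whitespace-separated numbers, 4800 rows by D columns):

  ts = load('subject_node_timeseries.txt');      % 4800 x D (4 runs of 1200 timepoints)
  runlen = 1200;
  for r = 1:4
    runs{r} = ts((r-1)*runlen+1 : r*runlen, :);  % 1200 x D timeseries for run r
  end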


 
 if not, is there any way to obtain/extract those time series for the scans 
 separately?
 
 greetings
 
 David
 
 -- 
 David Hofmann (M.Sc.)
 Institut für Medizinische Psychologie und Systemneurowissenschaften
 Von-Esmarch-Strasse 52
 48149 Münster
 Tel: 0251/ 8352794
 Mobil: 0152/ 09822352
 Email: davidhofm...@uni-muenster.de
 
 Institute of Medical Psychology and Systems Neuroscience
 University of Muenster
 Von-Esmarch-Str. 52
 D-48149 Muenster, Germany
 Phone: +49 (0) 251 - 83 52794
 Mobile: +49 (0) 152 - 09822352
 E-Mail: davidhofm...@uni-muenster.de
 
 http://campus.uni-muenster.de/medpsych.html
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] gradient unwarp in preFS script

2014-10-16 Thread Stephen Smith
Hi guys - if the brain is centred on isocentre the gradwarp on a Skyra should 
be about 1-2mm max within the brain.
Cheers, Steve.




On 16 Oct 2014, at 16:16, Harms, Michael mha...@wustl.edu wrote:

 
 In that case, I don't know what the degree of gradient nonlinearity is for a 
 stock Skyra.  But I'd try to get the gradient nonlinearity correction working 
 nonetheless.  You might have to do some more sleuthing to narrow down where 
 the error is arising before someone here can attempt to assist you.
 
 Namely, which image is this referring to?
 Image Exception : #22 :: ERROR: Could not open image 
 
 
 cheers,
 -MH
 
 -- 
 Michael Harms, Ph.D.
 ---
 Conte Center for the Neuroscience of Mental Disorders
 Washington University School of Medicine
 Department of Psychiatry, Box 8134
 660 South Euclid Ave.  Tel: 314-747-6173
 St. Louis, MO  63110  Email: mha...@wustl.edu
 
 From: Book, Gregory gregory.b...@hhchealth.org
 Date: Thursday, October 16, 2014 10:11 AM
 To: Harms, Michael mha...@wustl.edu, hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org
 Subject: RE: [HCP-Users] gradient unwarp in preFS script
 
 I’m processing our own raw data, collected on our stock Skyra. Our data was 
 collected using the HCP sequence recommendation.
 -G
  
 From: Harms, Michael [mailto:mha...@wustl.edu] 
 Sent: Thursday, October 16, 2014 11:08 AM
 To: Book, Gregory; hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] gradient unwarp in preFS script
  
  
 Hi Greg,
 Are you trying to re-process the HCP unprocessed data through the HCP 
 pipelines?  If so, the grad coefficients from a stock Skyra scanner will 
 not be the correct ones for the HCP Connectom scanner (which has a completely 
 different, customized gradient set, from a stock Skyra).
  
 In general, we recommend applying the gradient unwarping, as it can only help 
 bring everything into true alignment.  That said, the gradient nonlinearities 
 are a particular issue for the Connectom scanner, and would be less of an 
 issue for say a Trio.
  
 cheers,
 -MH
  
 -- 
 Michael Harms, Ph.D.
 ---
 Conte Center for the Neuroscience of Mental Disorders
 Washington University School of Medicine
 Department of Psychiatry, Box 8134
 660 South Euclid Ave. Tel: 314-747-6173
 St. Louis, MO  63110 Email: mha...@wustl.edu
  
 From: Book, Gregory gregory.b...@hhchealth.org
 Date: Thursday, October 16, 2014 10:00 AM
 To: hcp-users@humanconnectome.org hcp-users@humanconnectome.org
 Subject: [HCP-Users] gradient unwarp in preFS script
  
 I’m continuing to fine-tune our preFS script for our data, and I’m running 
 into trouble with the gradient unwarping step. I’ve obtained the .grad file 
 from our Skyra scanner and running the HCP with it. It runs the gradient 
 unwarp step, but seems to fail right after. If I set the 
 GradientDistortionCoeffs variable in the script to “NONE”, the entire script 
 runs without error.
  
 I’m also wondering if this gradient unwarping step is necessary?
  
  
  
 Here is the output from the console:
  
 gradunwarp-INFO: Parsing /opt/HCP/coeff_AS098.grad for harmonics coeffs
 gradunwarp-INFO: Evaluating spherical harmonics
 gradunwarp-INFO: on a 60^3 grid
 gradunwarp-INFO: with extents -300.0mm to 300.0mm
 gradunwarp-INFO: along x...
 gradunwarp-INFO: along y...
 gradunwarp-INFO: along z...
 gradunwarp-INFO: Evaluating the jacobian multiplier
 gradunwarp-INFO: Unwarping slice by slice
 gradunwarp-INFO: Writing output to trilinear.nii.gz
 Image Exception : #22 :: ERROR: Could not open image 
 /home/pipeline/onrc/data/pipeline/S5452FGE/4/HCPStructural/analysis/T1w/T1w1_GradientDistortionUnwarp/fullWarp_abs
 terminate called after throwing an instance of 'RBD_COMMON::BaseException'
 /opt/HCP/HCP/global/scripts/GradientDistortionUnwarp.sh: line 92: 11237 
 Aborted (core dumped) ${FSLDIR}/bin/convertwarp --abs 
 --ref=$WD/trilinear.nii.gz --warp1=$WD/fullWarp_abs.nii.gz --relout 
 --out=$OutputTransform
  
 And the output from the log file:
 START: GradientDistortionUnwarp
 . . . . . . . . . 10 . . . . . . . . . 20 . . . . . . . . . 30 . . . . . . . 
 . . 40 . . . . . . . . . 50 . . . . . . . . . 60 . . . . . . . . . 70 . . . . 
 . . . . . 80 . . . . . . . . . 90 . . . . . . . . . 100 . . . . . . . . . 110 
 . . . . . . . . . 120 . . . . . . . . . 130 . . . . . . . . . 140 . . . . . . 
 . . . 150 . . . . . . . . . 160 . . . . . . . . . 170 . . . . . . . . . 180 . 
 . . . . . . . . 190 . . . . . . . . . 200 . . . . . . . . . 210 . . . . . . . 
 . . 220 . . . . . . . . . 230 . . . . . . . . . 240 . . . . . . . . . 250 . . 
 . . . . . . . 260 . . . . . . . . . 270 . . . . . . . . . 280 . . . . . . . . 
 . 290 . . . . . . . . . 300 . . . . . . . . . 310 . . . . . . . . . 320
 set -- --path=/home/pipeline/onrc/data/pipeline/S5452FGE/4/HCPStructural  
  --subject=analysis   
 

Re: [HCP-Users] gradient unwarp in preFS script

2014-10-16 Thread Stephen Smith
Hi

On 16 Oct 2014, at 16:38, Book, Gregory gregory.b...@hhchealth.org wrote:
 Thanks for the info. I’m using the .grad file directly copied from our 
 scanner. Is this the correct file to use?

yes

  I also heard (from a site using a Trio), that the gradient nonlinearity 
 correction was only necessary on the original HCP Skyras, before Siemens 
 fixed the gradient locations on later scanners so as not to be a problem. But 
 I imagine that may just be rumor.

That doesn't sound very accurate AFAIK - but I don't know much about that - 
most likely the main factor here is just that the HCP-Skyra has a totally 
different gradient coil and configuration from standard Skyras.

Cheers.





  
 -G
  
 From: Stephen Smith [mailto:st...@fmrib.ox.ac.uk] 
 Sent: Thursday, October 16, 2014 11:33 AM
 To: Harms, Michael
 Cc: Book, Gregory; hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] gradient unwarp in preFS script
  
 Hi guys - if the brain is centred on isocentre the gradwarp on a Skyra should 
 be about 1-2mm max within the brain.
 Cheers, Steve.
  
  
  
  
 On 16 Oct 2014, at 16:16, Harms, Michael mha...@wustl.edu wrote:
 
 
  
 In that case, I don't know what the degree of gradient nonlinearity is for a 
 stock Skyra.  But I'd try to get the gradient nonlinearity correction working 
 nonetheless.  You might have to do some more sleuthing to narrow down where 
 the error is arising before someone here can attempt to assist you.
  
 Namely, which image is this referring to?
 Image Exception : #22 :: ERROR: Could not open image 
  
  
 cheers,
 -MH
  
 -- 
 Michael Harms, Ph.D.
 ---
 Conte Center for the Neuroscience of Mental Disorders
 Washington University School of Medicine
 Department of Psychiatry, Box 8134
 660 South Euclid Ave.  Tel: 314-747-6173
 St. Louis, MO  63110  Email: mha...@wustl.edu
  
 From: Book, Gregory gregory.b...@hhchealth.org
 Date: Thursday, October 16, 2014 10:11 AM
 To: Harms, Michael mha...@wustl.edu, hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org
 Subject: RE: [HCP-Users] gradient unwarp in preFS script
  
 I’m processing our own raw data, collected on our stock Skyra. Our data was 
 collected using the HCP sequence recommendation.
 -G
  
 From: Harms, Michael [mailto:mha...@wustl.edu] 
 Sent: Thursday, October 16, 2014 11:08 AM
 To: Book, Gregory; hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] gradient unwarp in preFS script
  
  
 Hi Greg,
 Are you trying to re-process the HCP unprocessed data through the HCP 
 pipelines?  If so, the grad coefficients from a stock Skyra scanner will 
 not be the correct ones for the HCP Connectom scanner (which has a completely 
 different, customized gradient set, from a stock Skyra).
  
 In general, we recommend applying the gradient unwarping, as it can only help 
 bring everything into true alignment.  That said, the gradient nonlinearities 
 are a particular issue for the Connectom scanner, and would be less of an 
 issue for say a Trio.
  
 cheers,
 -MH
  
 -- 
 Michael Harms, Ph.D.
 ---
 Conte Center for the Neuroscience of Mental Disorders
 Washington University School of Medicine
 Department of Psychiatry, Box 8134
 660 South Euclid Ave. Tel: 314-747-6173
 St. Louis, MO  63110 Email: mha...@wustl.edu
  
 From: Book, Gregory gregory.b...@hhchealth.org
 Date: Thursday, October 16, 2014 10:00 AM
 To: hcp-users@humanconnectome.org hcp-users@humanconnectome.org
 Subject: [HCP-Users] gradient unwarp in preFS script
  
 I’m continuing to fine-tune our preFS script for our data, and I’m running 
 into trouble with the gradient unwarping step. I’ve obtained the .grad file 
 from our Skyra scanner and running the HCP with it. It runs the gradient 
 unwarp step, but seems to fail right after. If I set the 
 GradientDistortionCoeffs variable in the script to “NONE”, the entire script 
 runs without error.
  
 I’m also wondering if this gradient unwarping step is necessary?
  
  
  
 Here is the output from the console:
  
 gradunwarp-INFO: Parsing /opt/HCP/coeff_AS098.grad for harmonics coeffs
 gradunwarp-INFO: Evaluating spherical harmonics
 gradunwarp-INFO: on a 60^3 grid
 gradunwarp-INFO: with extents -300.0mm to 300.0mm
 gradunwarp-INFO: along x...
 gradunwarp-INFO: along y...
 gradunwarp-INFO: along z...
 gradunwarp-INFO: Evaluating the jacobian multiplier
 gradunwarp-INFO: Unwarping slice by slice
 gradunwarp-INFO: Writing output to trilinear.nii.gz
 Image Exception : #22 :: ERROR: Could not open image 
 /home/pipeline/onrc/data/pipeline/S5452FGE/4/HCPStructural/analysis/T1w/T1w1_GradientDistortionUnwarp/fullWarp_abs
 terminate called after throwing an instance of 'RBD_COMMON::BaseException'
 /opt/HCP/HCP/global/scripts/GradientDistortionUnwarp.sh: line 92: 11237 
 Aborted (core dumped) ${FSLDIR}/bin/convertwarp --abs 
 --ref=$WD/trilinear.nii.gz --warp1=$WD/fullWarp_abs.nii.gz --relout

Re: [HCP-Users] Coordinates for node timeseries

2014-11-02 Thread Stephen Smith
Hi - the nodes correspond to the parcels in the associated ICA decompositions 
that are also part of this distribution.
Cheers.



On 2 Nov 2014, at 10:39, David Hofmann davidhofma...@gmail.com wrote:

 Hi,
 
 I downloaded the subject-specific node timeseries (txt files), but could not 
 find any information about which coordinates/ROIs the different 
 parcellations (25, 50, 100, etc.) correspond to. For example, how can 
 I find out which timeseries from a subject involve the DMN?
 
 Thanks in advance
 
 David
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Incorporating physiological monitoring data

2015-02-16 Thread Stephen Smith
urgh - yes you're right.



 On 16 Feb 2015, at 16:47, Glasser, Matthew glass...@wusm.wustl.edu wrote:
 
 Also to Steve: if these are used after ICA+FIX, don’t they need to have the 
 24 motion parameters and noise ICA component timeseries confounds regressed 
 out as well, before being used on the cleaned data?
 
 Peace,
 
 Matt.
 
 From: Harms, Michael mha...@wustl.edu mailto:mha...@wustl.edu
 Date: Monday, February 16, 2015 at 10:42 AM
 To: Matt Glasser glass...@wusm.wustl.edu mailto:glass...@wusm.wustl.edu, 
 Stephen Smith st...@fmrib.ox.ac.uk mailto:st...@fmrib.ox.ac.uk, Miriam 
 Klein-Flügge miriam.klein-flu...@psy.ox.ac.uk 
 mailto:miriam.klein-flu...@psy.ox.ac.uk
 Cc: hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] Incorporating physiological monitoring data
 
 
 Not only that, but just because a given physiological trace exists, doesn't 
 necessarily mean that it is a *good* trace.  There is going to be 
 considerable variability in the quality of the physiological measurements, 
 which presents a challenge in using them in a large scale study.  I'm sure 
 that Greg will comment more when he has a chance.
 
 cheers,
 -MH
 
 -- 
 Michael Harms, Ph.D.
 ---
 Conte Center for the Neuroscience of Mental Disorders
 Washington University School of Medicine
 Department of Psychiatry, Box 8134
 660 South Euclid Ave.  Tel: 314-747-6173
 St. Louis, MO  63110  Email: mha...@wustl.edu mailto:mha...@wustl.edu
 
 From: Glasser, Matt Glasser glass...@wusm.wustl.edu 
 mailto:glass...@wusm.wustl.edu
 Date: Monday, February 16, 2015 10:10 AM
 To: Stephen Smith st...@fmrib.ox.ac.uk mailto:st...@fmrib.ox.ac.uk, 
 Miriam Klein-Flügge miriam.klein-flu...@psy.ox.ac.uk 
 mailto:miriam.klein-flu...@psy.ox.ac.uk
 Cc: hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] Incorporating physiological monitoring data
 
 Note that a major reason we didn’t use these physiological confound 
 regressors was they don’t exist for every subject, so be sure to select a 
 subset of subjects that have them.  We’d also be interested to know if you 
 found they were helpful.
 
 Peace,
 
 Matt.
 
 From: Stephen Smith st...@fmrib.ox.ac.uk mailto:st...@fmrib.ox.ac.uk
 Date: Monday, February 16, 2015 at 8:05 AM
 To: Miriam Klein-Flügge miriam.klein-flu...@psy.ox.ac.uk 
 mailto:miriam.klein-flu...@psy.ox.ac.uk
 Cc: hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] Incorporating physiological monitoring data
 
 Hi - I think you would probably be best off taking the FIX-cleaned version of 
 the data, and then apply additional confound regressors if they will help.  
 Don't forget to apply the same highpass filter (to those regressors) that was 
 applied already in the data preproc, before you use them.
 
 Cheers.
 
 
 
 On 16 Feb 2015, at 13:35, Miriam Klein-Flügge 
 miriam.klein-flu...@psy.ox.ac.uk mailto:miriam.klein-flu...@psy.ox.ac.uk 
 wrote:
 
 Dear all,
 Is it correct that until now the physiological monitoring data is not made 
 use of in the preprocessing of the rfMRI data? I would like to correct for 
 cardiac and respiratory signals and wondered how to best do that. I can see 
 that FIX-denoising probably takes care of this type of noise but I am 
 particularly interested in looking at the brain stem and in my experience, 
 including physiological regressors in the preprocessing makes a big 
 difference there.
 Would you recommend using the spatially (minimally) pre-processed rfMRI 
 data, performing the high-pass filtering on it myself and then incorporating 
 the physiological regressors at that stage, or is there a better stage at 
 which to do it? Also, is there a standard procedure for regressing out the 
 physiological regressors that you can recommend?
 Many thanks in advance!
 Kind regards,
 Miriam
 -- 
 Miriam Klein-Flügge
 Sir Henry Wellcome Postdoctoral Fellow
 Department of Experimental Psychology
 University of Oxford
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org mailto:HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 ---
 Stephen M. Smith, Professor of Biomedical Engineering
 Associate Director,  Oxford University FMRIB Centre
 
 FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
 +44 (0) 1865 222726  (fax 222717)
 st...@fmrib.ox.ac.uk mailto:st...@fmrib.ox.ac.uk
 http://www.fmrib.ox.ac.uk/~steve http://www.fmrib.ox.ac.uk/~steve

Re: [HCP-Users] Incorporating physiological monitoring data

2015-02-16 Thread Stephen Smith
Hi - I think you would probably be best off taking the FIX-cleaned version of 
the data, and then apply additional confound regressors if they will help.  
Don't forget to apply the same highpass filter (to those regressors) that was 
applied already in the data preproc, before you use them.

Cheers.
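A minimal sketch of that last step in Matlab, assuming the physiological confound regressors have already been given the same highpass filtering as the data (variable names are placeholders):

  % data: T x V FIX-cleaned timeseries;  conf: T x K highpass-filtered confound regressors
  conf = bsxfun(@minus, conf, mean(conf, 1));   % demean the regressors
  data = bsxfun(@minus, data, mean(data, 1));
  beta = pinv(conf) * data;                     % fit the confounds to the data
  dataClean = data - conf * beta;               % remove the fitted confound variance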



 On 16 Feb 2015, at 13:35, Miriam Klein-Flügge 
 miriam.klein-flu...@psy.ox.ac.uk wrote:
 
 Dear all,
 
 Is it correct that until now the physiological monitoring data is not made 
 use of in the preprocessing of the rfMRI data? I would like to correct for 
 cardiac and respiratory signals and wondered how to best do that. I can see 
 that FIX-denoising probably takes care of this type of noise but I am 
 particularly interested in looking at the brain stem and in my experience, 
 including physiological regressors in the preprocessing makes a big 
 difference there.
 
 Would you recommend using the spatially (minimally) pre-processed rfMRI data, 
 performing the high-pass filtering on it myself and then incorporating the 
 physiological regressors at that stage, or is there a better stage at which 
 to do it? Also, is there a standard procedure for regressing out the 
 physiological regressors that you can recommend?
 
 Many thanks in advance!
 
 Kind regards,
 
 Miriam
 
 -- 
 Miriam Klein-Flügge
 Sir Henry Wellcome Postdoctoral Fellow
 Department of Experimental Psychology
 University of Oxford
 
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] netmats in HCP

2015-03-30 Thread Stephen Smith
yup - that also affects the scaling - see FSLNets doc and the PTN release doc.
Cheers.


 On 30 Mar 2015, at 18:17, Glasser, Matthew glass...@wusm.wustl.edu wrote:
 
 Hi Steve,
 
 What about the correction for temporal autocorrelation?
 
 Matt.
 
 From: Stephen Smith st...@fmrib.ox.ac.uk mailto:st...@fmrib.ox.ac.uk
 Date: Monday, March 30, 2015 at 12:15 PM
 To: Yizhou Ma maxxx...@umn.edu mailto:maxxx...@umn.edu
 Cc: hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] netmats in HCP
 
 Hi - one thing is that we estimate (z versions of) netmats separately for 
 each 15min run and then average the 4 netmats to give a single netmat per 
 subject.
 Cheers.
 
 
 On 30 Mar 2015, at 18:14, Yizhou Ma maxxx...@umn.edu 
 mailto:maxxx...@umn.edu wrote:
 
 Hi Timothy,
 
 Thank you for pointing that out. I typed that wrong but I used the correct 
 transformation in matlab. As I said my transformed z scores are actually 
 almost perfectly linearly correlated with HCP netmats, though the latter is 
 much larger. I want to understand why the latter is larger and why the two 
 are not exactly correlated.
 
 Thanks,
 Cherry
 
 On Mon, Mar 30, 2015 at 12:08 PM, Timothy Coalson tsc...@mst.edu 
 mailto:tsc...@mst.edu wrote:
 A quick possibility: if you have pasted in the formula you used, I see an 
 order of operations problem: .5*ln(1+r)-ln(1-r) means (.5*ln(1+r))-ln(1-r), 
 where the usual formula is .5*ln((1+r)/(1-r)), which after some log 
 identities becomes .5*(ln(1+r)-ln(1-r)).
 
 Tim
 
 
 On Mon, Mar 30, 2015 at 11:02 AM, Yizhou Ma maxxx...@umn.edu 
 mailto:maxxx...@umn.edu wrote:
 Dear HCP experts,
 
 I am trying to reproduce the individual netmats from HCP500-PTN so that I 
 am sure where the numbers come from. I used individual node timeseries in 
 /ts2/subjID and did correlation in matlab. I then used fisher's z 
 transformation: .5*ln(1+r)-ln(1-r). The resulting netmat is different from 
 what is provided in *_netmat1/. However, they almost have a linear 
 relationship. It seems to me that HCP is not using the same z 
 transformation I have used. The transformation seems more like 
 7*ln(1+r)-ln(1-r).
 
 I have downloaded FSLNets but could not identify which function was used 
 to generate individual netmats in the first place. The example script 
 seems to be about group-level netmats only.
 
 Could you please share with me how exactly the numbers in individual 
 netmats were generated?
 
 Thanks,
 Cherry
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org mailto:HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org mailto:HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 ---
 Stephen M. Smith, Professor of Biomedical Engineering
 Associate Director,  Oxford University FMRIB Centre
 
 FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
 +44 (0) 1865 222726  (fax 222717)
 st...@fmrib.ox.ac.uk mailto:st...@fmrib.ox.ac.uk
 http://www.fmrib.ox.ac.uk/~steve http://www.fmrib.ox.ac.uk/~steve
 ---
 
 Stop the cultural destruction of Tibet http://smithinks.net/
 
 
 
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org mailto:HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
  


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman

Re: [HCP-Users] netmats in HCP

2015-03-30 Thread Stephen Smith
Hi - one thing is that we estimate (z versions of) netmats separately for each 
15min run and then average the 4 netmats to give a single netmat per subject.
Cheers.
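In Matlab that per-run approach looks roughly like the sketch below. Note that the released netmats are additionally rescaled to account for temporal autocorrelation (see the FSLNets documentation and the PTN release notes), so the absolute scaling will still differ from a plain Fisher r-to-z:

  % ts: 4800 x D node timeseries for one subject (4 runs of 1200 timepoints each)
  D = size(ts, 2);  runlen = 1200;  nruns = 4;
  znet = zeros(D);
  for r = 1:nruns
    seg  = ts((r-1)*runlen+1 : r*runlen, :);
    rmat = corrcoef(seg);               % full-correlation netmat for this run
    rmat(eye(D) > 0) = 0;               % zero the diagonal before r-to-z (atanh(1) = Inf)
    znet = znet + atanh(rmat) / nruns;  % Fisher r-to-z, averaged over the 4 runs
  end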


 On 30 Mar 2015, at 18:14, Yizhou Ma maxxx...@umn.edu wrote:
 
 Hi Timothy,
 
 Thank you for pointing that out. I typed that wrong but I used the correct 
 transformation in matlab. As I said my transformed z scores are actually 
 almost perfectly linearly correlated with HCP netmats, though the latter is 
 much larger. I want to understand why the latter is larger and why the two 
 are not exactly correlated.
 
 Thanks,
 Cherry
 
 On Mon, Mar 30, 2015 at 12:08 PM, Timothy Coalson tsc...@mst.edu 
 mailto:tsc...@mst.edu wrote:
 A quick possibility: if you have pasted in the formula you used, I see an 
 order of operations problem: .5*ln(1+r)-ln(1-r) means (.5*ln(1+r))-ln(1-r), 
 where the usual formula is .5*ln((1+r)/(1-r)), which after some log 
 identities becomes .5*(ln(1+r)-ln(1-r)).
 
 Tim
 
 
 On Mon, Mar 30, 2015 at 11:02 AM, Yizhou Ma maxxx...@umn.edu 
 mailto:maxxx...@umn.edu wrote:
 Dear HCP experts,
 
 I am trying to reproduce the individual netmats from HCP500-PTN so that I am 
 sure where the numbers come from. I used individual node timeseries in 
 /ts2/subjID and did correlation in matlab. I then used fisher's z 
 transformation: .5*ln(1+r)-ln(1-r). The resulting netmat is different from 
 what is provided in *_netmat1/. However, they almost have a linear 
 relationship. It seems to me that HCP is not using the same z transformation 
 I have used. The transformation seems more like 7*ln(1+r)-ln(1-r).
 
 I have downloaded FSLNets but could not identify which function was used to 
 generate individual netmats in the first place. The example script seems to 
 be about group-level netmats only.
 
 Could you please share with me how exactly the numbers in individual netmats 
 were generated?
 
 Thanks,
 Cherry
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org mailto:HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Questions about the extensively processed rfMRI data

2015-02-24 Thread Stephen Smith
Hi

 On 25 Feb 2015, at 03:39, Aaron C aaroncr...@outlook.com wrote:
 Dear HCP users,
 I have some questions about the extensively processed rfMRI data of 
 “parcellation-timeseries-netmats”. Their filenames are 
 “groupICA_3T_Q1-Q6related468_MSMsulc.tar.gz”, 
 “NodeTimeseries_3T_Q1-Q6related468_MSMsulc_ICAd*_ts*.tar.gz”, and 
 “netmats_3T_Q1-Q6related468_MSMsulc_ICAd*_ts*.tar.gz”.
  
 1. For nodes, is there any specific threshold applied to the ICA spatial map 
 to obtain the parcel?

No - thresholding was only used to generate the thumbnail overlays.

 I see from an old message of this mailing list that the thumbnail images were 
 obtained by thresholding the ICA spatial map at Z>6. But how about the 
 threshold for obtaining that parcellation? Is that also Z>6?
 2. These ICA spatial maps were obtained from the ICA+FIX preprocessed data, 
 therefore, given the dimensionality of 100 as an example, are all these 100 
 components non-artifactual?

Not necessarily; there can still be some artefactual processes left in the 
data, and out of (eg) 100 components there can possibly be a few artefactual 
ones.   We have not attempted to classify those group-level components in the 
PTN release.

 3. For these netmat *.pconn.nii files, does the first column correspond to 
 the first ICA component shown in the thumbnail image “.png” in the folder 
 “melodic_IC_sum.sum”, the second column to the second thumbnail 
 image “0001.png” in the folder, and so on? 

Yes

 4. For these thumbnail PNG images, could I find any annotation regarding the 
 corresponding dominant brain region on the HCP website (e.g., something 
 similar to the annotations in Figure 4(A) of Dr. Smith et al.’s paper 
 http://www.ncbi.nlm.nih.gov/pubmed/24238796 
 http://www.ncbi.nlm.nih.gov/pubmed/24238796)?

Sorry - we haven't tried to annotate the PTN components.

 5. Have these data ever been global signal regressed?

No.  

Cheers, Steve.



  
 Many thanks!
  
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Group ICA with Melodic on 20 or more subjects

2015-01-29 Thread Stephen Smith
Thanks Matt

That's correct that the --migp option (see a recent post from me to the FSL 
list about this new option) inside melodic is still single-threaded.

However the journal paper (linked below by Matt) includes simple matlab code 
for doing MIGP and that can easily be parallelised as well as run 
multi-threaded.  The output can then be fed into melodic ICA.

Neither MIGP approach requires huge RAM - that's the point - 32GB should be 
enough.
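For reference, the core of the MIGP idea from that paper can be sketched in a few lines of Matlab (this is only an illustration of the algorithm, not the exact released code; dPCA and the data layout are assumptions):

  % data: cell array of T_i x V matrices, one per run/subject, already demeaned
  dPCA = 1200;                     % internal dimensionality to retain (example value)
  W = [];
  for i = 1:numel(data)
    W = [W; data{i}];                                % append the next dataset in time
    [U, ~] = eigs(W * W', min(dPCA, size(W,1)-1));   % top eigenvectors of the temporal covariance
    W = U' * W;                                      % keep only the strongest pseudo-timeseries
  end
  % W (approximately dPCA x V) can then be fed into melodic in place of full temporal concatenation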

Cheers.





 On 30 Jan 2015, at 03:32, Glasser, Matthew glass...@wusm.wustl.edu wrote:
 
 http://www.sciencedirect.com/science/article/pii/S105381191400634X 
 http://www.sciencedirect.com/science/article/pii/S105381191400634X
 
 We’re working on some improvements for it when used for functional 
 connectivity (i.e. all to all correlations), but it works fine as is for ICA. 
  
 
 Peace,
 
 Matt.
 
 From: Micah Chambers micahc...@gmail.com mailto:micahc...@gmail.com
 Date: Thursday, January 29, 2015 at 9:27 PM
 To: Matt Glasser glass...@wusm.wustl.edu mailto:glass...@wusm.wustl.edu
 Cc: hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org
 Subject: Re: [HCP-Users] Group ICA with Melodic on 20 or more subjects
 
 Thanks Matthew, I'll check out the latest version of FSL. Is there a paper 
 the discusses the MIGP method?
 
 On Thu, Jan 29, 2015 at 7:24 PM, Micah Chambers micahc...@gmail.com 
 mailto:micahc...@gmail.com wrote:
 Well the resting state data provided through the HCP around 1GB each so I'm 
 guessing the memory requirement is a good bit more than 8GB now, unless the 
 PCA is performed per subject. Has anyone done Group ICA on datasets this 
 size?
 
 -Micah
 
 On Thu, Jan 29, 2015 at 6:40 PM, Glasser, Matthew glass...@wusm.wustl.edu 
 mailto:glass...@wusm.wustl.edu wrote:
 If you update to the latest version of FSL the MIGP algorithm may be in 
 there and is designed to help with this.  The main limitation with melodic 
 has been that it was single threaded.  I don’t know if that has been 
 addressed yet.
 
 Peace,
 
 Matt.
 
 From: Micah Chambers micahc...@gmail.com mailto:micahc...@gmail.com
 Date: Thursday, January 29, 2015 at 8:07 PM
 To: hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org 
 hcp-users@humanconnectome.org mailto:hcp-users@humanconnectome.org
 Subject: [HCP-Users] Group ICA with Melodic on 20 or more subjects
 
 Has anyone on the run group-wise analysis on the HCP resting state data, 
 and if so what tools did you use? I am having memory issues when running 
 more than 10 subjects and I was wondering if anyone has a way of getting 
 around the large memory requirements when concatenating in time.
 
 Thanks!
 
 -Micah
 
 -- 
 Micah Chambers, M.S.
 Ahmanson-Lovelace Brain Mapping Center
 University of California Los Angeles
 635 Charles E. Young Drive SW, Suite 225 
 Los Angeles, CA 90095-7334
 310-206-2101 tel:310-206-2101 (Office)
 310-206-5518 tel:310-206-5518 (Fax)
 703-307-7692 tel:703-307-7692 (Cell)
 email: mica...@ucla.edu mailto:mica...@ucla.edu
 
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org mailto:HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 
  
 
 
 
 -- 
 Micah Chambers, M.S.
 Ahmanson-Lovelace Brain Mapping Center
 University of California Los Angeles
 635 Charles E. Young Drive SW, Suite 225 
 Los Angeles, CA 90095-7334
 310-206-2101 tel:310-206-2101 (Office)
 310-206-5518 tel:310-206-5518 (Fax)
 703-307-7692 tel:703-307-7692 (Cell)
 email: mica...@ucla.edu mailto:mica...@ucla.edu
 
 
 
 
 
 -- 
 Micah Chambers, M.S.
 Ahmanson-Lovelace Brain Mapping Center
 University of California Los Angeles
 635 Charles E. Young Drive SW, Suite 225 
 Los Angeles, CA 90095-7334
 310-206-2101 (Office)
 310-206-5518 (Fax)
 703-307-7692 (Cell)
 email: mica...@ucla.edu mailto:mica...@ucla.edu
 
 
 
  
 ___
 HCP-Users mailing list
 

Re: [HCP-Users] Parcellated Connectome

2015-04-11 Thread Stephen Smith
Hi - there are many factors that affect overall scaling - more below:


 On 10 Apr 2015, at 14:22, Nomi, Jason jxn...@miami.edu wrote:
 
 Dear Experts,
 
 I have noticed that the time-series for individual subjects from the dual 
 regression output in the parcellated connectome (100 comp ICA) has a much 
 larger range than I am used to seeing.
 
 The range for time series values are approximately -800 to 800 while dual 
 regression outputs that I have conducted myself are usually around -5 to 5.  
 
 I also notice that I can set the threshold much higher for the independent 
 components when isolating activation compared to dual regression analyses 
 that I have done myself. This cleans up the component representation 
 substantially.  
 
 My questions are:
 
 1) Is there a particular reason for this large increase in ranges?
 

In this case most likely because we set the max of the group maps used in 
dualreg stage 1 to be 1. This causes output timeseries to have larger scaling - 
but the overall scaling is arbitrary anyway.
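To see where the factor comes from: dual-regression stage 1 fits the group maps to each timepoint by least squares, so scaling the maps down scales the resulting timeseries up by the same factor. A schematic Matlab sketch (not the actual dual_regression script; variable names are placeholders):

  % data: T x V subject timeseries (demeaned);  G: V x d group-ICA spatial maps
  ts1 = (pinv(G) * data')';        % stage-1 node timeseries, T x d
  ts2 = (pinv(G / 10) * data')';   % same maps scaled down by 10: timeseries become 10x larger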

 
 2) Does the larger threshold for component activation have any influence on 
 the time series that is being produced?  Does the time series from the dual 
 regression output only represent the areas from the independent component 
 with the most intense activation?  I would like to ensure that my 
 presentation of component images using a much higher threshold is actually 
 representative of the time series that I am analyzing.  
 

Do you mean the group-ICA spatial maps or the maps output by dualreg stage 2?

The group-ICA maps have high peaks (compared with the background scaling) for a 
couple of reasons:  a) because there are so many subjects being combined that  
the ICA components are strong, and b) the group-PCA reduction has removed a lot 
of unstructured noise before the PCA+ICA step.  But despite the maps having 
strong CNR, they are still valid maps.

Cheers, Steve.



 
 Thanks!
 
 Jason
 
 
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] ICA problem

2015-05-26 Thread Stephen Smith
Hi - what's in the report_log.html file?
Cheers



 On 26 May 2015, at 05:22, mo mo mo7278231r...@yahoo.com wrote:
 
 Hello every one
 
 
 I ran FSL MELODIC single-session ICA on Connectome Project Q1 single-subject 
 preprocessed rest-fMRI data, but in the resulting MELODIC report page, the ICA 
 tab is empty. 
 
 I didn't change the default options in FSL MELODIC, except that in the pre-stats tab I 
 disabled all the preprocessing options. Do I have to do preprocessing on 
 already preprocessed data?
  
 Since I am new to fMRI, I would really appreciate your help. 
  
 
 Thanks.
 
 Mohammad
 
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Data processing with SPM

2015-05-26 Thread Stephen Smith
Hi - I think you probably need to take this query to the SPM email list I'm 
afraid - we haven't used SPM so far in the HCP, so don't have experience with 
how it performs on HCP data.
Cheers, Steve.


 On 26 May 2015, at 04:01, 李勋 lixun2...@126.com wrote:
 
 Hi there, I've encountered a problem when dealing with preprocessed r-fMRI 
 HCP data in SPM8. To be specific, data at many time points cannot be 
 recognized by SPM, and the MATLAB command window showed an error, "file too 
 small". Besides this, the last few frames of a run looked weird when viewed 
 in MRIcron.
 I have to say that none of these problems existed with the unpreprocessed r-fMRI 
 data.
  
 I am really confused right now, looking forward to your reply!
  
  
 Li Xun
  
  
  
 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users
 


---
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet http://smithinks.net/





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] group-ICA spatial maps

2015-12-03 Thread Stephen Smith
Hi

> On 3 Dec 2015, at 04:24, Bagrat Amirikian  wrote:
> Hi,
> I have downloaded and unpacked HCP500_Parcellation_Timeseries_Netmats data. 
> My question is about group ICA spatial maps, specifically, about 
> melodic_IC.dscalar.nii and melodic_IC_ftb.dlabel.nii files in the groupICA* 
> subdirectories. There is a very short description of these files in the 
> accompanying PDF file:
>  
> melodic_IC.dscalar.nii: ICA spatial maps (unthresholded 
> Zstats); one "timepoint" per map. Grayordinates
> melodic_IC_ftb.dlabel.nii: Summary "find the biggest" labels image 
> for all ICA spatial maps.  Grayordinates
>  
> I have loaded these files generated for 25-dimensional group-ICA to wb_view 
> with the corresponding Q1-Q6_R440_midthickness surface files for their inspection. 
>  
> 1.   I can appreciate Zstats maps provided for each IC by 
> melodic_IC.dscalar.nii. How exactly were these Zstats computed? Are these 
> regular Z-scores obtained by subtracting the vertex/voxel-wise mean and 
> dividing by the standard deviation, or something else?
> 
Not quite - these are "z-stats" as output by MELODIC ICA tool in FSL (see 
Beckmann IEEE TMI 2004).  The simple answer is that the ICA maps are fed into a 
mixture model to normalise the central/null part of the distribution across 
voxels.
> 2.   When I click on the map and Information Window pops up, it shows two 
> SCALAR values for the selected vertex. For example:
> 
>  VERTEX CortexLeft: 28165
> ANATOMICAL XYZ: -6.28827, 59.2204, -0.919904
> CIFTI SCALARS melodic_IC.dscalar.nii: 0.935668 -0.20834
> 
> What do these two numbers 0.935668 and -0.20834 actually mean? I thought there 
> should be just one number corresponding to the Zstat of the vertex.
> 
I don't know - when I try this I only get a single number.  Do you have 
multiple similarly-named images loaded maybe?
> 3.   It appears that melodic_IC_ftb.dlabel.nii provides a hard 
> non-overlapping parcellation and labels each parcel (which could consist of 
> several spatially separated regions) by a color. The number of parcels 
> corresponds to the ICA dimension, 25 in this case.  How is this parcellation 
> obtained? How is the set of spatially continuous/diffusive maps of individual 
> ICs give rise to this parcellation with sharp non-overlapping borders and 
> without any gaps between the neighboring parcels?

"ftb" - "find the biggest" - at each grayordinate the index number of the ICA 
component map having the highest value there is used.

Cheers, Steve.
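Schematically, in Matlab (assuming the unthresholded maps have been loaded as a grayordinates-by-components matrix; the variable names are placeholders):

  % maps: Ngrayordinates x d matrix of the (z-stat) ICA spatial maps
  [~, ftb] = max(maps, [], 2);   % ftb(g) = index of the component with the largest value at grayordinate g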




>  
> Thank you very much for your help.
>  
> Bagrat
>  
> Bagrat Amirikian, Ph.D.
> Department of Neuroscience
> University of Minnesota Medical School
>  
> Brain Sciences Center
> Minneapolis Veterans Affairs Health Care System
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] FIX-denoised

2015-12-10 Thread Stephen Smith
Hi all

I think Greg was making the valid point that:  if you want to regress the full 
space of A and B out of your data, it is not correct to regress out A and then 
afterwards regress out B, unless A and B are orthogonal.  To do it correctly 
you either need to combine [A B] into a single model to regress out of the 
data, or else regress A out of the data and also out of B, and then regress the 
new B out of the data.

Hope that helps,
Steve.
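In Matlab terms (a sketch with placeholder variable names; data is timepoints x voxels/grayordinates, and A and B are timepoints x regressors, all demeaned):

  % Correct, option 1: regress the combined model out in a single step
  M = [A B];
  clean1 = data - M * (pinv(M) * data);

  % Correct, option 2: regress A out of the data AND out of B, then regress the new B out
  dataA  = data - A * (pinv(A) * data);
  Borth  = B - A * (pinv(A) * B);
  clean2 = dataA - Borth * (pinv(Borth) * dataA);   % equals clean1 up to numerical precision

  % Incorrect (unless A and B are orthogonal): regress A out, then the original, un-orthogonalised B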


> On 10 Dec 2015, at 08:46, Maarten Mennes  wrote:
> 
> Hi Matt,
> 
> I'm not sure I follow your reasoning under 1). I would prefer a much richer 
> 'signal vs. noise' model as provided by ICA+FIX compared to aggressive 
> regression of (potentially GM contaminated) WM and CSF signal. This is also 
> what you indicate at the end of your answer: regressing out WM and CSF after 
> ICA+FIX doesn't seem to make much of a difference.
> 
> With 3) I believe Greg is pointing to combining all regressors into one 
> model, ie. doing non-aggressive motion, WM, and CSF regression, by adding 
> those regressors to the ICA+FIX generated model. There I again don't see what 
> the benefit would be of adding very crude WM/CSF regressors over the more 
> fine-grained ones ICA+FIX is providing. But it seems you are working on a 
> manuscript that addresses some of these points. Looking forward to that then!
> 
> Cheers,
> Maarten
> 
> 
> 
> On Thu, Dec 10, 2015 at 3:39 AM, Glasser, Matthew  > wrote:
> 1) Greg I covered this in my comments on your manuscript.  For the benefit
> of others: it is not the case that ICA+FIX would replace using WM and CSF
> timecourses.  The reason is that ICA+FIX regresses noise components out of
> the data in a "non aggressive" approach (computing betas for all
> components and then subtracting only the data explained by the noise
> components).  This is different from how the movement parameters are
> removed (where all of the variance explained by those timeseries is
> aggressively regressed out).  If you aggressively regress out WM and CSF
> timecourses, this will remove all variance explained by them, even that
> which is shared by the signal components and would not be removed in
> ICA+FIX.  It is important that your WM and CSF regressors are kept well
> away from the grey matter, as you do not want to be doing GSR "light" (or
> any GSR/MGTR for that matter).  I have investigated whether removing WM or
> CSF regressors is helpful.  For WM, it really doesn't make much of a
> difference unless there are MR acquisition artifacts (e.g. some receive
> coil elements stopped working during the scan).  One of the reasons we
> know there aren't global movement related effects being left in the HCP
> data is that regressing WM out doesn't do much (if there were, one would
> expect them to be picked up by the WM timeseries).  CSF may be more
> correlated with physiological noise, but it isn't a very clean regressor
> for this (nor is WM).
> 
> 2) Indeed there does appear to be global grey matter physiological noise
> which we need to separate from global neural signal.  We are working on
> ways to do this (it is likely that we will want to investigate both
> external approaches like regressing physiological regressors out and
> internal approaches that attempt to separate physiological noise from the
> data itself).  It would certainly be helpful if the HCP's physiological
> noise regressors were preprocessed for heart rate and respiratory end
> tidal volume traces and those regressors were released publicly.
> 
> 3) I don't understand this.
> 
> Peace,
> 
> Matt.
> 
> On 12/9/15, 5:39 PM, "hcp-users-boun...@humanconnectome.org on behalf of Greg Burgess" <gcburg...@gmail.com> wrote:
> 
> >Just a few comments:
> >
> >1) I agree with Maarten that FIX-denoising is effectively removing WM and
> >CSF components from the FIX timeseries data. My understanding is that
> >Matt Glasser has evaluated the incremental benefit for regressing the
> >(average) WM and CSF timeseries and concluded that there is no additional
> >benefit after FIX. I don't believe that anyone has tested whether
> >additional WM and CSF components (such as implemented in CompCor) can
> >remove noise variance above and beyond FIX, though, in theory, FIX should
> >capture those as well.
> >
> >2) Some analyses that I have conducted suggest that FIX denoising may
> >leave behind some proportion of physiological noise, especially that
> >which is more globally-distributed across gray matter. I am hoping to
> >investigate whether physiological regressors can remove that additional
> >proportion.
> >
> >3) It is probably important to regress physiological regressors, motion
> >regressors, and FIX noise regressors simultaneously from the 

Re: [HCP-Users] Phase Encoding left-to-right and right-to-left

2015-11-25 Thread Stephen Smith
Hi Michael - for raw covariances - summing covariances is equivalent to temp 
concat first.
Cheers
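
That equivalence is easy to check numerically; a minimal MATLAB sketch (illustrative only; 'X1' and 'X2' are assumed names for two runs' node timeseries, time x nodes, each demeaned per run):

  Ccat = [X1; X2]' * [X1; X2];   % unnormalised covariance of the temporally concatenated data
  Csum = X1'*X1 + X2'*X2;        % sum of the per-run unnormalised covariances
  % Ccat and Csum are identical (up to numerical precision).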



> On 25 Nov 2015, at 17:29, Harms, Michael <mha...@wustl.edu> wrote:
> 
> 
> To Steve's point and the issue of memory:  A critical distinction is whether 
> you are intending to work with dense connectomes or parcellated connectomes.  
> In the context of parcellated connectomes, both Steve and myself have found a 
> small advantage in reproducibility if you compute a parcellated "netmat" for 
> each resting state run, convert those using r-to-z, and then average those 
> across the 4 resting state runs for a subject (if you want as output a single 
> parcellated netmat per subject).  In fact, that is how the netmats that are 
> distributed as part of the "PTN" from ConnectomeDB were themselves created.
> 
> In the context of dense connectomes, generating a dense connectome per run is 
> a different sort of beast.  You can do it (I've done it) using 
> -cifti-correlation and then average with -cifti-average. But to my knowledge, 
> no one has looked at whether there is a small reproducibility advantage to 
> that approach as well with dense connectomes, which is why I think that most 
> subject specific dense connectomes have probably been created via the 
> 'concat' approach outlined on the wiki.
> 
> cheers,
> -MH
> 
> -- 
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.  Tel: 314-747-6173
> St. Louis, MO  63110  Email: mha...@wustl.edu
> 
> From: Timothy Coalson <tsc...@mst.edu <mailto:tsc...@mst.edu>>
> Date: Tuesday, November 24, 2015 4:23 PM
> To: Greg Burgess <gcburg...@gmail.com <mailto:gcburg...@gmail.com>>
> Cc: "Elam, Jennifer" <e...@wustl.edu <mailto:e...@wustl.edu>>, 
> "hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>" 
> <hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>>
> Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left
> 
> If you are using -cifti-correlation, there is a -mem-limit option for this 
> purpose, so there isn't a required minimum memory to do it (even in matlab 
> where everything has to be in memory, the 90k x 4.8k timeseries input pales 
> in comparison to the 90k x 90k output).  If you are doing everything in 
> matlab, then the averaging of two 90k x 90k dconns is going to require more 
> memory than any reasonable concatenated correlation.
> 
> The -cifti-average command should use almost no memory regardless of file 
> size, as long as you don't overwrite one of the inputs with the output.
> 
> Tim
> 
> 
> On Tue, Nov 24, 2015 at 2:23 PM, Greg Burgess <gcburg...@gmail.com 
> <mailto:gcburg...@gmail.com>> wrote:
>> I suggested below that Joelle could average Fisher’s z-transformed 
>> correlation coefficients (derived from each run within-subject), or treat 
>> the multiple runs as within-subjects repeated measures.
>> 
>> The idea was that computing correlations between timeseries with 4800 time 
>> points will take four times as much RAM as using only 1200 time points. For 
>> folks with limited RAM, averaging the correlation estimates may be a more 
>> feasible option.
>> 
>> --Greg
>> 
>> ____
>> Greg Burgess, Ph.D.
>> Staff Scientist, Human Connectome Project
>> Washington University School of Medicine
>> Department of Anatomy and Neurobiology
>> Phone: 314-362-7864 
>> Email: gburg...@wustl.edu <mailto:gburg...@wustl.edu>
>> 
>> > On Nov 24, 2015, at 1:08 PM, Stephen Smith <st...@fmrib.ox.ac.uk 
>> > <mailto:st...@fmrib.ox.ac.uk>> wrote:
>> >
>> > I think maybe we need to be explicit about exactly what we're talking 
>> > about averaging?
>> > Cheers.
>> >
>> > 
>> > Stephen M. Smith,  Professor of Biomedical Engineering
>> > Head of Analysis,   Oxford University FMRIB Centre
>> >
>> > FMRIB, JR Hospital, Headington,
>> > Oxford. OX3 9 DU, UK
>> > +44 (0) 1865 222726 <tel:%2B44%20%280%29%201865%20222726>  (fax 222717)
>> > st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>
>> > http://www.fmrib.ox.ac.uk/~steve <http://www.fmrib.ox.ac.uk/~steve>
>> > --
>> >
>> >> On 24 Nov 2015, at 19:03, Greg Burgess <gcburg...@g

Re: [HCP-Users] phase encoding and group ICA

2015-11-24 Thread Stephen Smith
Hi - 

Wrt group-ICA, it makes no difference what order the data was fed into the MIGP 
group-PCA.

Wrt the 4 chunks combined into the node timeseries in the 500-subject PTN 
release:  the order is always the following, so you will need to take into 
account the information you pasted below if you want to know about how this 
interacts with session orderings.

ff{1}=sprintf('%s/%d/RESOURCES/rfMRI_REST1_LR_FIX/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);
ff{2}=sprintf('%s/%d/RESOURCES/rfMRI_REST1_RL_FIX/rfMRI_REST1_RL/rfMRI_REST1_RL_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);
ff{3}=sprintf('%s/%d/RESOURCES/rfMRI_REST2_LR_FIX/rfMRI_REST2_LR/rfMRI_REST2_LR_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);
ff{4}=sprintf('%s/%d/RESOURCES/rfMRI_REST2_RL_FIX/rfMRI_REST2_RL/rfMRI_REST2_RL_Atlas_hp2000_clean.dtseries.nii',SUBJECTS,subID);

Cheers, Steve.

ps - for the 900 PTN release we will be releasing the code used to generate it.




> On 25 Nov 2015, at 01:17, Mary Beth  wrote:
> 
> The subject-specific parcel timeseries were provided in a single text file 
> for each model order with 4800 time points each. I am trying to figure out if 
> time points 2401-3600 from those text files always correspond to the LR 
> session from Day 2, or if they correspond to the RL session from Day 2 for 
> the handful of subjects collected before  1 October 2012 and the LR session 
> from Day 2 for everyone else. I just want to make sure that I label 
> everything correctly.
> 
> I hope this clarifies my question. Thanks again for your help.
> 
> -mb
> 
> 
> On Tue, Nov 24, 2015, 19:29 Glasser, Matthew  > wrote:
> I’m still not following why it matters.
> 
> Peace,
> 
> Matt.
> 
> From: Mary Beth >
> Date: Tuesday, November 24, 2015 at 6:18 PM
> To: Matt Glasser >, 
> "hcp-users@humanconnectome.org " 
> >
> Subject: Re: [HCP-Users] phase encoding and group ICA
> 
> I just want to make sure I'm comparing apples to apples when I use the 
> subject-specific timeseries. 
> 
> best,
> mb
> 
> On Tue, Nov 24, 2015 at 7:11 PM Glasser, Matthew  > wrote:
> Why does the order of the individual subject sessions matter for group ICA or 
> PTNs?
> 
> Peace,
> 
> Matt.
> 
> From:  > on behalf of Mary Beth 
> >
> Date: Tuesday, November 24, 2015 at 5:59 PM
> To: "hcp-users@humanconnectome.org " 
> >
> Subject: [HCP-Users] phase encoding and group ICA
> 
> Hi all,
> 
> According to this page 
> http://www.humanconnectome.org/documentation/Q1/data-in-this-release.html 
> , 
> prior to 1 October 2012, the first resting state session of each visit was 
> acquired with RL phase encoding, and the second session was acquired with LR 
> phase encoding (RL/LR).  After this date, the first visit continued to be 
> acquired in the RL/LR order, but the second visit was acquired in the 
> opposite order, with the LR acquisition followed by the RL acquisition 
> (LR/RL). 
> 
> Here's my question: prior to group ICA and the generation of subject-specific 
> sets of node timeseries in the August, 2014 “HCP500-PTN” 
> (Parcellation+Timeseries+Netmats) data release, were the sessions for 
> subjects acquired before 1 October 2012 reordered to match the order of 
> sessions for subjects acquired after 1 October 2012?
> 
> Thanks in advance,
> Mary Beth
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 
> 
>  
> 
> The materials in this message are private and may contain Protected 
> Healthcare Information or other information of a sensitive nature. If you are 
> not the intended recipient, be advised that any unauthorized use, disclosure, 
> copying or the taking of any action in reliance on the contents of this 
> information is strictly prohibited. If you have received this email in error, 
> please immediately notify the sender via telephone or return mail.
> 
> 
>  
> 
> The materials in this message are private and may contain Protected 
> Healthcare Information or other information of a sensitive nature. If you are 
> not the intended recipient, be advised that any unauthorized use, disclosure, 
> copying or the taking of any action in reliance on the contents of this 
> information is strictly 

Re: [HCP-Users] Phase Encoding left-to-right and right-to-left

2015-11-24 Thread Stephen Smith
I think maybe we need to be explicit about exactly what we're talking about 
averaging?
Cheers. 


Stephen M. Smith,  Professor of Biomedical Engineering
Head of Analysis,   Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington,
Oxford. OX3 9 DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve
--

> On 24 Nov 2015, at 19:03, Greg Burgess  wrote:
> 
> It’s less RAM-intensive since you only need to load one timeseries at a time.
> 
> --Greg
> 
> 
> Greg Burgess, Ph.D.
> Staff Scientist, Human Connectome Project
> Washington University School of Medicine
> Department of Anatomy and Neurobiology
> Phone: 314-362-7864
> Email: gburg...@wustl.edu
> 
>> On Nov 24, 2015, at 12:29 PM, Glasser, Matthew  wrote:
>> 
>> I'm not sure what benefit you'd get from averaging the FCs across runs 
>> within a subject.  That just sounds more computationally intensive.
>> 
>> Peace,
>> 
>> Matt.
>> 
>> 
>> From: Joelle Zimmermann 
>> Sent: Tuesday, November 24, 2015 11:55 AM
>> To: Glasser, Matthew
>> Cc: Greg Burgess; Elam, Jennifer; hcp-users@humanconnectome.org
>> Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left
>> 
>> Hi Matt,
>> 
>> Glad you do point that out, because I was previously looking at the Resting 
>> State fMRI 1 Preprocessed, but the Resting State fMRI FIX-Denoised (Compact) 
>> is readily available. So I guess for that all I'll need to do is demean and 
>> variance normalize, and/or average the two FCs.
>> 
>> Thanks,
>> Joelle
>> 
>> On Tue, Nov 24, 2015 at 12:19 PM, Glasser, Matthew  
>> wrote:
>> Indeed I was assuming you were using FIX cleaned data.  I wouldn't recommend 
>> not using FIX cleaned data unless you are testing other clean up approaches.
>> 
>> Peace,
>> 
>> Matt.
>> 
>> From: Joelle Zimmermann [joelle.t.zimmerm...@gmail.com]
>> Sent: Tuesday, November 24, 2015 11:14 AM
>> To: Greg Burgess
>> Cc: Glasser, Matthew; Elam, Jennifer; hcp-users@humanconnectome.org
>> Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left
>> 
>> Hi Greg,
>> 
>> Thanks for your response. Indeed, I was considering that myself, to compute 
>> the FCs separately and average the LR and RL.
>> 
>> Thanks,
>> Joelle
>> 
>> On Tue, Nov 24, 2015 at 11:56 AM, Greg Burgess 
>> > wrote:
>> Hi Joelle,
>> 
>> In addition to demeaning and possibly variance normalization, it is probably 
>> a good idea to detrend each run separately using a linear detrend or a high 
>> pass filter before concatenation. (FIX-preprocessed data already includes a 
>> 2000s high pass filter.)
>> 
>> Another option that is not described on the wiki (yet) is to compute 
>> correlations separately for each run, and then average the Fisher’s 
>> z-transformed correlation coefficients, or treat the multiple runs as 
>> within-subjects repeated measures.
>> 
>> --Greg
>> 
>> 
>> Greg Burgess, Ph.D.
>> Staff Scientist, Human Connectome Project
>> Washington University School of Medicine
>> Department of Anatomy and Neurobiology
>> Phone: 314-362-7864
>> Email: gburg...@wustl.edu
>> 
>>> On Nov 23, 2015, at 4:40 PM, Glasser, Matthew 
>>> > wrote:
>>> 
>>> It doesn’t matter what order you concatenate the data in, but I would not 
>>> recommend only analyzing the data of one phase encoding direction.
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> 
>>> From: 
>>> >
>>>  on behalf of Joelle Zimmermann 
>>> >
>>> Date: Monday, November 23, 2015 at 12:48 PM
>>> To: "Elam, Jennifer" >
>>> Cc: "hcp-users@humanconnectome.org" 
>>> >
>>> Subject: Re: [HCP-Users] Phase Encoding left-to-right and right-to-left
>>> 
>>> Hi Jennifer and Matt,
>>> 
>>> Thanks for your help. I have a few clarification questions below:
>>> Does it matter in which order I concatenate the LR and the RL .nii's? My 
>>> ultimate goal is to create a functional connectivity matrix from the time 
>>> series.
>>> #3 in the link you sent describes that there are 4 runs per subject. Is 
>>> this the REST 1, and REST 2, each with LR and RL phase encoding directions?
>>> Would using only one phase encoding direction (i.e. do analysis on LR) 
>>> expect to effect the results?
>>> 
>>> Thanks,
>>> Joelle
>>> 
 On Mon, Nov 23, 2015 at 1:17 PM, Jennifer Elam 
 > 

Re: [HCP-Users] A question about R820 MSM-All registered dense connectome

2016-06-04 Thread Stephen Smith
Hi - we used the 820 subjects with the absolutely complete set of timepoints 
from all 4 runs, for the group-average dense connectome and group-ICA stuff.
Cheers


> On 3 Jun 2016, at 19:16, Aaron C  wrote:
> 
> I checked the number of subjects which completed all four resting-state 
> scans, but found this number was 832 instead of 820. Therefore, I cannot 
> figure out which subjects were used for generating R820 MSM-All registered 
> dense connectome of group-averaged functional connectivity file 
> “HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii”. Could anyone 
> please help me with my question? Thank you.
>  
> From: aaroncr...@outlook.com 
> To: hcp-users@humanconnectome.org 
> Subject: A question about R820 MSM-All registered dense connectome
> Date: Mon, 30 May 2016 21:03:33 -0400
> 
> Dear HCP experts,
> 
> Where could I find the subject ID number list of 820 subjects used for 
> generating R820 MSM-All registered dense connectome of group-averaged 
> functional connectivity “HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr”? 
> Thank you.
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] HCP S900 - PTN data release

2016-01-16 Thread Stephen Smith
Hi - I agree that is a bit confusing - actually the PDF is linked directly from 
the main 900 data release page (hover over the "?") - the link is
http://humanconnectome.org/documentation/S900/HCP900_GroupICA+NodeTS+Netmats_Summary_15dec2015.pdf
 

Cheers



> On 16 Jan 2016, at 08:32, Bagrat Amirikian  wrote:
> 
> Hi,
>  
> HCP S900 – PTN data release announcement:
>  
> http://humanconnectome.org/about/pressroom/project-news/announcing-release-of-s900-ptn-and-other-group-average-data/
>  
> 
>  
> claims that ‘Additional information is provided in the pdf 
> (HCP900_GroupICA+NodeTS+Netmats_Sumary_15dec2015.pdf) included in the 
> download.’  However, this PDF seems missing in the download package.
>  
> Where can we get this additional info?
>  
> Bagrat
>  
> Bagrat Amirikian, Ph.D.
>  
> Department of Neuroscience
> University of Minnesota Medical School
>  
>  Brain Sciences Center
> Minneapolis Veterans Affairs Health Care System
>  
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Converting FA maps to CIFTI

2016-01-31 Thread Stephen Smith
Hi - so you have estimated FA, MD, etc. in grey matter? 
Note that CIFTI doesn't cover white matter.
Cheers.



> On 1 Feb 2016, at 06:01, Georg Kerbler  wrote:
> 
> Hi,
> 
> I would like to convert FA/MD/AD/etc. images which I calculated in T1w space, 
> into CIFTI's, in order to correlate/overlay them with e.g. beta-maps from 
> functional tasks.
> 
> What is the best way to do this?
> 
> Thanks for your help,
> Best,
> Georg
> 
> P.s.: So far I used 'wb_command -cifti-convert -from-nifti  
>  '; however opening the resulting image in wb_view 
> doesn't display anything.
> (Prior to using the command I transformed the DTI images into MNInonlinear 
> space - by registering them to the T1 in the MNInonlinear folder and then 
> warping the images)
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] "good" vs. "bad" components in ICA-FIX denoised rsfMRI data

2016-03-09 Thread Stephen Smith
Hi

> On 9 Mar 2016, at 15:24, Harms, Michael <mha...@wustl.edu> wrote:
> 
> 
> Hi Steve,
> I’m not seeing a file by that name in the HCP FIX processing.

apologies - I was looking at an example folder from when we trained FIX; 
that's the hand-training output, which wasn't what was asked for here - sorry!

> 
> In the 900 subject release data, there should be files in the 
> rfMRI_REST?_{LR,RL}_hp2000.ica folders called Noise.txt and Signal.txt that 
> contain the classifications.  But those files are created as part of a 
> “PostFix” script, rather than the initial FIX script itself.  And, I don’t 
> think those particular files were created as part of the 500 subject release 
> packages.
> 
> However, if you are using 500 subject release data, you can find the noise 
> components in the rfMRI_REST?_{LR,RL}_hp2000.ica/.fix file (note that it is 
> dot fix, so unfortunately that info is a bit hidden in the 500 subject 
> release data).  The Signal components are then all the other components not 
> specified in the .fix file.
> 
> cheers,
> -MH
> 
> -- 
> Michael Harms, Ph.D.
> 
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.
> Tel: 314-747-6173
> St. Louis, MO  63110
> Email: mha...@wustl.edu
> 
> 
> 
> On 3/9/16, 2:10 AM, "hcp-users-boun...@humanconnectome.org on behalf of Stephen Smith" 
> <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>> wrote:
> 
> Hi - it's the same as in standard FIX usage - the file listing bad components 
> is called hand_labels_noise.txt - inside the ICA output folder.
> Cheers
> 
> 
>> On 9 Mar 2016, at 03:25, Ely, Benjamin <benjamin@mssm.edu 
>> <mailto:benjamin@mssm.edu>> wrote:
>> Hi all,
>> I’m interested in running some analyses (particularly smoothness 
>> estimations) on the noise components identified by ICA-FIX. I’ve looked 
>> through several subjects’ ICA folders (e.g., 
>> rfMRI_REST1_LR_hp2000.ica/filtered_func_data.ica/), which contain a number 
>> of output stats and nifti files, but I can’t find an indication of which 
>> components were classified as “good” and “bad”. Is this information 
>> available?
>> Thanks!
>> -Ely
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>
> 
> ---
> Stephen M. Smith, Professor of Biomedical Engineering
> Head of Analysis,  Oxford University FMRIB Centre
> 
> FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
> +44 (0) 1865 222726  (fax 222717)
> st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>
> http://www.fmrib.ox.ac.uk/~steve <http://www.fmrib.ox.ac.uk/~steve>
> ---
> 
> Stop the cultural destruction of Tibet
> 
> 
> 
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>
> 
> 
>  
> The materials in this message are private and may contain Protected 
> Healthcare Information or other information of a sensitive nature. If you are 
> not the intended recipient, be advised that any unauthorized use, disclosure, 
> copying or the taking of any action in reliance on the contents of this 
> information is strictly prohibited. If you have received this email in error, 
> please immediately notify the sender via telephone or return mail.


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve <http://www.fmrib.ox.ac.uk/~steve>
---

Stop the cultural destruction of Tibet <http://smithinks.net/>






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Negative Voxel Values signed int 16 vs uint16 issue

2016-03-03 Thread Stephen Smith
Hi - yes it may depend on the context/pipeline but for simplicity and safety 
pretty much everything we do just works in floats.
Cheers
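
One reassuring detail on the int16/uint16-to-float question (a quick MATLAB check, offered as an aside): float32 has a 24-bit significand, so every 16-bit integer value is represented exactly and the conversion loses nothing:

  v = -32768:65535;                         % covers the full int16 and uint16 value ranges
  assert(isequal(double(single(v)), v));    % round-trip through float32 is lossless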

> On 3 Mar 2016, at 21:18, Harms, Michael  wrote:
> 
> 
> Maybe Steve or MJ can chime in regarding the FSL issues, but I do recall 
> there being some concern years ago whether ‘FSL’ was fully compliant with 
> UINT16 NIFTI’s.
> 
> Regardless, in terms of what we are currently using, it is FLOAT32.
> 
> -- 
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.  Tel: 314-747-6173
> St. Louis, MO  63110  Email: mha...@wustl.edu
> 
> From:  > on behalf of Timothy Coalson 
> >
> Date: Thursday, March 3, 2016 at 3:14 PM
> To: "Garrett T. McGrath"  >
> Cc: "hcp-users@humanconnectome.org " 
> >
> Subject: Re: [HCP-Users] Negative Voxel Values signed int 16 vs uint16 issue
> 
> NiFTI-1.1 does in fact support both signed and unsigned int16:
> 
> #define DT_INT16   4
> ...
> #define DT_UINT16  512 /* unsigned short (16 bits) */
> 
> I don't know if this was the case in NiFTI-1.0, but NiFTI-1.1 has been around 
> so long it would be strange if tools didn't support it, but did support 1.0.  
> If dcm2nii or MRICron is treating them incorrectly, then you should contact 
> the tool's authors.  MRICron isn't reading it as analyze format, is it?  The 
> old analyze format doesn't support uint16.
> 
> I don't know whether topup cares what the input datatype is.  float32 can 
> exactly represent any value of int16 or uint16, so it seems unlikely that 
> conversion to float32 would cause problems (unless the data scaling and 
> offset do something unusual...).
> 
> Tim
> 
> 
> On Thu, Mar 3, 2016 at 2:42 PM, Garrett T. McGrath  > wrote:
>> We've had a set of users that have discovered an issue with their latest 
>> datasets that I'm hoping there might be a best practice for dealing with in 
>> regards to FSL.
>> 
>> I've received some specific context from the end users on what they are 
>> capturing:
>> Users are collecting single-band spin echo EPI images in opposing phase 
>> encode directions with the CMRR sequence (cmrr_mbep2d_se) on our 3T Prisma 
>> to provide means of correcting for susceptibility distortions via FSL’s 
>> ‘TOPUP’ tool. Their protocol parameters closely follow what is specified in 
>> the HCP Q3 Release Appendix I. The dicom output of this data is UINT16 which 
>> isn’t supported by nifti 1.0 format. Is there a recommended solution that 
>> would play nice with FSL (TOPUP in particular) ? With the dcm2nii conversion 
>> tool, we can either go ahead and set the data type to UINT16 or set it to 
>> float32, but is there a specific recommendation from your end?
>> 
>> I've verified the values using the MRICron tool to visually inspect values 
>> and they are in fact overflowing into negative numbers so we have a few 
>> options: Force uint16 (out of standard), force Float32 (in standard but 
>> larger, slower, and brings along the baggage of float numbers), divide all 
>> intensities by 2 (lossy, I don't regard it as an acceptable solution for raw 
>> data), or fall back on dicom files (I'm not clear if this is supported by 
>> FSL).
>> 
>> Any suggestions or feedback would be appreciated.
>> 
>> Garrett McGrath
>> HPC and Research Computing
>> Princeton Neuroscience Institute, Princeton University
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org 
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 
> 
>  
> The materials in this message are private and may contain Protected 
> Healthcare Information or other information of a sensitive nature. If you are 
> not the intended recipient, be advised that any unauthorized use, disclosure, 
> copying or the taking of any action in reliance on the contents of this 
> information is strictly prohibited. If you have received this email in error, 
> please immediately notify the sender via telephone or return mail.
> ___
> HCP-Users mailing list
> 

Re: [HCP-Users] "good" vs. "bad" components in ICA-FIX denoised rsfMRI data

2016-03-09 Thread Stephen Smith
Hi - it's the same as in standard FIX usage - the file listing bad components 
is called hand_labels_noise.txt - inside the ICA output folder.
Cheers


> On 9 Mar 2016, at 03:25, Ely, Benjamin  wrote:
> 
> Hi all,
> 
> I’m interested in running some analyses (particularly smoothness estimations) 
> on the noise components identified by ICA-FIX. I’ve looked through several 
> subjects’ ICA folders (e.g., 
> rfMRI_REST1_LR_hp2000.ica/filtered_func_data.ica/), which contain a number of 
> output stats and nifti files, but I can’t find an indication of which 
> components were classified as “good” and “bad”. Is this information available?
> 
> Thanks!
> -Ely
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve
---

Stop the cultural destruction of Tibet






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Extracting ROI data from HCP resting state data - 2400 data points instead of 1200 ?

2016-07-12 Thread Stephen Smith
Hi - no we do not (in general for resting-state) ever recommend temporal 
concatenation like this before further analyses - for the reason you're seeing 
here.
For example, for the HCP released netmats, we take the 4 runs, one at a time, 
estimate the 4 (zstat) netmats, and average those.
Cheers.
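
A minimal MATLAB sketch of that per-run approach (a simplified illustration, not the released HCP script; 'runTS' is an assumed cell array of cleaned node timeseries, time x nodes, one per run):

  nNodes = size(runTS{1}, 2);
  z = zeros(nNodes, nNodes, 4);
  for i = 1:4
      r = corrcoef(runTS{i});          % full-correlation netmat for run i
      r(eye(nNodes) == 1) = 0;         % zero the diagonal so the transform stays finite
      z(:,:,i) = atanh(r);             % Fisher r-to-z
  end
  netmat = mean(z, 3);                 % average the four z-transformed netmats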




> On 12 Jul 2016, at 09:43, David Hofmann  wrote:
> 
> Hi Michael,
> 
> thanks for the reply, using a different routine works and shows 1200 volumes. 
> But now it seems that in some data (extracted ROI mean) there is a huge 
> difference between LR and RL phase encoding in the signal (see attached 
> picture). Is this "normal" and can I just concatenate LR and RL together or 
> is this not possible?
> 
> greetings
> 
> David 
> 
> 2016-07-11 19:43 GMT+02:00 Harms, Michael  >:
> 
> Hi,
> Can you check the number of volumes/frames of the unpacked 
> REST1_{LR,RL}.nii.gz files using something other than your Matlab/SPM tools?  
> e.g., FSL’s ‘fslhd’ or ‘fslnvols’ commands.
> 
> cheers,
> -MH
> 
> -- 
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.  Tel: 314-747-6173 
> St. Louis, MO  63110  Email: mha...@wustl.edu 
> 
> From:  > on behalf of David Hofmann 
> >
> Date: Monday, July 11, 2016 at 3:15 AM
> To: "Dierker, Donna" >
> Cc: hcp-users  >
> Subject: Re: [HCP-Users] Extracting ROI data from HCP resting state data - 
> 2400 data points instead of 1200 ?
> 
> Hi Donna and others,
> 
> thanks for your answer. I'm facing a difficulty with extracting data from the 
> preprocessed files, that is they seems to each contain 2400 data points 
> rather than 1200 like described in the documentation. 
> 
> I downloaded the 10 subjects data set and used the following files: 
> subjectcode_3T_rfMRI_REST1_preproc.zip, from which I assume that these are 
> the preprocessed files.
> 
> It contains two datasets LR and RL: 
> 
> \MNINonLinear\Results\rfMRI_REST1_LR
> \MNINonLinear\Results\rfMRI_REST1_RL
> 
> I unpacked these files: 
> 
> rfMRI_REST1_LR.nii.gz
> rfMRI_REST1_RL.nii.gz
> 
> and read them as 4D NIFTI with Matlab and an SPM function. Afterwards they 
> each contain 2400 data points (dimension: 91 109 91 2400), but in the 
> documention it says they each should contain only 1200 data points. So I'm 
> not sure if I did something wrong.
> 
> greetings
> 
> David
> 
> 
> 2016-06-30 18:30 GMT+02:00 Dierker, Donna  >:
> Hi David,
> 
> I hope this publication answers your questions about HCP rfMRI preprocessing:
> 
> Resting-state fMRI in the Human Connectome Project.
> Smith SM1, Beckmann CF, Andersson J, Auerbach EJ, Bijsterbosch J, Douaud G, 
> Duff E, Feinberg DA, Griffanti L, Harms MP, Kelly M, Laumann T, Miller KL, 
> Moeller S, Petersen S, Power J, Salimi-Khorshidi G, Snyder AZ, Vu AT, 
> Woolrich MW, Xu J, Yacoub E, Uğurbil K, Van Essen DC, Glasser MF; WU-Minn HCP 
> Consortium.
> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3720828/ 
> 
> 
> I am only used to seeing what is in the fix extended packages, so I'm not 
> sure all these volumes are in the basic fix packages, but here are NIFTI 
> volumes in a sample subject's rfMRI subdirectories:
> 
> 177645/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_hp2000_clean.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_hp2000.ica/filtered_func_data.ica/mask.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_hp2000.ica/filtered_func_data.ica/melodic_IC.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_hp2000.ica/filtered_func_data.ica/melodic_oIC.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_RL/rfMRI_REST1_RL_hp2000_clean.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_RL/rfMRI_REST1_RL_hp2000.ica/filtered_func_data.ica/mask.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_RL/rfMRI_REST1_RL_hp2000.ica/filtered_func_data.ica/melodic_IC.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_RL/rfMRI_REST1_RL_hp2000.ica/filtered_func_data.ica/melodic_oIC.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST1_RL/rfMRI_REST1_RL.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST2_LR/rfMRI_REST2_LR_hp2000_clean.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST2_LR/rfMRI_REST2_LR_hp2000.ica/filtered_func_data.ica/mask.nii.gz
> 177645/MNINonLinear/Results/rfMRI_REST2_LR/rfMRI_REST2_LR_hp2000.ica/filtered_func_data.ica/melodic_IC.nii.gz
> 

Re: [HCP-Users] Exact Meaning of ICA maps in 800 subjects release

2016-07-14 Thread Stephen Smith
Hi  [comment also for Jenn below]

Sorry, we didn't calculate z-stat versions of the volumetric group-average 
maps.  The volumetric maps are really just intended as a useful visual 
reference - the CIFTI versions are the "real thing".

-

BTW - I think that you are not asking about subject-specific RSN maps - but for 
completeness: just the CIFTI z-stat versions of those are available online at 
HCP website.
Jenn - I've just noticed that the descriptions of these 3 recent packages are 
totally wrong - that's probably my fault sorry - please could you change all of 
the following text from:

The following links contain volumetric NIFTI versions of the CIFTI MSMall 
Group-ICA parcellations, with various ICA dimensionalities applied.

 Volumetric Parcellations for 10-, 25-, 50-, 100-dimensionalities (57GB) 
<https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d15_25_50_100.zip>
 Volumetric Parcellations for 200-dimensionality (55GB) 
<https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d200.zip>
 Volumetric Parcellations for 300-dimensionality (85GB) 
<https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d300.zip>
to being:

The following links contain subject-specific CIFTI maps: subject-specific 
versions of the group-ICA parcellations, with various ICA dimensionalities 
applied. These are z-statistic maps generated using dual-regression.

 CIFTI subject-specific Parcellations for 10-, 25-, 50-, 100-dimensionalities 
(57GB) 
<https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d15_25_50_100.zip>
 CIFTI subject-specific Parcellations for 200-dimensionality (55GB) 
<https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d200.zip>
 CIFTI subject-specific Parcellations for 300-dimensionality (85GB) 
<https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d300.zip>

Thanks!





> On 14 Jul 2016, at 06:49, Nicola Toschi <tos...@med.uniroma2.it> wrote:
> 
> Dear Prof. Smith,
> Dear List,
> 
> one more question: would the Z-maps related to the second-last step 
> (dual regression to create volumetric representations of the ICs) be available 
> (they are not in the public release)?
> 
> This would greatly aid in interpreting/threshold the volumetric betas.
> 
> Thanks a lot in advance,
> 
> Nicola
> 
> On 07/06/2016 10:09 AM, Stephen Smith wrote:
>> Hi
>> 
>> Group-ICA is carried out in grayordinate space. Then:
>> 
>> For each subject, subject-wise grayordinate AND volumetric maps (versions of 
>> the group-ICA) are estimated using dual-regression, with node-timeseries 
>> normalisation, and outputs in units of GLM betas (parameter estimates) not 
>> Z.   
>> 
>> These volumetric maps are then averaged across subjects to generate the 
>> group-average volumetric maps distributed from HCP.
>> 
>> So how you threshold these is up to you and depends what the intended usage 
>> for that is...
>> 
>> Cheers.
>> 
>> 
>> 
>>> On 6 Jul 2016, at 07:26, Nicola Toschi < 
>>> <mailto:tos...@med.uniroma2.it>tos...@med.uniroma2.it 
>>> <mailto:tos...@med.uniroma2.it>> wrote:
>>> 
>>> Hi, 
>>> 
>>> The volumetric version!
>>> 
>>> Thanks in advance, 
>>> 
>>> Nicola
>>> 
>>> 
>>>  Original message 
>>> From: Stephen Smith <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>> 
>>> Date: 04/07/2016 12:37 PM (GMT+01:00) 
>>> To: Nicola Toschi <tos...@med.uniroma2.it <mailto:tos...@med.uniroma2.it>> 
>>> Cc: hcp-users <hcp-users@humanconnectome.org 
>>> <mailto:hcp-users@humanconnectome.org>> 
>>> Subject: Re: [HCP-Users] Exact Meaning of ICA maps in 800 subjects release 
>>> 
>>> HI - do you mean the grayordinate or volumetric versions?  
>>> Cheers
>>> 
>>> 
>>> 
>>>> On 4 Jul 2016, at 11:16, Nicola Toschi <tos...@med.uniroma2.it 
>>>> <mailto:tos...@med.uniroma2.it>> wrote:
>>>> 
>>>> Hi list, 
>>>> 
>>>> This may be more of a MELODIC question, but I would be grateful for any 
>>>> input:
>>>> 
>>>> In the ICA release on 800+ subjects, what is the exact 

Re: [HCP-Users] HCP script used for generating group-averaged dense connectome

2016-07-17 Thread Stephen Smith
Hi - it *should* be part of the scripts available as part of the big PTN 
download. 
( HCP900 Parcellation + Timeseries + Netmats (820 Subjects) )
 Let me know if it's not in there.
Cheers.



> On 17 Jul 2016, at 06:10, Aaron C  wrote:
> 
> Dear HCP experts,
> 
> Could I find the shell or MATLAB script used for generating group-averaged 
> dense connectome 
> ("HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii”) somewhere 
> online? Thank you.
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Exact Meaning of ICA maps in 800 subjects release

2016-07-06 Thread Stephen Smith
Hi

Group-ICA is carried out in grayordinate space. Then:

For each subject, subject-wise grayordinate AND volumetric maps (versions of 
the group-ICA) are estimated using dual-regression, with node-timeseries 
normalisation, and outputs in units of GLM betas (parameter estimates) not Z.   

These volumetric maps are then averaged across subjects to generate the 
group-average volumetric maps distributed from HCP.

So how you threshold these is up to you and depends what the intended usage for 
that is...

Cheers.
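
For anyone wanting the gist of those two dual-regression stages, a minimal MATLAB sketch (simplified - demeaning and the exact HCP options are omitted; 'D' is an assumed subject data matrix, time x grayordinates, and 'Gmaps' the group-ICA maps, grayordinates x components):

  ts    = D * pinv(Gmaps');                       % stage 1: spatial regression -> node timeseries
  ts    = ts ./ repmat(std(ts), size(ts,1), 1);   % node-timeseries normalisation
  betas = pinv(ts) * D;                           % stage 2: temporal regression -> subject maps (betas, not Z)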



> On 6 Jul 2016, at 07:26, Nicola Toschi <tos...@med.uniroma2.it> wrote:
> 
> Hi, 
> 
> The volumetric version!
> 
> Thanks in advance, 
> 
> Nicola
> 
> 
> ---- Original message 
> From: Stephen Smith <st...@fmrib.ox.ac.uk> 
> Date: 04/07/2016 12:37 PM (GMT+01:00) 
> To: Nicola Toschi <tos...@med.uniroma2.it> 
> Cc: hcp-users <hcp-users@humanconnectome.org> 
> Subject: Re: [HCP-Users] Exact Meaning of ICA maps in 800 subjects release 
> 
> HI - do you mean the grayordinate or volumetric versions?  
> Cheers
> 
> 
> 
>> On 4 Jul 2016, at 11:16, Nicola Toschi <tos...@med.uniroma2.it 
>> <mailto:tos...@med.uniroma2.it>> wrote:
>> 
>> Hi list, 
>> 
>> This may be more of a MELODIC question, but I would be grateful for any 
>> input:
>> 
>> In the ICA release on 800+ subjects, what is the exact mathematical meaning 
>> / definition of the (non-binary) intensities of the different component 
>> volumes?
>> 
>> Are they easily relatable to a statistical map (I am looking to threshold 
>> them to study main clusters and possibly correct for multiple comparisons)?
>> 
>> Thanks in advance, 
>> 
>> Nicola
>> 
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>> 
> 
> 
> ---
> Stephen M. Smith, Professor of Biomedical Engineering
> Head of Analysis,  Oxford University FMRIB Centre
> 
> FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
> +44 (0) 1865 222726  (fax 222717)
> st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>
> http://www.fmrib.ox.ac.uk/~steve <http://www.fmrib.ox.ac.uk/~steve>
> ---
> 
> Stop the cultural destruction of Tibet <http://smithinks.net/>
> 
> 
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve <http://www.fmrib.ox.ac.uk/~steve>
---

Stop the cultural destruction of Tibet <http://smithinks.net/>






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Exact Meaning of ICA maps in 800 subjects release

2016-07-04 Thread Stephen Smith
Hi - do you mean the grayordinate or volumetric versions?  
Cheers



> On 4 Jul 2016, at 11:16, Nicola Toschi  wrote:
> 
> Hi list, 
> 
> This may be more of a MELODIC question, but I would be grateful for any input:
> 
> In the ICA release on 800+ subjects, what is the exact mathematical meaning / 
> definition of the (non-binary) intensities of the different component volumes?
> 
> Are they easily relatable to a statistical map (I am looking to threshold 
> them to study main clusters and possibly correct for multiple comparisons)?
> 
> Thanks in advance, 
> 
> Nicola
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Same data / Multiple comparisons ?

2016-09-14 Thread Stephen Smith
Hi Tom

actually the MegaTrawl latest version is here:
https://db.humanconnectome.org/megatrawl/index.html 


But yes indeed - early on within the HCP we discussed options for trying to 
deal with this large-scale-multiple-comparisons-problem, but quickly agreed 
that it really wasn't our role (and wasn't practical) for us to try to "police" 
on this issue.Another possible form of "policing" would be to keep-back a 
truly-left-out sample of subjects, but there were too many ethical, practical 
and statistical problems with that.   We also (light-heartedly!) discussed 
having some kind of "multiple-comparison sin counter" that ticks up every time 
someone does a new analysis  ;-)   but as you say there was no 
straightforward solution presenting itself...

Cheers.





> On 14 Sep 2016, at 13:52, Thomas Nichols  wrote:
> 
> Dear Don, (Simon?)
> 
> I see two classes of use for the HCP data sets.
> 
> (1) The HCP participant results may be used as norms for comparison with 
> matched participants from whom we capture measures which may be compared.
> 
> (2) The HCP participant results may be used exclusively.
> 
> 
> 
> I think it is only the latter, (2), for which there is a problem although I 
> certainly could be wrong.
> 
> 
> I agree, the first doesn't have multiplicity problems (though accuracy with 
> which you can match subjects & scanner data is another concern). 
>  
> 
>  Tom, you used the scenario of a bunch of labs using the data to do one test 
> each and stated: “…I would say that requires a 'science-wide' correction 
> applied by the reader of the 250 papers. …” 
> 
> That gets at what I’m asking.
> 
> If I’m the author of one of those papers, I don’t want to be fooled or to 
> fool any of my readers with the results from my laboratory by failing to 
> correct for all the other comparisons which have been run on the same data.
> 
> 
> Yes, but the basic problem you, the individual author, face is what sort of 
> correction should you apply.  You only studied variable #132; should you do a 
> correction just for the 20 others in that domain, or all 250?  That's why I 
> think all you can do is be open, and honestly report the scope of variables 
> you considered (and if you did, e.g., search over 20 variables in a domain, 
> correct over those), and report your result.  If the reader collects your 
> result with 50 other papers they can use the appropriate level of criticism 
> for that collection, which will be different from a reader that collects 250 
> papers measures for consideration.
>  
>  If I do that now, perhaps it’s workable to take account of all the work 
> which has appeared to date to do the correction for multiple comparisons. 
> 
> But what about a laboratory which runs some other test 5 years from now? 
> 
> They must use a more stringent criterion given all the additional results 
> which have since been published. 
> 
> At some point, it will become impossible to find a reliable result.
> 
> 
> Exactly.
>  
> 
> Of course, these notions apply to reviewers and other readers too which 
> places a new level of responsibility on them compared with reading papers 
> today.
> 
> For editors and reviewers, the problem is particularly acute.
> 
> If the authors of a paper used the correction criterion suggested by their 
> isolated analysis but a ‘science-wide’ reading calls for a more stringent 
> criterion, do they bounce the paper back or accept it?
> 
>  
> 
> As you point out, Tom, there’s no simple answers to the base question, and 
> there are lots of scenarios which would be worth understanding in this 
> context.
> 
> I wonder if there are those lurking on the list who would consider thinking 
> this through and if they deem it valuable, lay it out formally as a letter or 
> a paper for all of us.
> 
> Those who are most directly involved with the HCP likely have thought about 
> it already and perhaps have something.
> 
> 
> I hope others in the HCP team will chime in, but in our internal discussions 
> we could never arrive at an conclusive action.  That is, the decision was 
> made early on that this is an *open* project; hypotheses will not be recorded 
> and registered, and data kept in a lock-box, only made available to those who 
> agree to study some particular hypothesis (though note, some large scale 
> projects are run exactly like that).  
> 
> Rather, it is left up to authors to honestly report to readers the scope of 
> the variables considered.  Steve Smith's Mega Trawl 
>  
> openly acknowledges the consideration of nearly every behavioral and 
> demographic measure in the HCP.  See also the OHBM COBIDAS report 
> , which implores authors to be 
> completely transparent in variables and hypotheses considered but not 
> necessarily highlighted in a 

Re: [HCP-Users] Basis Function Selection

2016-09-23 Thread Stephen Smith
Hi - in general it's better to use smooth basis functions (eg as used by 
default in FLOBS) than "square" bases like default FIR bases.  
There's some literature on that but I can't quite remember now who published 
that - but it seems logical given that you're fitting smooth data - assuming 
(eg) jittering of events relative to TRs.
Cheers.
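
For readers less familiar with what a default "square" FIR basis looks like, a minimal MATLAB sketch for a single event type (illustrative only; 'onsets' (in seconds), 'TR' and 'nTR' are assumed inputs - FLOBS would replace these stick functions with a small set of smooth, optimised basis shapes):

  nLags   = 10;                        % number of TR-spaced FIR bins to model
  X       = zeros(nTR, nLags);         % FIR design matrix for one event type
  onsetTR = round(onsets / TR) + 1;    % onset times -> volume indices
  for lag = 0:nLags-1
      idx = onsetTR + lag;
      idx = idx(idx <= nTR);           % drop bins that fall off the end of the run
      X(idx, lag+1) = 1;               % one "square" stick regressor per post-stimulus lag
  end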




> On 22 Sep 2016, at 21:44, Michael Dreyfuss  wrote:
> 
> OK, thank you. I've read both these papers but it's been a while - do you 
> mind if I ask where specifically you see that this misattribution is 
> problematic for events in mixed designs rather than events in an event only 
> design?
> 
> I would be interested in using the FIR function. I'm assuming that using a 
> fixed function is not problematic for modeling the blocks though, right? 
> 
> Unfortunately the fsl wiki appears to be down now, so I can't see how to 
> implement FLOBS. For FIR, I'm assuming I can just modify the fsf files before 
> feat_model.
> 
> Thank you,
> Michael
> 
> On Thu, Sep 22, 2016 at 3:35 PM, Burgess, Gregory  > wrote:
> If you’re referring to what is sometimes called a “state-item” design (cf. 
> http://www.nil.wustl.edu/labs/schlaggar/Publications_files/MIxedBlockPaper_Final.pdf
>  
> ),
>  you should not use a canonical / assumed response shape. That’s because the 
> variance that is not captured by your assumed HRF can be misattributed to 
> your state / sustained regressor.
> 
> For these designs, your event-related effects should be modeled with a basis 
> set that will capture varying response shapes (e.g., FIR or FLOBS) to ensure 
> that you do not misattribute poorly-modeled activation to the sustained 
> regressor. I don’t know much about the inverse logit basis set, but you might 
> consider looking at it too (Lindquist et al. 2009). An advantage of the FIR 
> basis set is that you can easily look for interactions with “time” to test if 
> the response shape varies between regions or individuals.
> 
> Lindquist, M. A., Meng Loh, J., Atlas, L. Y., & Wager, T. D. (2009). Modeling 
> the hemodynamic response function in fMRI: efficiency, bias and mis-modeling. 
> NeuroImage, 45(1 Suppl), S187–98. 
> http://doi.org/10.1016/j.neuroimage.2008.10.065 
> 
> 
> --Greg
> 
> 
> Greg Burgess, Ph.D.
> Staff Scientist, Human Connectome Project
> Washington University School of Medicine
> Department of Psychiatry
> Phone: 314-362-7864 
> Email: gburg...@wustl.edu 
> 
> > On Sep 22, 2016, at 2:05 PM, Michael Dreyfuss  > > wrote:
> >
> > Thank you both.
> >
> > This is for our task which is actually a mixed design. I'm not too 
> > concerned about the blocks because like you say the main goal is estimating 
> > amplitude there. For the jittered events, however, I would want more 
> > flexibility in the basis function because like you said the HRF could have 
> > quite different shapes in different regions and different individuals. 
> > Regardless, the activation patterns I'm seeing seem reasonable. I'm just 
> > wondering if the double gamma is also better fitted to visual cortex and so 
> > activation there is more detectable than in other regions, and if so maybe 
> > activity in other regions would be better detected using a more flexible 
> > basis function like FLOBS of FIR. I think your explanation about proximity 
> > to the head coil may be a big part of that too, though, so I'm reluctant to 
> > assume there is a problem with using double gamma (and there is a cost to 
> > estimating the basis function everywhere too).
> >
> > I will continue to look into these other options...
> >
> > Thanks again,
> > Michael
> >
> > On Thu, Sep 22, 2016 at 2:31 PM, Burgess, Gregory 
> > > 
> > wrote:
> > Hi Michael,
> >
> > A few things:
> > 1) Matt’s point about the increased activation estimates in visual cortex 
> > is a good one. There is increased signal in occipital cortex in functional 
> > connectivity analyses that do not assume a response shape. In part, this 
> > may result from the back of the head being closer to the head coil than 
> > other brain regions (because participants are laying down).
> > 2) To the best of my knowledge, the HCP consortium has not ventured to 
> > recommend a single, ideal HRF for use in task fMRI analysis. In fact, I’d 
> > wager that most people in the consortium expect the hemodynamic response to 
> > vary across brain regions and across people in such a way that there is no 
> > single ideal canonical HRF.
> > 3) We chose the double-gamma during very early analysis of HCP pilot data. 
> > Using 2.5s TR data, the default 

Re: [HCP-Users] MELODIC denoising vs. released ICA-FIX datasets

2016-09-27 Thread Stephen Smith
Hi - it sounds like maybe it's working fine within the limits of slight 
differences in mathematical precision between the C++ vs matlab parts of the 
processing - so the main question would be - have you looked in a viewer at the 
difference image - e.g. are the voxels with large differences isolated or eg at 
the edge of the brain?

Cheers.



> On 28 Sep 2016, at 02:49, Glasser, Matthew  wrote:
> 
> I think this is more of a question for the FSL list, but I don’t know fsl_glm 
> well enough to say if what you are doing is equivalent or not.
> 
> Peace,
> 
> Matt.
> 
> From: "Ely, Benjamin" >
> Date: Tuesday, September 27, 2016 at 7:20 PM
> To: Matt Glasser >, "Burgess, 
> Gregory" >, 
> "HCP-Users@humanconnectome.org " 
> >
> Subject: Re: [HCP-Users] MELODIC denoising vs. released ICA-FIX datasets
> 
> Hi Matt and Greg,
> 
> Thanks for the feedback! I've looked at the various fix .m files from the 
> current release; based on fix_3_clean.m, I tried the following for a single 
> resting-state run:
> 
> # highpass filter; sigma of 1000.08 = FWHM of 2355 per Smith et al 2013 
> NeuroImage, also consistent with comments in the fix_3_clean.m script
> fslmaths rfMRI_REST1_LR.nii.gz -bptf 1000.08 -1 REST1LR_bp
> 
> # format movement parameters (manually corrected header after paste step, not 
> shown)
> Text2Vest Movement_Regressors.txt Movement_Regressors.mat
> Text2Vest Movement_Regressors.txt Movement_Regressors_dt.mat
> paste Movement_Regressors.mat Movement_Regressors_dt.mat > 
> Movement_Regressors_all.mat
> 
> # regress movement parameters out of timeseries and re-add mean
> fsl_glm -i REST1LR_bp.nii.gz -d Movement_Regressors_all.mat 
> --out_res=REST1LR_bp_mc_demeaned.nii.gz --demean 
> fslmaths REST1LR_bp.nii.gz -Tmean REST1LR_bp_mean
> fslmaths REST1LR_bp_mc_demeaned.nii.gz -add REST1LR_bp_mean.nii.gz 
> REST1LR_bp_mc
> 
> # regress movement parameters out of melodic mix
> 
> fsl_glm -i filtered_func_data.ica/melodic_mix -d Movement_Regressors_all.mat 
> --out_res=melodic_mix_mc --demean
> 
> # regress unique variance from bad components (taken from .fix file) out of 
> timeseries
> 
> fsl_regfilt -i REST1LR_bp_mc.nii.gz -d melodic_mix_mc -o 
> REST1LR_bp_mc_softICA -f "1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 
> 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 36, 
> 38, 39, 40, 41, 42, 43, 44, 45, 46, 49, 50, 51, 52, 53, 54, 55, 56, 57, 61, 
> 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84" 
> 
> # compare against HCP's released FIX-denoised file
> fslmaths rfMRI_REST1_LR_hp2000_clean -sub REST1LR_bp_mc_softICA 
> diff_REST1LR_bp_mc_softICA
> 
> Visual inspection and fslstats indicate reasonably good agreement between my 
> denoised file and the HCP's denoised file; the mean difference is about 0.84 
> units (compared to a mean signal intensity of around 10,000), and the 
> "robust" range of the difference is about +/- 72 units. More worryingly, 
> though, the maximum difference is around 2000 units, and around 6000 voxels 
> show differences greater than 500 units, so I'm not sure machine precision 
> can account for the differences.
> 
> Does the above denoising scheme seem consistent with what FIX is doing? I 
> plan to use FIX going forward, rather than trying to replicate it using the 
> FSL command-line, but I'd like to understand any discrepancies between the 
> two. 
> 
> Thanks again,
> -Ely
> 
>  
> 
> The materials in this message are private and may contain Protected 
> Healthcare Information or other information of a sensitive nature. If you are 
> not the intended recipient, be advised that any unauthorized use, disclosure, 
> copying or the taking of any action in reliance on the contents of this 
> information is strictly prohibited. If you have received this email in error, 
> please immediately notify the sender via telephone or return mail.
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 







Re: [HCP-Users] ICA-FIX HP200 training data

2016-11-07 Thread Stephen Smith
Hi - try the following:
www.fmrib.ox.ac.uk/~steve/ftp/HCP20_hp200.RData
Cheers


> On 7 Nov 2016, at 17:45, Ely, Benjamin  wrote:
> 
> Hi HCP team,
> 
> I'm interested in comparing the performance of ICA-FIX using a 200s vs. 2000s 
> highpass filter cutoff for some locally-acquired HCP-like resting-state data, 
> as suggested in a previous thread. The HCP wrapper for ICA-FIX includes a 
> reference to a 200s training file, HCP_hp200.RData, but this doesn't appear 
> to be included in the FIX release. Is this available somewhere else?
> 
> Many thanks,
> -Ely
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] global signal regression on HCP ICA FIX denoised BOLD?

2017-01-05 Thread Stephen Smith
Yes.
Cheers.


> On 5 Jan 2017, at 11:57, Joelle Zimmermann  
> wrote:
> 
> Hi HCPers,
> 
> Is it correct that global signal regression was not done on the ICA FIX 
> denoised fmri BOLD data?
> 
> Thanks,
> Joelle
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] ICA failure with volumetric ICA-FIX denoised files

2017-03-28 Thread Stephen Smith
Hi - I would strongly recommend doing the group-ICA and dualregression on the 
CIFTI versions of the data, not volumetric.
However, if you really want volumetric versions of 50-dimensional group-ICA 
then we could provide those that have been done via group-ICA on CIFTI data 
which has then been dualregged back into volumetric space and averaged across 
subjects.   Or you could ask your sysadmin to not kill your job after 5 days!

Cheers.



> On 28 Mar 2017, at 17:08, Jenny Gilbert  wrote:
> 
> Hi Matt, 
> 
> I appreciate your help with working through ICA using HCP data. Following 
> your recommendation, I converted CIFTI files to NIFTI and am trying to run 
> through ICA, but I am still experiencing problems. 
> 
> Running ICA for 62 participants (rest1_RL session) on a high power computing 
> server with 72 GB mem and a time limit of 5 days, the ICA times out, and 
> cannot pass the first stage of removing the mean image and normalizing by the 
> voxel-wise variance. This is the script I'm using: 
> melodic -i /projects/niblab/scripts/twin/input_ica.txt --migp --sep_vn -o 
> /projects/niblab/scripts/twin/ica_out -v --nobet --bgthreshold=10 --tr=1 -d 
> 50 --report --mmthresh=0.5 --Oall
> 
> From your experience, should we expect ICA to take 5+ days to run, or is 
> there a way to speed this up?
> 
> Thanks for your help! 
> Jenny
> 
> On Tue, Mar 7, 2017 at 3:48 PM, Glasser, Matthew  > wrote:
> Hi Jenny,
> 
> Indeed you can use wb_command -cifti-convert -to-nifti to convert the files 
> to NIFTI, run melodic, do dual regression, etc, and then when you have your 
> output maps, you convert them back to CIFTI with wb_command -cifti-convert 
> -from-nifti and view them in Workbench.  The only thing to keep in mind is 
> that spatial relationships will not be 3D volumetric in these converted CIFTI 
> files, so things like smoothing would not be appropriate to do on them.  ICA 
> or dual regression doesn’t use spatial neighborhood  information (though you 
> will need to select your own ICA dimensionality).
> 
> Peace,
> 
> Matt.
> 
> From: Jenny Gilbert >
> Date: Tuesday, March 7, 2017 at 2:08 PM
> To: Matt Glasser >
> 
> Subject: Re: [HCP-Users] ICA failure with volumetric ICA-FIX denoised files
> 
> Hi Matt, 
> 
> Thanks for your help! How do I complete ICA the CIFTI MSMAII files? Is the 
> first step to convert the files to NIFTI format in workbench? Also, I planned 
> to complete dual regression with the ICA generated components, so how does 
> dual regression work with CIFTI format data? Can all of this be done by 
> converting CITFI to NIFTI, and then running the usual commands? I'm new to 
> workbench, and would love all the help that you experts can offer! 
> 
> Thank you so much! 
> Jenny
> 
> On Tue, Mar 7, 2017 at 2:05 PM, Glasser, Matthew  > wrote:
> I think there is a bug in melodic, which I also encountered.  I would report 
> this to the FSL list, but it is likely the issue will be fixed in the next 
> FSL release.  
> 
> Also, if you are running melodic in the volume and combining across subjects, 
> this is not recommended because much of the brain is not well aligned in the 
> volume.  It is instead recommended that you use the CIFTI data that has been 
> aligned with MSMAll.  ICA gives very different results depending on whether 
> the data are well aligned or not.  
> 
> Peace,
> 
> Matt.
> 
> From:  > on behalf of Kajsa Igelström 
> >
> Date: Tuesday, March 7, 2017 at 12:40 PM
> To: "hcp-users@humanconnectome.org " 
> >
> Subject: Re: [HCP-Users] ICA failure with volumetric ICA-FIX denoised files
> 
> Hi Jenny and others!
> I'm also unable to run these files through Melodic. The most common problem 
> is that it doesn't converge, and under some circumstances it seems like 
> Melodic just stops in the middle of normalizing individual files, at points 
> that are not logical to me, or reproducible. 
> Best, 
> Kajsa 
> 
> 
> 
> On Tue, Mar 7, 2017 at 12:24 PM, Jenny Gilbert  > wrote:
> Hello HCP!
> 
> I am trying to complete ICA using the volumetric ICA-FIX de noised data 
> (rfMRI_REST1_RL_hp2000_clean.nii.gz files), and melodic keeps failing. It 
> looks like melodic gets hung up on some participants and cannot progress while 
> removing the mean image and normalizing by the voxel-wise variance.
> 
> Has anyone come across this same problem? Any advice on how to fix?
> 
> Thank you!
> Jenny
> ___
> HCP-Users mailing list
> 

Re: [HCP-Users] Question about the CCA calculation from "A positive-negative mode of population covariation links ..."

2017-04-27 Thread Stephen Smith
Hi - here's some additional code - I *think* this is the relevant code for what 
you're asking about.
Cheers.

% temporary version of vars, useful for various null tests below
grotvars=inormal(vars); grotvars(:,std(grotvars)<1e-10)=[]; grotvars(:,sum(isnan(grotvars)==0)<20)=[];

% permutation testing
grotRp=zeros(Nperm,Nkeep+1); clear grotRpval; nullNETr=[]; nullSMr=[]; nullNETv=[]; nullSMv=[];
for j=1:Nperm
  j
  [grotAr,grotBr,grotRp(j,1:end-1),grotUr,grotVr,grotstatsr]=canoncorr(uu1,uu2(PAPset(:,j),:)); grotRp(j,end)=mean(grotRp(j,1:end-1));
  nullNETr=[nullNETr corr(grotUr(:,1),NET)'];  nullSMr=[nullSMr corr(grotVr(:,1),grotvars(PAPset(:,j),:),'rows','pairwise')'];
  nullNETv=[nullNETv sum(corr(grotUr,NET).^2,2)]; nullSMv=[nullSMv sum(corr(grotVr,grotvars(PAPset(:,j),:),'rows','pairwise').^2,2)];
end
% (grotRpval - the permutation-based p-value for each CCA mode - is computed from grotRp and the
% unpermuted canonical correlations in a part of the full script not pasted here)
Ncca=sum(grotRpval<0.05)  % number of significant CCA components

% how does the % variance explained compare with the null scenario?
% columns of grot1/grot2: [CCA %variance explained, null 5th percentile, null mean, null 95th percentile, PCA-equivalent %variance]
grot1=[ sum(corr(grotU,NET).^2,2) prctile(nullNETv,5,2) mean(nullNETv,2) prctile(nullNETv,95,2) sum(corr(uu1,NET).^2,2) ] * 100 / size(NET,2);
grot2=[ sum(corr(grotV,grotvars,'rows','pairwise').^2,2) prctile(nullSMv,5,2) mean(nullSMv,2) prctile(nullSMv,95,2) ...
  sum(corr(uu2,grotvars,'rows','pairwise').^2,2)  ] * 100 / size(grotvars,2);
I=1:20; figure;
subplot(2,1,1); hold on;
for i=1:length(I)
  rectangle('Position',[i-0.5 grot1(i,2) 1 grot1(i,4)-grot1(i,2)],'FaceColor',[0.8 0.8 0.8],'EdgeColor',[0.8 0.8 0.8]);
end
plot(grot1(I,3),'k'); plot(grot1(I,1),'b'); plot(grot1(I,1),'b.'); % plot(grot1(I,5),'g');  % turned off showing the PCA equivalent plots
subplot(2,1,2); hold on;
for i=1:length(I)
  rectangle('Position',[i-0.5 grot2(i,2) 1 grot2(i,4)-grot2(i,2)],'FaceColor',[0.8 0.8 0.8],'EdgeColor',[0.8 0.8 0.8]);
end
plot(grot2(I,3),'k'); plot(grot2(I,1),'b'); plot(grot2(I,1),'b.'); % plot(grot2(I,5),'g');



> On 27 Apr 2017, at 09:00, Eleanor Wong  wrote:
> 
> Hi HCP group,
> 
> I have a question regarding how to compute the variance explained after 
> running CCA on the HCP data using the code from 
> http://www.fmrib.ox.ac.uk/datasets/HCP-CCA/ 
> . I ran it on the 461 subject 
> subset with dimensionality reduced to 100 dimensions. The result from the 
> code is very similar to the weights provided on the webpage for the HCP900 
> dataset, but I was unable to compute the % variance explained in the graphs 
> of figure 1c) of paper 
> http://www.nature.com/neuro/journal/v18/n11/full/nn.4125.html 
> . I tried 
> looking at the variance of the scores on the top CCA component divided by the 
> sum of the variance of the scores across all components for the subject 
> measures and the network data, with the Matlab command:
> var(grotU(:,1))/sum(var(grot(U)) but that did not give similar results.
> 
> Thank you for your help.
> 
> Best,
> Eleanor
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] tikhonov-regularized partial correlation using FSLnets

2017-08-17 Thread Stephen Smith
Hi

This normalisation isn't directly part of the actual L2 regularisation - it is 
just setting the overall scaling of the covariance matrix, so that the effect 
of choice of regularisation parameter is not dependent on the overall scaling 
of the original data.

The scaling factor is just the RMS of the diagonal elements of the covariance matrix - a 
somewhat arbitrary choice for how to set the overall scale of the matrix - 
other options, such as just taking the mean of the diagonal, would probably be fine too.

Cheers, Steve.




> On 17 Aug 2017, at 01:11, Mary Beth  wrote:
> 
> Hi pals,
> 
> I have a question about the way partial correlations were estimated for the 
> megatrawl using FSLnets. Going through the 'ridgep' section of 
> nets_netmats.m, it looks like the covariance matrix for each subject is 
> normalized by the square root of the mean of the variances squared - yes, no, 
> maybe so?
> 
> from line 88 of nets_netmats.m:
> 
> grot = cov();
> grot = grot/sqrt(mean(diag(cov1).^2));
> 
> I'm just trying to figure out why you square the variances before you take 
> the mean. Can someone give me a quick explanation or point me towards a good 
> reference?
> 
> Thanks in advance for your help!
> mb 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Matlab scripts for the HCP 820-subject PTN release

2017-06-03 Thread Stephen Smith
Hi - this is an implementation of the variance normalisation used by MELODIC - 
see Beckmann TMI 2004.

The idea is to remove the 30 strongest PCA components (temporarily) and use the 
remaining weakest components (ie residual "noise") to compute the voxelwise 
scalings for the variance normalisation, to be applied to the original data.

30 is indeed somewhat arbitrary, but for this step, increasing it further 
doesn't have a huge impact.

Cheers.




> On 3 Jun 2017, at 01:29, Chihuang Liu  wrote:
> 
> Hi HCP! 
> 
> As I was going through the scripts for generating HCP 820-subject PTN 
> release, there is a step in script "subproc_DO_2_groupPCA.m" that seems quite 
> confusing to me. The part of code is pasted as follows:
> BO=ciftiopen(f{r(1)},WBC); grot=demean(double(BO.cdata)'); clear BO.cdata;
> 
> [uu,ss,vv]=ss_svds(grot,30); vv(abs(vv)<2.3*std(vv(:)))=0; 
> stddevs=max(std(grot-uu*ss*vv'),0.001); 
> grot=grot./repmat(stddevs,size(grot,1),1);  % var-norm
> 
> W=demean(grot); clear grot;
> 
> This is done to every individual run and it's supposed to temporally demean 
> and variance normalize the data. Why is this way (marked in bold) used to 
> compute the stddevs? And how is this number 30 in the ss_svds chosen? It 
> seems to be fixed and not relative with the dimension of ICA. 
> 
> Thanks for any clarification!
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Where are per-subject netmats?

2017-06-01 Thread Stephen Smith
Hi - yes that's all correct.  IIRC for that PTN release (as opposed to the 
previous one) we thought it wasn't worth having subject-specific netmat pconn 
files, but just put all the information in the text files.
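
For example, row 7 of netmats1.txt is the 7th subject's netmat (subjects ordered as in subjectIDs.txt), and since these netmats are symmetric you can unwrap the 225 values into a 15x15 matrix without worrying about row- versus column-major order - a quick sketch:

awk 'NR==7 {for(i=1;i<=NF;i++) printf "%s%s", $i, (i%15 ? " " : "\n")}' netmats1.txt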

Cheers.


> On 1 Jun 2017, at 16:36, Habib Ganjgahi <h.ganjg...@gmail.com> wrote:
> 
> Hi Steve,
> 
> I've extracted the "netmats_3T_HCP820_MSMAll_ICAd15_ts2.tar.gz" file from the 
> HVP_PTN820 folder but it doesn't contain subject specific pconn files. This 
> is what i see inside it:
> 
> Mnet1.pconn.nii 
> 
> Mnet2.pconn.nii
> 
> hierarchy.png
> 
> netmats1.txt
> 
> netmats2.txt
> 
> each pconn file is 15x15 and each text file is an 820x225 matrix. I 
> think each row of the matrix corresponds to a different subject's netmat; 
> am I right? If so, is the order of rows the same as in the "subjectIDs.txt" file? 
> 
> 
> 
> Bests,
> 
> Habib
> 
> 
> On Thu, Jun 1, 2017 at 3:25 PM, Stephen Smith <st...@fmrib.ox.ac.uk 
> <mailto:st...@fmrib.ox.ac.uk>> wrote:
> Probably clearest to use both terms for clarity, e.g.   "ICA maps, i.e., 
> ICA-based parcellations"
> 
> 
> 
>> On 1 Jun 2017, at 15:21, Harms, Michael <mha...@wustl.edu 
>> <mailto:mha...@wustl.edu>> wrote:
>> 
>> 
>> Would it perhaps be less confusing if the links said “CIFTI Subject-specific 
>> ICA maps for XXX dimensionality”, rather than “Parcellations” ?
>> 
>> -- 
>> Michael Harms, Ph.D.
>> ---
>> Conte Center for the Neuroscience of Mental Disorders
>> Washington University School of Medicine
>> Department of Psychiatry, Box 8134
>> 660 South Euclid Ave.  Tel: 314-747-6173
>> St. Louis, MO  63110  Email: mha...@wustl.edu <mailto:mha...@wustl.edu>
>> 
>> From: <hcp-users-boun...@humanconnectome.org 
>> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of Stephen Smith 
>> <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>>
>> Date: Thursday, June 1, 2017 at 9:07 AM
>> To: Thomas Nichols <t.e.nich...@warwick.ac.uk 
>> <mailto:t.e.nich...@warwick.ac.uk>>
>> Cc: HCP Users <hcp-users@humanconnectome.org 
>> <mailto:hcp-users@humanconnectome.org>>, Habib Ganjgahi 
>> <h.ganjg...@gmail.com <mailto:h.ganjg...@gmail.com>>
>> Subject: Re: [HCP-Users] Where are per-subject netmats?
>> 
>> Hi - subject netmats are in the main PTN download.
>> Cheers.
>> 
>> 
>>> On 1 Jun 2017, at 15:05, Thomas Nichols <t.e.nich...@warwick.ac.uk 
>>> <mailto:t.e.nich...@warwick.ac.uk>> wrote:
>>> 
>>> Hi Jenn,
>>> 
>>> Sorry for the slow reply on this.  We tried downloading the links you 
>>> pointed us to <https://db.humanconnectome.org/data/projects/HCP_1200>:
>>>> 
>>>> The following links contain subject-specific CIFTI maps: subject-specific 
>>>> versions of the group-ICA parcellations, with various ICA dimensionalities 
>>>> applied. These are z-statistic maps generated using dual-regression.
>>>> 
>>>>  CIFTI Subject-specific Parcellations for 10-, 25-, 50-, 
>>>> 100-dimensionalities (57GB) 
>>>> <https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d15_25_50_100.zip>
>>>>  CIFTI Subject-specific Parcellations for 200-dimensionality (55GB) 
>>>> <https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d200.zip>
>>>>  CIFTI Subject-specific Parcellations for 300-dimensionality (85GB 
>>>> <https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources=GroupAvg=HCP_S900_PTNmaps_d300.zip>but
>>>>  they only contain the *dtseries.nii, while in previous releases there 
>>>> were *pconn.nii files that had the "NetMat" matrices.
>>> 
>>> To be clear, the previous *pconn.nii files had #dim rows, #dim cols, while 
>>> these *dtseries have #Elm rows and #dim cols.
>>> 
>>> Is there some place that NetMats are currently released?  I'm not finding 
>>> them.  I know they're not impossible to recreate but I'd like to minimize 
>>> effort and chance of error by using the standard released product.
>>> 
>>> -Tom
>>> 
>>> 
>>> PS: Note that the webpage incorrectly indicates that the parcellation 
>>> size is 10, 25, 50 and 100, when instead it is 15, 

Re: [HCP-Users] variation across connectomes

2017-05-01 Thread Stephen Smith
Hi Michael - I think Joelle is asking about tractography instead of func conn.  
I expect it will be some time before fuller sets of tractography analyses have 
been done.
Cheers.


> On 2 May 2017, at 01:17, Harms, Michael  wrote:
> 
> 
> Well, there’s the canonical correlation analysis of rfMRI network edges vs. 
> subject measures in:
> http://www.ncbi.nlm.nih.gov/pubmed/26414616 
> 
> 
> cheers,
> -MH
> 
> -- 
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.  Tel: 314-747-6173
> St. Louis, MO  63110  Email: mha...@wustl.edu
> 
> From: Joelle Zimmermann  >
> Date: Monday, May 1, 2017 at 11:09 AM
> To: Michael Harms >
> Cc: "Glasser, Matthew" >, 
> "hcp-users@humanconnectome.org " 
> >
> Subject: Re: [HCP-Users] variation across connectomes
> 
> cool thank you. The MegaTrawl was done on the FC - subject measures only 
> right? Is there any such analysis coming up for SC - subject measures?
> 
> On Sat, Apr 29, 2017 at 9:46 PM, Harms, Michael  > wrote:
>> 
>> Hi,
>> Have you read the documentation for the MegaTrawl?
>> 
>> cheers,
>> -MH
>> 
>> -- 
>> Michael Harms, Ph.D.
>> ---
>> Conte Center for the Neuroscience of Mental Disorders
>> Washington University School of Medicine
>> Department of Psychiatry, Box 8134
>> 660 South Euclid Ave.Tel: 314-747-6173 
>> St. Louis, MO  63110Email: mha...@wustl.edu 
>> 
>> From: > > on behalf of Joelle 
>> Zimmermann > >
>> Date: Saturday, April 29, 2017 at 10:30 AM
>> To: "Glasser, Matthew" >
>> Cc: "hcp-users@humanconnectome.org " 
>> >
>> 
>> Subject: Re: [HCP-Users] variation across connectomes
>> 
>> thanks Matt. Could you explain a bit the 'Correlation/prediction results for 
>> subject measures' ?
>> 
>> are those the measures that predict variation across subjects for the 
>> different components? which measures predict the variation most strongly?
>> 
>> apologies for the basic questions - im quite new to the technique.
>> 
>> On Sat, Apr 29, 2017 at 10:52 AM, Glasser, Matthew > > wrote:
>>> Undoubtably.  Perhaps the megatrawl would be of interest:
>>> 
>>> https://db.humanconnectome.org/megatrawl/ 
>>> 
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> From: Joelle Zimmermann >> >
>>> Sent: Saturday, April 29, 2017 9:15:31 AM
>>> To: Glasser, Matthew
>>> Cc: hcp-users@humanconnectome.org 
>>> Subject: Re: [HCP-Users] variation across connectomes
>>>  
>>> Not necessarily.. Just curious where the variation comes from whether it 
>>> can be attributed to particular variables. I'm doing a PCA for variation 
>>> across subject connectomes (for ex for SC), see a "common" component, but 
>>> there are additional components, some of which for example correlate very 
>>> strongly with age. And i want to check if there's other such variables that 
>>> may explain some of the additional components.
>>> 
>>> Thanks,
>>> Joelle
>>> 
>>> On Fri, Apr 28, 2017 at 6:12 PM, Glasser, Matthew >> > wrote:
 What is it that you are trying to do?  Control for uninteresting sources 
 of variance?
 
 Peace,
 
 Matt.
 
 From: > on behalf of Joelle 
 Zimmermann >
 Date: Friday, April 28, 2017 at 2:17 PM
 To: "hcp-users@humanconnectome.org " 
 >
 Subject: [HCP-Users] variation across connectomes
 
 Hi HCPers,
 
 I'm looking at variation across SC and FC connectomes of subjects. I was 
 wondering due to which variables we could potentially expect variability 
 across subjects to arise?
 
 I've looked into acquisition, fmri 

Re: [HCP-Users] Where are per-subject netmats?

2017-06-01 Thread Stephen Smith
Hi - subject netmats are in the main PTN download.
Cheers.


> On 1 Jun 2017, at 15:05, Thomas Nichols  wrote:
> 
> Hi Jenn,
> 
> Sorry for the slow reply on this.  We tried downloading the links you pointed 
> us to :
> 
> The following links contain subject-specific CIFTI maps: subject-specific 
> versions of the group-ICA parcellations, with various ICA dimensionalities 
> applied. These are z-statistic maps generated using dual-regression.
> 
>  CIFTI Subject-specific Parcellations for 10-, 25-, 50-, 100-dimensionalities 
> (57GB) 
> 
>  CIFTI Subject-specific Parcellations for 200-dimensionality (55GB) 
> 
>  CIFTI Subject-specific Parcellations for 300-dimensionality (85GB 
> 
> but they only contain the *dtseries.nii, while in previous releases there 
> were *pconn.nii files that had the "NetMat" matrices.
> 
> To be clear, the previous *pconn.nii files had #dim rows, #dim cols, while 
> these *dtseries have #Elm rows and #dim cols.
> 
> Is there some place that NetMats are currently released?  I'm not finding 
> them.  I know they're not impossible to recreate but I'd like to minimize 
> effort and chance of error by using the standard released product.
> 
> -Tom
> 
> 
> PS: Note that the webpage incorrectly indicates that the parcellation size 
> is 10, 25, 50 and 100, when instead it is 15, 25, 50 & 100.
> 
> 
> 
> On Tue, May 16, 2017 at 1:12 PM, Elam, Jennifer  > wrote:
> Hi Tom, 
> Due to size, we have the individual subject parcellations available in three 
> separate downloads for different dimensionalities under the main PTN download 
> on this page in the DB: https://db.humanconnectome.org/data/projects/HCP_1200 
> 
> 
> Best, 
> Jenn
> 
> Jennifer Elam, Ph.D.
> Scientific Outreach, Human Connectome Project
> Washington University School of Medicine
> Department of Neuroscience, Box 8108
> 660 South Euclid Avenue
> St. Louis, MO 63110
> 314-362-9387
> e...@wustl.edu 
> www.humanconnectome.org 
> 
> From: hcp-users-boun...@humanconnectome.org 
>  
>  > on behalf of Thomas Nichols 
> >
> Sent: Tuesday, May 16, 2017 5:16:15 AM
> To: HCP Users
> Subject: [HCP-Users] Where are per-subject netmats?
>  
> Hi folks,
> 
> When poking through the PTN download for the netmats, we're having trouble 
> finding the netmat for each subject.
> 
> As per the S1200 release manual 
> ,
>  pp 99-100, it says when we extract one of the flies like
>   netmats_3T_HCP820_MSMAll_ICAd*_ts*.tar.gz
> we should get a *_netmat1 directory filled with "One netmat file per subject, 
> computed using full correlation, Z-transformed", and another variant in 
> *_netmat2.  
> 
> Instead, we find that these tar.gz files only have 5 files, in a directory 
> netmats/3T_HCP820_MSMAll_ICAd*_ts*.  Two of these are netmat{1,2}.txt files, 
> but these are a single column and have a very strange number of rows (e.g. 
> for d=50 it has 820 rows).  (There are also Mnet?.pconn.nii files, but these 
> are tiny).
> 
> Where can we find the per-subject netmat files that were in previous releases?
> 
> -Tom
> 
> 
> -- 
> __
> Thomas Nichols, PhD
> Professor, Head of Neuroimaging Statistics
> Department of Statistics & Warwick Manufacturing Group
> University of Warwick, Coventry  CV4 7AL, United Kingdom
> 
> Web: http://warwick.ac.uk/tenichols 
> Email: t.e.nich...@warwick.ac.uk 
> Tel, Stats: +44 24761 51086, WMG: +44 24761 50752
> Fx,  +44 24 7652 4532 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 
> 
> 
> -- 
> __
> Thomas Nichols, PhD
> Professor of Neuroimaging Statistics
> Oxford Big Data Institute
> Li Ka Shing Centre for Health Information and Discovery
> University of Oxford
> 
> Interim Contact Info:
> Web: 

Re: [HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-05 Thread Stephen Smith
Hi - I think your main two choices are whether to run FIX on each 5min run 
separately, or to preprocess and concatenate each pair of scans from each 
session and run FIX for each of the 4 paired datasets.  You could try FIX both 
ways on a few subjects and decide which is working better.
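
If you go the concatenated route, the temporal concatenation itself is just, e.g. (filenames illustrative - a sketch rather than a full recipe, and you may want to remove each run's mean first so the join doesn't put a step into the timeseries):

fslmerge -t rfMRI_REST1_APPA_cat rfMRI_REST1_AP_preproc rfMRI_REST1_PA_preproc
fslhd rfMRI_REST1_APPA_cat | grep dim4   # sanity check: dim4 should equal the sum of the two runs' volumes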

Cheers.



> On 5 Oct 2017, at 22:53, Sang-Young Kim  wrote:
> 
> Dear Experts:
> 
> We have acquired rfMRI dataset with A-P and P-A phase encoding direction and 
> the data was acquired in eight 5-minute runs split across four imaging 
> sessions. We have processed the data using HCP pipelines (e.g., 
> PreFreeSurfer, FreeSurfer, PostFreeSurfer, fMRIVolume, fMRISurface and 
> ICA+FIX). So we have results for each run of rfMRI data. 
> 
> I’m just curious about what is recommended way to combine each run of data 
> (e.g., rfMRI_REST1_AP, rfMRI_REST1_PA, …, rfMRI_REST4_AP, rfMRI_REST4_PA). 
> 
> Can we just temporally concatenate each run of data before running ICA+FIX?
> Or can we do group ICA using each data processed with ICA+FIX?
> What is the optimal way to do combining analysis across each run? 
> 
> Any insights would be greatly appreciated. 
> 
> Thanks. 
> 
> Sang-Young Kim
> ***
> Postdoctoral Research Fellow
> Department of Radiology at University of Pittsburgh
> email: sykim...@gmail.com
> ***  
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] netmats prediction of fluid intelligence

2017-10-06 Thread Stephen Smith
Hi all

Yes - I've discussed this with Todd and it's not immediately clear whether the 
difference is due to:
 - they used full correlation not partial
 - they used fewer confound regressors (IIRC)
 - their prediction method is *very* different (pooling across all relevant 
features rather than keeping them separate in the multivariate elastic net 
regression prediction).

Or some combination of all of this.  I don't have a strong gut feeling which of 
these might be the biggest factor, but we should note that the Finn paper took 
a lot more care over many aspects of their analysis than many studies do, and 
in particular it was impressive how they got replication of the prediction 
between completely separate studies. But yes I would be interested to see this 
resolved more.

With respect to our CCA-based population mode, which covaried more highly with 
the intelligence measure as you mentioned - I think maybe this points at the 
main issue possibly being the noisiness of the individual features (netmat 
edges) and also of the intelligence feature (when all combined together within 
the elastic net prediction framework).

Cheers.




> On 6 Oct 2017, at 04:11, Harms, Michael  wrote:
> 
>  
> In the context of the long resting state runs that we have available, I would 
> argue that throwing in additional possible confounds is the appropriate thing 
> to do.  Are you suggesting that sex, age, age^2, sex*age, sex*age^2, brain 
> size, head size, and average motion shouldn’t all be included?
>  
> Regardless, r = 0.21 (without confounds in the MegaTrawl) is a long way from 
> the r = 0.5 prediction in Finn et al.
>  
> Cheers,
> -MH
>  
> -- 
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.Tel: 314-747-6173
> St. Louis, MO  63110  Email: 
> mha...@wustl.edu 
>  
> From:  > on behalf of Thomas Yeo 
> >
> Date: Thursday, October 5, 2017 at 10:01 PM
> To: "Glasser, Matthew" >
> Cc: "hcp-users@humanconnectome.org " 
> >
> Subject: Re: [HCP-Users] netmats prediction of fluid intelligence
>  
> Certainly one difference is that HCP (i.e., Steve) tends to take the more 
> conservative approach of regressing a *lot* of potential confounds, which 
> tends to result in a lower prediction values. You can see that without 
> confound regression, Steve's prediction is 0.21 versus 0.06. 
>  
> Regards,
> Thomas
>  
> On Fri, Oct 6, 2017 at 1:44 AM, Glasser, Matthew  > wrote:
>> Perhaps there is an issue related to data clean up or alignment of brain 
>> areas across subjects.  The Finn study does not appear to have followed the 
>> recommended approach to either.
>>  
>> Peace,
>>  
>> Matt.
>>  
>> From: > > on behalf of Benjamin Garzon 
>> >
>> Date: Thursday, October 5, 2017 at 1:39 PM
>> To: "hcp-users@humanconnectome.org " 
>> >
>> Subject: [HCP-Users] netmats prediction of fluid intelligence
>>  
>> Dear HCP experts, 
>>  
>> I'm trying to reconcile the MegaTrawl prediction of fluid intelligence 
>> (PMAT24_A_CR)
>>  
>> https://db.humanconnectome.org/megatrawl/3T_HCP820_MSMAll_d200_ts2/megatrawl_1/sm203/index.html
>>  
>> 
>>  
>> (which shows r = 0.06 between predicted and measured scores)
>>  
>> with the Finn 2015 study 
>>  
>> https://www.nature.com/neuro/journal/v18/n11/full/nn.4135.html 
>> 
>>  
>> claiming an r = 0.5 correlation between predicted and measured scores. In 
>> the article they used a subset of the HCP data (126 subjects), but the 
>> measure of fluid intelligence is the same one. What can explain the 
>> considerable difference? As far as I can see the article did not address 
>> confounding, but even in that case r = 0.21 for MegaTrawl, which is still 
>> far from 0.5. And this considering that the model used in the article is a 
>> much simpler one than the MegaTrawl elastic net regressor.   
>>  
>> I've been trying to predict fluid intelligence in an independent sample with 
>> 300 subjects and a netmats + confounds model does not perform better than a 

Re: [HCP-Users] RSN masks for 32K surfaces?

2017-10-16 Thread Stephen Smith
Hi - yes, see the HCP "PTN" data release.
Cheers.


> On 16 Oct 2017, at 16:10, Claude Bajada  wrote:
> 
> Hello all,
> 
> Are there any canonical resting state network masks available for the
> MSMAll 32k HCP surfaces as a gifti or cifti file? If so, where can I
> obtain them?
> 
> Claude
> 
> 
> 
> 
> 
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> 
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] parcellation papaya viewer

2017-10-01 Thread Stephen Smith
Hi

The actual group-ICA maps are not thresholded - but yes there is a preset 
colour overlay threshold applied by default in the papaya viewer.

For the latest (~1000 subjects) PTN release, the volumetric maps were created 
within-subject using dual-regression, and are z-stat maps from the second-stage 
of the dual regression.  Strictly, for each 15min run they are tstats, but the 
temporal DOF is so high that it's not much of a distinction between that and Z. 
They are then treated as Z when combining across the 4 runs, and again when 
averaging across all subjects.  So in theory the final Z maps should follow Z 
distribution (up to an overall N-dependent scaling factor), but there is so 
much "signal" around from this many subjects that it's a little hard to test 
the null for its exact width.

Cheers, Steve.



> On 29 Sep 2017, at 18:54, Jenny Gilbert  wrote:
> 
> Quick question about the papaya viewer interface associated with the 
> Group-ICA parcellation used in megatrawl: is the activation threshold 
> (default is set to 10 to 40) used t-value or z-value? 
> 
> Thanks, 
> Jenny Gilbert
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] question about noise components in PTN release

2018-05-16 Thread Stephen Smith
Hi - that’s right: in the most recent PTN release the number of “junk” components 
is much lower, and they are also less unambiguously junk - so at this point we have 
not tried to classify them.
Cheers 


Stephen M. Smith,  Professor of Biomedical Engineering
Head of Analysis,   Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington,
Oxford. OX3 9 DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve
--

> On 16 May 2018, at 23:03, Glasser, Matthew  wrote:
> 
> I don’t think these were the same components.  There are probably a couple of 
> noise components in the PTN release but I don’t know if anyone went through 
> and numbered them.  
> 
> Peace,
> 
> Matt.
> 
> From:  on behalf of "Shearrer, Grace" 
> 
> Date: Wednesday, May 16, 2018 at 1:58 PM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] question about noise components in PTN release
> 
> Hello all, 
> a graduate student and I are working on an analysis with the 100 
> dimensionality PTN release. We are a bit confused about noise components in 
> the 100 dim PTN release. In the Smith et al. 2013 neuroimage paper it 
> mentions  that 78 of the 100 components were signal, and 22 were noise. 
> However, I cannot figure out which are signal or noise from the paper as it 
> appears to re-number the components for the figures. 
> 
> Is there a list of the 22 noise components in the 100 dimensionality PTN 
> release as mentioned in Smith et al 2013? 
> 
> I would like to ignore those noise components as suggested here 
> https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/DualRegression/UserGuide.
> 
> Thank you,
> Grace
> 
> PS: I realize I could examine by hand which is noise or signal but that seems 
> redundant if someone already has
> 
> Grace Shearrer, PhD
> Post Doctoral Researcher
> Neuropsychology of Ingestive Behavior Lab 
> University of North Carolina Chapel Hill
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Smoothing, discarding volumes

2018-01-03 Thread Stephen Smith
Hi


> On 3 Jan 2018, at 14:44, Tobias Bachmann wrote:
> 
> Dear all,
> 
> when using the HCP's ICA+FIX data (NIfTI volume files) for a rather simple 
> group ICA, would you recommend further conventional preprocessing, i.e.
> 
> 1. smoothing (about which conflicting info is to be found)

spatial or temporal?

The simple answer is in general no, although for the purposes of group-ICA only 
(and not any later subject-specific analyses like dual regression), it might be 
the case that some lowpass temporal filtering could boost effective CNR.
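
(A sketch of what that would look like if you wanted to try it: fslmaths <FIX-cleaned input> -bptf -1 <lowpass sigma in volumes> <output> - the -1 leaves highpass filtering alone, and the choice of lowpass sigma is up to you.)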

> 2. discarding the first few volumes (I found hardly anything regarding this 
> issue)?

For most purposes it's not necessary, though there are some slight residual 
starting effects in the data, so you could delete a few.
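
For example (a sketch - the "5" is arbitrary rather than an HCP recommendation, and assumes a 1200-volume run):

fslroi rfMRI_REST1_LR_hp2000_clean rfMRI_REST1_LR_hp2000_clean_trim 5 1195   # drop the first 5 volumes, keep the remaining 1195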

Cheers.



> 
> Kind regards,
> Tobias Bachmann
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] ICA-FIX

2018-04-06 Thread Stephen Smith
Hi - I would think the easiest and best thing to do would be just to apply more 
aggressive high-pass filtering to the already FIX-cleaned data.
Was there a reason you didn't want to do that?   It's not at all a problem that 
the less aggressive HP filter had already been applied before original FIX 
cleaning.
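
A minimal sketch of what that could look like, reusing the sigma formula from the script you quoted (sigma in volumes = cutoff in seconds / (2*TR); the TR of 0.72s below is the HCP rfMRI TR - substitute your own):

hptr=`echo "10 k 69.444 2 / 0.72 / p" | dc -`   # ~48.2 volumes for a 69.444s highpass cutoff
fslmaths rfMRI_REST1_LR_hp2000_clean -bptf $hptr -1 rfMRI_REST1_LR_hp2000_clean_hp69

The second argument to -bptf is the lowpass sigma in volumes (the -1 disables it), so if you also want lowpass filtering that is where it would go.
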
Cheers.


> On 6 Apr 2018, at 10:41, Megan Ní Bhroin  wrote:
> 
> Hi there, 
> 
> I'm very new to Neuroimaging but wondering if you could help me with some 
> issues I am having. 
> 
> I want to run the ICA FIX pipeline on HCP resting state data. I wish to do 
> this as I would like to apply a more aggressive temporal filtering than is 
> already in place. 
> 
> Do you have any suggestions on how to alter the script, the code already in 
> place has a cut off of 2000s but I wish to apply a high pass temporal filter 
> of 69.444 and low pass temporal filter of 0.7716.
> 
> echo "processing FMRI file $fmri with highpass $hp"
> 
> if [ $hp -gt 0 ] ; then
>   echo "running highpass"
>   hptr=`echo "10 k $hp 2 / $tr / p" | dc -`
>   ${FSLDIR}/bin/fslmaths $fmri -bptf $hptr -1 ${fmri}_hp$hp
>   fmri=${fmri}_hp$hp
> fi
> 
> 
> I believe the correct data to run this script on is Resting State fMRI 1 & 2 
> Preprocessed, can you point me towards the exact files to run this on?
> 
> For this script to run successfully I believe that I first have to modify the 
> settings.sh file to my own environment for the ICA FIX bash script to run, is 
> this correct?
> 
> Thank you for your time in advance.
> Best, 
> Megan
> 
> 
> 
> 
> 
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] gradient nonlinearity correction question

2018-04-11 Thread Stephen Smith
Hi - no, in my experience running the post-hoc correction should look virtually 
identical to the on-scanner correction.

Note that this will not be the case for 2D (eg EPI) data because on-scanner can 
only be done 2D - so that won't match post-hoc 3D correction.

Cheers.




> On 11 Apr 2018, at 17:51, Kristian Loewe  wrote:
> 
> Hi Michael,
> 
> Thanks for the suggestion! By default, we are using the 3D correction on 
> the scanner. But I applied the 2D correction to the ND data set 
> separately to see what it looks like. The result is different from both 
> the 3D-scanner-correction and the offline-correction but looks 
> reasonable too.
> 
> Is it the case that for HCP data the scanner-corrected images look 
> almost exactly the same as the offline-corrected images or is it 
> ok/normal for them to differ a bit, especially further away from the 
> isocenter?
> 
> Cheers,
> 
> Kristian
> 
> 
> On 11.04.2018 15:42, Harms, Michael wrote:
>> 
>> One thought: Are you sure you are using the “3D” (and not the “2D”) 
>> correction on the scanner?
>> 
>> --
>> Michael Harms, Ph.D.
>> 
>> ---
>> 
>> Associate Professor of Psychiatry
>> 
>> Washington University School of Medicine
>> 
>> Department of Psychiatry, Box 8134
>> 
>> 660 South Euclid Ave.Tel: 314-747-6173
>> 
>> St. Louis, MO  63110  Email: mha...@wustl.edu
>> 
>> On 4/11/18, 8:31 AM, "hcp-users-boun...@humanconnectome.org on behalf of 
>> Kristian Loewe" <k...@kristianloewe.com> wrote:
>> 
>> Hi Joo-won and Keith,
>> 
>> I don't think that the table has been moved. Is there any information
>> somewhere in the dicom header to double-check this?
>> 
>> I am using the coeff_AS82.grad file for the Prisma data (the second
>> command). The first command was what I used for the Verio data. Also,
>> I double-checked that I am using the uncorrected volume as input.
>> 
>> I am not sure if I am supposed/allowed to send screenshots of the
>> actual data in the subject's native space to the list. I'm going to
>> check that. Meanwhile, I asked our local MR team to acquire an
>> additional data set using a phantom. The difference between the
>> scanner-corrected image and the offline-corrected image is not as
>> striking in this case but it's visible. Based on the phantom data, I
>> am inclined to say that both corrections work reasonably well, even
>> though they are not exactly the same. I also tried the
>> offline-correction on some phantom EPI data and it seems to work well
>> as it (together with EPI distortion correction) restores the original
>> shape of the phantom pretty nicely.
>> 
>> The differences between the correction variants applied to the
>> subject's data are actually rather small inside the brain but become
>> larger towards the neck, which is to be expected as the distance to
>> the magnetic isocenter becomes greater in that direction.
>> Nevertheless, I orginally thought that the differences between
>> scanner- and offline-corrected images would be smaller than that.
>> 
>> Find attached some plots of the T1 phantom data:
>> 
>> T1_ND*.png - uncorrected images
>> T1*.png- scanner-corrected images
>> T1_ND_gdc*.png - offline-corrected images
>> 
>> Cheers,
>> 
>> Kristian
>> 
>> 
>> PS:
>> I am sending this email for the second time (apparently the attachment
>> was too large). I am not sure if the first email was successfully
>> cancelled. I apologize if you are receiving this twice now.
>> 
>> 
>> Quoting "Kim, Joo-won" :
>> 
>>> Hi Kristian,
>>> 
>>> Have you moved table? If you moved the table, you should manually
>>> subtract it from qform, sform, and/or affine matrix in the nifty
>>> header.
>>> 
>>> Best,
>>> Joo-won
>>> 
>>> ---
>>> Joo-won Kim, Ph.D.
>>> Postdoctoral Fellow
>>> Translational and Molecular Imaging Institute
>>> Icahn School of Medicine at Mount Sinai
>>> 
>>> 
>>> From:  on behalf of Keith
>>> Jamison 
>>> Date: Monday, April 9, 2018 at 10:36 AM
>>> To: Kristian Loewe 
>>> Cc: HCP Users 
>>> Subject: Re: [HCP-Users] gradient nonlinearity correction question
>>> 
>>> First make sure you're using the right coefficient file, copied
>>> directly from the scanner.  The Prisma should have a file called
>>> coeff_AS82.grad, so the one you used in your *second* command above
>>> should be correct.
>>> Second is to be absolutely sure your input is the uncorrected volume (*_ND).
>>> If you include some matched screenshots of the uncorrected,
>>> offline-corrected, and scanner-corrected volumes, we can maybe help
>>> evaluate the difference.
>>> 
>>> 
>>> On Mon, Apr 9, 2018 at 4:09 AM, Kristian Loewe
>>> > wrote:
>>> Thanks Keith,
>>> 
>>> Cropping 

Re: [HCP-Users] Melodic ICA idle

2018-03-20 Thread Stephen Smith
Hi - if you don't have your data in CIFTI then you would have to dualregress 
NIFTI/volumetric existing RSN maps into your data, as I think Michael said - 
which isn't as optimal.
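
A minimal sketch with FSL's dual_regression script (all filenames illustrative: group_RSN_maps.nii.gz would be an existing 4D set of volumetric RSN maps in the same standard space as your patients' preprocessed 4D data, and design.mat/design.con/500 drive the optional randomise group-stats stage):

dual_regression group_RSN_maps.nii.gz 1 design.mat design.con 500 dr_output patient01_func.nii.gz patient02_func.nii.gz
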
Cheers.


> On 20 Mar 2018, at 01:53, Will Khan <khan.wa...@florey.edu.au> wrote:
> 
> Dear HCP members/Users, 
> 
> Thanks so much for your prompt advice and suggestions! 
> 
> Steve, if I were to perform ICA on the grayordinates would I still be able to 
> dualregress this onto a patient nifti dataset? Like others have suggested, 
> wouldn't my patient data need to be in cifti?
> 
> Cheers :)
> 
> Will
> 
> 
> From: Glasser, Matthew <glass...@wustl.edu <mailto:glass...@wustl.edu>>
> Sent: 14 March 2018 01:14:51
> To: Harms, Michael; Stephen Smith; Will Khan
> Cc: hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>; 
> Erin W. E. Dickie
> Subject: Re: [HCP-Users] Melodic ICA idle
>  
> I would recommend the ciftify toolbox if your patient data do not meet HCP 
> Pipelines acquisition requirements (lacking highres T2w scan and or 
> Fieldmap).  The ciftify toolbox is currently beta, but should be fully 
> available in the very near future (Erin Dickie the author is CCed and can 
> give more details).  It is intended to assist users with legacy data in 
> processing using HCP-Style analyses including dual regression in CIFTI.  If 
> you need volume maps for your patients too, as Steve says you can do the 
> first stage of dual regression in CIFTI to get the timecourses and then 
> regress these into both the CIFTI and volume data.  We do this all the time 
> if we need to see what is going on outside the greymatter (e.g. we are 
> denoising data).  
> 
> I will note that even fMRI data with coarse voxel resolution (e.g. 4mm 
> isotropic) will still benefit from HCP-Style analyses over traditional 
> analyses (see in the attached figure that the 4mm dots are essentially all 
> above the line of equality between volume and surface).  The attached figure 
> does not consider the effects of volume-based smoothing which are even more 
> serious than volume-based alignment 
> (https://www.biorxiv.org/content/early/2018/01/29/255620 
> <https://www.biorxiv.org/content/early/2018/01/29/255620>).  
> 
> Of course in the future we recommend acquiring HCP Style data (fMRI 2.4mm 
> isotropic or better, 1s TR or better; T1w and T2w 0.8mm isotropic or better, 
> and phase reversed field maps) and processing with the HCP Pipelines or an 
> equivalent or better approach.  
> 
> Peace,
> 
> Matt.
> 
> From: <hcp-users-boun...@humanconnectome.org 
> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of "Harms, Michael" 
> <mha...@wustl.edu <mailto:mha...@wustl.edu>>
> Date: Tuesday, March 13, 2018 at 7:48 AM
> To: Stephen Smith <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>>, Will 
> Khan <khan.wa...@florey.edu.au <mailto:khan.wa...@florey.edu.au>>
> Cc: "hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>" 
> <hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>>
> Subject: Re: [HCP-Users] Melodic ICA idle
> 
>  
> I think the issue, as I read it, is that Will’s data is only NIFTI currently, 
> so he doesn’t have any subject CIFTI that he could use for the stage 1 of 
> dual reg.
>  
> Our suggestion of course to remedy that would be that you process your data 
> into CIFTI, using the HCP Pipelines. ☺
>  
> Cheers,
> -MH
>  
> -- 
> Michael Harms, Ph.D.
> ---
> Associate Professor of Psychiatry
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.Tel: 314-747-6173
> St. Louis, MO  63110  Email: mha...@wustl.edu 
> <mailto:mha...@wustl.edu>
>  
> From: <hcp-users-boun...@humanconnectome.org 
> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of Stephen Smith 
> <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>>
> Date: Tuesday, March 13, 2018 at 2:31 AM
> To: Will Khan <khan.wa...@florey.edu.au <mailto:khan.wa...@florey.edu.au>>
> Cc: "hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>" 
> <hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>>
> Subject: Re: [HCP-Users] Melodic ICA idle
>  
> Hi Will
>  
> I'm not aware of any outstanding bugs in melodic that would cause it to 
> silently hang.  Are you sure it's not just that you've run out of RAM and are 
> swapping?
>  
> Yes it's better to do group-ICA on gr

Re: [HCP-Users] A question about the data in native space

2018-03-20 Thread Stephen Smith
Hi - you could fairly easily use the FIX bad-component timeseries to regress 
out noise from the native space data, yes.
I don't think that data is already precomputed for you.
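
A rough sketch of that (not a validated pipeline - filenames are illustrative, and you would first want to give the native-space run the same highpass filtering its FIX run saw, so the melodic_mix timecourses match the data):

fsl_regfilt -i rfMRI_REST1_LR_native_hp2000 -d filtered_func_data.ica/melodic_mix \
            -o rfMRI_REST1_LR_native_clean -f "1,2,7,13"
# -f takes this run's FIX-labelled noise components (e.g. from its .fix file); the list above is a placeholder
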
Cheers.



> On 20 Mar 2018, at 10:44, Aaron C  wrote:
> 
> Dear HCP experts,
> 
> I have a question about obtaining the data in native space. Could I get the 
> rfMRI ICA-FIX cleaned data in each subject's native space (without 
> cross-subject normalization)? Thank you.
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> 

---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.ukhttp://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] can't find age, sex, etc in megatrawl?

2018-10-18 Thread Stephen Smith
Hi
No I don’t think we explicitly calculated those things. 
Cheers. 


Stephen M. Smith,  Professor of Biomedical Engineering
Head of Analysis,   Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington,
Oxford. OX3 9 DU, UK
+44 (0) 1865 610470
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve
--

> On 18 Oct 2018, at 15:04, Keith Jamison  wrote:
> 
> We just noticed that certain subject measures such as age and sex are not 
> included in the megatrawl results. I see in the release documentation that 
> these are among the covariates removed before regressing the other 
> quantities. Are there also results/stats somewhere from regressing netmats 
> against these quantities themselves?
> 
> Thanks!
> -Keith
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users