Re: [HCP-Users] Troubleshooting error with gradunwarp

2019-06-27 Thread Glasser, Matthew
Thanks Mike.

Matt.

From: "Harms, Michael" 
Date: Thursday, June 27, 2019 at 3:02 PM
To: "Jayasekera, Dinal" 
Cc: "Glasser, Matthew" 
Subject: Re: [HCP-Users] Troubleshooting error with gradunwarp


Hi Dinal,
This issue should be resolved with the commit that I just pushed to the master branch.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110  Email: mha...@wustl.edu

From: "Harms, Michael" 
Date: Thursday, June 27, 2019 at 12:35 PM
To: "Jayasekera, Dinal" , 
"hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Troubleshooting error with gradunwarp


Hi,
That sounds like this issue, which we haven’t patched quite yet:
https://github.com/Washington-University/HCPpipelines/issues/119

Incidentally, you must be running off code in the current master branch, rather 
than the formally tagged “v4.0.0” release 
(https://github.com/Washington-University/HCPpipelines/releases), since this 
isn’t an issue in the tagged release.

Cheers,
-MH


From:  on behalf of "Jayasekera, Dinal" 

Date: Thursday, June 27, 2019 at 12:05 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Troubleshooting error with gradunwarp

Dear all,

I'm trying to troubleshoot an error that I'm getting when running the v4.0 
PreFreeSurferPipelineBatch. I initially thought the error arose as a result of 
gradunwarp, but I don't think that is the case anymore. This is the error I am 
receiving:

'/media/functionalspinelab/RAID/Data/Dinal/mystudy/NSI_21/T2w/T2wToT1wDistortionCorrectAndReg/FieldMap/TopupField.nii.gz'
 and 
'/media/functionalspinelab/RAID/Data/Dinal/mystudy/NSI_21/T2w/T2wToT1wDistortionCorrectAndReg/FieldMap/TopupField.nii.gz'
 are the same file

I have attached the full output to stdout. Any insights?

Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.



Re: [HCP-Users] MSMAll Failure Following hcp-fix_multi-run

2019-06-23 Thread Glasser, Matthew
Please upgrade to the latest version of the HCP Pipelines (the latest master 
has a few fixes relative to 4.0.0), the latest FSL 6.0.1, and the latest FIX.

Matt.

From:  on behalf of Timothy Hendrickson 

Date: Sunday, June 23, 2019 at 4:32 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] MSMAll Failure Following hcp-fix_multi-run

Hello,

I am having issues running MSMAll following hcp-fix_multi-run for a study that 
I am processing. The study has 6 fMRI scans totalling 2950 timepoints per 
scanning session. I am certainly aware of the large amount of memory that 
hcp-fix_multi-run requires as it is fed more timepoints; however, that step 
completes without issue. I am running HCP Pipelines version 3.27.0 with MSMAll 
v2 and FSL 5.0.11.

Is there a way to get around this error?
If I end up having to break up hcp-fix_multi-run, do you have any best practices 
on how to do this (i.e., should each group be a mix of tfMRI and rsfMRI, etc.)?

Best,

-Tim
Timothy Hendrickson
Neuroimaging Analyst/Staff Scientist
University of Minnesota Informatics Institute
University of Minnesota
Bioinformatics M.S. Candidate
Office: 612-624-0783
Mobile: 507-259-3434 (texts okay)



Re: [HCP-Users] matlab cifti io

2019-06-23 Thread Glasser, Matthew
Look at option B here: 
https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ

Matt.

From:  on behalf of Aaron R 

Date: Sunday, June 23, 2019 at 9:04 AM
To: hcp-users 
Subject: [HCP-Users] matlab cifti io

Dear HCP users,

What's the current link for the matlab cifti i/o tools? The link in the readme 
here (and everywhere else) is dead: 
https://github.com/Washington-University/cifti-matlab

Thanks,
Aaron



Re: [HCP-Users] table for parcellation

2019-06-21 Thread Glasser, Matthew
(ii) Yes, that is the right file.  I would use the already-concatenated one from 
MR+FIX; otherwise you need to use the _vn files to concatenate yourself (demean, 
divide by the _vn files, and merge).
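For reference, that manual concatenation might be sketched roughly as below. All file names are hypothetical placeholders, and the exact `-cifti-reduce`/`-cifti-math` arguments should be checked against your Workbench version; the calls are guarded so the sketch is safe to run without Workbench installed.

```shell
# Sketch: concatenate two cleaned runs the way MR+FIX would
# (demean, divide by the variance-normalization map, merge).
run1=rfMRI_REST1_Atlas_MSMAll_hp2000_clean.dtseries.nii
run2=rfMRI_REST2_Atlas_MSMAll_hp2000_clean.dtseries.nii
vn1=rfMRI_REST1_Atlas_MSMAll_hp2000_vn.dscalar.nii
vn2=rfMRI_REST2_Atlas_MSMAll_hp2000_vn.dscalar.nii

if command -v wb_command >/dev/null 2>&1; then
  # demean run 1, then divide by its variance-normalization map
  wb_command -cifti-reduce "$run1" MEAN mean1.dscalar.nii
  wb_command -cifti-math '(x - mean) / vn' norm1.dtseries.nii \
    -var x "$run1" \
    -var mean mean1.dscalar.nii -select 1 1 -repeat \
    -var vn "$vn1" -select 1 1 -repeat
  # same for run 2
  wb_command -cifti-reduce "$run2" MEAN mean2.dscalar.nii
  wb_command -cifti-math '(x - mean) / vn' norm2.dtseries.nii \
    -var x "$run2" \
    -var mean mean2.dscalar.nii -select 1 1 -repeat \
    -var vn "$vn2" -select 1 1 -repeat
  # join the normalized runs along the time axis
  wb_command -cifti-merge concat.dtseries.nii \
    -cifti norm1.dtseries.nii -cifti norm2.dtseries.nii
fi
```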

Matt.

From:  on behalf of Timothy Coalson 

Date: Friday, June 21, 2019 at 2:20 PM
To: Marta Moreno 
Cc: HCP Users 
Subject: Re: [HCP-Users] table for parcellation

(i) You can use wb_command -cifti-label-export-table on the dlabel file to get 
the order of the parcels in a fixed format, though there are extra lines and 
numbers in the output text file.  360 parcels is a rather long table, so you 
might consider a matrix figure instead and only mention the highlights.  If you 
make the figure in wb_view, then you can upload the scene to BALSA, which will 
include the data files used to make it, allowing others to use your data 
directly instead of looking through a table.
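As a sketch (the parcellation file name here is a hypothetical placeholder), the export plus a quick cleanup of those extra lines might look like:

```shell
# Dump the label table from map 1 of a dlabel file; the output
# alternates parcel-name lines with "key R G B A" color lines.
dlabel=my_parcellation.dlabel.nii   # hypothetical input name
if command -v wb_command >/dev/null 2>&1; then
  wb_command -cifti-label-export-table "$dlabel" 1 parcel_table.txt
  # keep just the parcel names (the odd-numbered lines)
  awk 'NR % 2 == 1' parcel_table.txt > parcel_names.txt
fi
```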

Someone else will need to answer (ii).

Tim


On Thu, Jun 20, 2019 at 10:32 PM Marta Moreno wrote:
Dear experts,

After running:

1.  PreFreeSurfer
2.  FreeSurfer
3.  PostFreeSurfer
4.  fMRIVolume
5.  fMRISurface
6.  ICA+FIX (MR+FIX)
7.  MSMAll (Do MSM Surface Registration)
8.  dedrift and resample pipeline

2 questions:

(i) I used wb_command -cifti-parcellate and wb_command -cifti-correlation to 
create a parcellation.pconn.nii file per each subject. I have uploaded all 
files into matlab and now would like to create a table that includes all 
subjects’ correlation value from parcel 264 to all 360 parcels, and include the 
variable names for all 360 parcels. How can I do this?

(ii) Is the file “RS_fMRI_MR_Atlas_MSMAll_Test_hp0_clean.dtseries.nii” the 
final step after (8) to use for wb_command -cifti-parcellate and wb_command 
-cifti-correlation

Thanks,

L.





Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

2019-06-20 Thread Glasser, Matthew
The more timepoints, the more components, so that aspect seems okay.  As for 
the new beta MR+FIX training file, I’ll send you a link off-list.  I would use 
that.
Matt.

From: Yizhou Ma 
Date: Thursday, June 20, 2019 at 4:36 PM
To: "Glasser, Matthew" 
Cc: "hcp-users@humanconnectome.org" , Timothy 
Hendrickson 
Subject: Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

Dear Matt,

I am writing to follow up on this issue. Per your suggestion (2), I looked at 
the number of signal components in my multi-run ICA-FIX of 20 healthy controls. 
Again, my data are HCP-style. Each subject has 4 resting-state scans (TR=0.8s, 
length=6.5min each) and 3 task scans (TR=0.8s, length=6min each). We used the 
HCP preprocessing pipeline. T1 and T2 images were of high quality. Movement was 
minimal (typically <1% censored volumes by FD_Power < 0.5mm; DVARS dips looked 
fine too). We used the standard classifier in FIX.

In a multi-run ICA-FIX that concatenated two rfMRI scans (Analysis A):
average number of total components: 135.4 ± 34.3;
average number of signal components: 11.9 ± 4.0;
average percent of noise components: 90.8 ± 3.6%.

In a multi-run ICA-FIX that concatenated the two rfMRI scans and three task 
scans (Analysis B):
average number of total components: 275.2 ± 56.3;
average number of signal components: 16.5 ± 4.9;
average percent of noise components: 93.8 ± 2.1%.

I wasn't able to find any references for previous multi-run ICA-FIX results, so 
I looked at 20 subjects in the original HCP, with single-session ICA-FIX for one 
15-min rfMRI run:
average number of total components: 102 ± 29;
average number of signal components: 17.7 ± 4.2;
average percent of noise components: 81 ± 6%.

Comparing my Analysis A and the original HCP results, it seemed that I had 
significantly more total components, significantly fewer signal components, and 
a significantly higher percentage of noise components.

I have not gone through my components to determine the accuracy of the 
classification - that is beyond my expertise at this moment.

My question is: do these numbers match your experience with multi-run 
ICA-FIX, or do they raise concern about the classification process?

Thank you very much,
Cherry

On Mon, Apr 22, 2019 at 9:14 PM Yizhou Ma wrote:
Great that's quick!

On Mon, Apr 22, 2019 at 9:09 PM Glasser, Matthew wrote:
A few weeks maybe if you want a pre-release version.

Matt.

From: Yizhou Ma
Date: Monday, April 22, 2019 at 9:06 PM
To: Matt Glasser
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

Thank you Matt. This is really helpful. Any idea when the new classifier you 
mentioned in 1. will be available?

On Mon, Apr 22, 2019 at 8:50 PM Glasser, Matthew wrote:
I guess I haven’t been in the habit of throwing out data like this.  Things I 
would consider include:

  1.  MR+FIX classification accuracy (if runs were poorly classified, they 
won’t be denoised well).  I’ll note that we are training an improved MR+FIX 
classifier using a combination of HCP-YA resting state (single run FIX), HCP-YA 
task (MR+FIX), and HCP Lifespan (MR+FIX) to address classification issues we 
have observed with very large numbers of components, subjects with very large 
amounts of motion, and other artifacts that were not a part of the HCP-YA 
original training data.
  2.  Unusually small numbers of signal components (though note we found a 
recent subtle bug whereby if melodic does not finish mixture modeling 
components, FIX will fail to classify signal components correctly).  If there 
are few signal components this means that either the SNR is very bad or the 
structured noise has overwhelmed the signal and mixed in too much with the 
signal, making it hard to separate.
  3.  DVARS spikes above baseline (not dips below baseline) in the cleaned 
timeseries suggest residual noise.  I prefer DVARS-derived measures to 
movement-trace-derived measures because they tell you something about what is 
actually happening to the intensities inside the data, whereas movement traces 
may be inaccurate reflections of signal intensity fluctuations for a variety of 
reasons (see Glasser et al. 2018, NeuroImage: 
https://www.sciencedirect.com/science/article/pii/S1053811918303963 for 
examples).
Others in the HCP used different means to identify some of the noise components 
I mentioned above that weren’t being classified correctly by regular FIX, and 
might be able to share their suggestions.

Matt.

From: Yizhou Ma
Date: Monday, April 22, 2019 at 8:25 PM
To: Matt Glasser
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanc

Re: [HCP-Users] film_gls error pinv() svd()

2019-06-17 Thread Glasser, Matthew
Is there anything weird about your design like empty EVs?

Matt.

From:  on behalf of "Harms, Michael" 

Date: Monday, June 17, 2019 at 12:55 PM
To: Moataz Assem , 
"hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] film_gls error pinv() svd()


Hmmm.  Assuming the problem is reproducible, I think you’ll have to report 
this to the FSL list.  To provide additional information, it might be helpful 
to hack the TaskfMRILevel1.sh script to not use the ‘--sa --ms=15 --epith=5’ 
flags in the call to ‘film_gls’ (or, perhaps easier, just try running the 
modified film_gls call directly from the command line).  And if that still 
fails, try turning off the autocorrelation estimation entirely with the 
‘--noest' flag.
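As a hypothetical sketch of that debugging sequence (the real arguments should be copied from the film_gls line that TaskfMRILevel1.sh logs; OutputDir, lh_timeseries, and design.mat below are placeholders, not the actual names):

```shell
# Placeholder film_gls calls for debugging; substitute the real
# arguments from the Level-1 log before running.
base="film_gls --rn=OutputDir --in=lh_timeseries --pd=design.mat --thr=1"
if command -v film_gls >/dev/null 2>&1; then
  # 1) original call minus the '--sa --ms=15 --epith=5' flags
  $base
  # 2) if that still fails, disable autocorrelation estimation entirely
  $base --noest
fi
```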

Cheers,
-MH


From:  on behalf of Moataz Assem 

Date: Monday, June 17, 2019 at 9:32 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] film_gls error pinv() svd()

Hi,

I get the following FSL error related to the beta estimation (while running 
film_gls):

Prewhitening and Computing PEs...
Percentage done:
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,
error: pinv(): svd failed

pinv(): svd failed

This only happens for one run (in one subject) and only while estimating the 
left surface (the subcortical estimation ran fine). The design matrix doesn’t 
contain any NaNs, nor does the timeseries for that hemisphere (std(timeseries) 
also did not give any zeros).

Can you suggest other problems to check for?

Thanks

Moataz





Re: [HCP-Users] Training data set for multi-run fix

2019-06-14 Thread Glasser, Matthew
I am happy to share the new training file on request as a beta.  Once we’ve had 
a good look at the recently cleaned HCP-YA task fMRI data and haven’t found any 
issues, I think it will be okay to release it generally.

Matt.

From:  on behalf of "Harms, Michael" 

Date: Friday, June 14, 2019 at 1:25 PM
To: ASHISH SAHIB , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] Training data set for multi-run fix


Hi,
I’m not sure if the new training file is ready for public release yet, but just 
to clarify, to clean short fMRI runs, you really need to use multi-run FIX to 
get better separation of the signal/noise components.  The new training file 
will help around the margins, but using MR-FIX is critical.

Cheers,
-MH


From:  on behalf of ASHISH SAHIB 

Date: Friday, June 14, 2019 at 12:04 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Training data set for multi-run fix

Hello,

We are an HCP Disease Connectome site. At the HCP Investigators meeting it was 
mentioned that a new training data set for multi-run FIX would be available 
that could be used to clean short (~6 min) fMRI runs. If the training data for 
FIX is already available, could anyone provide the necessary link to download 
it? We are in the process of running the preprocessing for our fMRI data, and 
it would be of great help if we could have the new training data, or know 
whether the previous HCP_hp2000.RData is equally good for performing the 
cleaning.


Thanks
Ashish Sahib



Re: [HCP-Users] -volume-label-import

2019-06-14 Thread Glasser, Matthew
What are the input names?

Matt.

From:  on behalf of "Sanchez, Juan 
(NYSPI)" 
Date: Friday, June 14, 2019 at 12:34 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] -volume-label-import


Dear HCP community

I am trying to create a labeled volume that will work with 
-cifti-create-dense-from-template.

The volume data has over 100 ROIs, and the cifti create function only imports 
specified labels:



 CORTEX_LEFT CORTEX_RIGHT CEREBELLUM ACCUMBENS_LEFT ACCUMBENS_RIGHT 
ALL_GREY_MATTER ALL_WHITE_MATTER AMYGDALA_LEFT AMYGDALA_RIGHT BRAIN_STEM 
CAUDATE_LEFT CAUDATE_RIGHT CEREBELLAR_WHITE_MATTER_LEFT 
CEREBELLAR_WHITE_MATTER_RIGHT CEREBELLUM_LEFT CEREBELLUM_RIGHT 
CEREBRAL_WHITE_MATTER_LEFT CEREBRAL_WHITE_MATTER_RIGHT CORTEX 
DIENCEPHALON_VENTRAL_LEFT DIENCEPHALON_VENTRAL_RIGHT HIPPOCAMPUS_LEFT 
HIPPOCAMPUS_RIGHT INVALID OTHER OTHER_GREY_MATTER OTHER_WHITE_MATTER 
PALLIDUM_LEFT PALLIDUM_RIGHT PUTAMEN_LEFT PUTAMEN_RIGHT THALAMUS_LEFT 
THALAMUS_RIGHT



I need to import all of the labeled ROIs' values from the NIfTI volume into the 
subcortical CIFTI. I tried labeling all of the ROIs as OTHER, but the name 
collision in the input names did not allow that to work.



Does anyone know how to solve this, or a workaround?

Thanks so much

J





Re: [HCP-Users] Myelin values from white matter

2019-06-14 Thread Glasser, Matthew
You would need to do things like that with the T1w/T2w ratio volume.  I’ll note 
that the relationship between myelin and T1w/T2w is less well characterized in 
white matter.
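One possible route for per-parcel and global white-matter means from the ratio volume, sketched under the assumption that FSL is available and that the wmparc labels have been resampled into the same space as the ratio volume (file names are hypothetical):

```shell
ratio=T1wDividedByT2w.nii.gz   # T1w/T2w ratio volume (hypothetical name)
wmparc=wmparc.nii.gz           # FreeSurfer wmparc in the same space
if command -v fslstats >/dev/null 2>&1; then
  # -K reports one mean per integer label value in wmparc
  fslstats -K "$wmparc" "$ratio" -m > wmparc_means.txt
  # global mean within a binarized wmparc mask (note: wmparc also
  # contains non-WM labels; restrict to the WM label ranges as needed)
  fslmaths "$wmparc" -bin wm_mask.nii.gz
  fslstats "$ratio" -k wm_mask.nii.gz -m > wm_global_mean.txt
fi
```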

Matt.

From:  on behalf of Antonin Skoch 

Reply-To: Antonin Skoch 
Date: Wednesday, June 12, 2019 at 8:27 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Myelin values from white matter

Dear experts,

I was using

s=my_subject_ID
p=/path/to/study/${s}   # placeholder: subject directory (undefined in my original script)
atlas=HCP_S1200_GroupAvg_v1/Gordon333.32k_fs_LR.dlabel.nii
wb_command -cifti-parcellate \
  $p/MNINonLinear/fsaverage_LR32k/${s}.MyelinMap_BC.32k_fs_LR.dscalar.nii \
  $atlas COLUMN ${s}.MyelinMap_BC.pscalar.nii -method MEAN
wb_command -cifti-convert -to-text ${s}.MyelinMap_BC.pscalar.nii \
  ${s}.MyelinMap_BC.txt

to obtain average myelin content from each parcel in Gordon333 atlas.

I would now need to obtain myelin map values from each parcel of FreeSurfer's 
wmparc, and also a global "myelin" value for all of the white matter.

How can I achieve that?

Thank you very much in advance,

Antonin Skoch




Re: [HCP-Users] Corrupt Input Data

2019-06-11 Thread Glasser, Matthew
dcm2bids is not a part of the HCP Pipelines, and this seems like an ABCD data 
corruption issue.

Matt.

From:  on behalf of Eric Earl 

Date: Tuesday, June 11, 2019 at 5:59 PM
To: NIMH Data Archive Help Desk 
Cc: Damien Fair , HCPhelp 
Subject: Re: [HCP-Users] Corrupt Input Data

Sveta,

Anders meant the ABCD baseline year 1 data.

~Eric


On Jun 11, 2019 7:52 AM, NIMH Data Archive Help Desk  
wrote:

Sveta Novikova (HELP DESK )

Jun 11, 10:52 AM EDT
Sending this question to HCP support:
I’m finishing up HCP data processing for the year 1 baseline data and I’m left 
with 250 subjects that I am not able to process due to problems with their 
input data. I’ve separated them out into two lists that I attached here.
corrupt_dcms.txt is a list of 138 subjects with at least one corrupt dicom in 
their dataset, which causes dcm2bids to fail. I’ve tried to redownload the tgzs 
but the error persists.
coil_error.txt is a list of 112 subjects that fail dcm2bids conversion 
primarily because the string “Coil Error” is in the series description or there 
is an error with “Slice timing”
I’m wondering if you can take a look at these subjects on your end and see if 
any of these errors can be fixed. My only other solution is to exclude them 
from processing, but often this means excluding the entire subject, which I am 
trying hard not to do.
Please let me know if there’s anything that requires further clarification.

NDA Help Desk


Perronea

Jun 10, 7:05 PM EDT

NDA,

I’m finishing up HCP data processing for the year 1 baseline data and I’m left 
with 250 subjects that I am not able to process due to problems with their 
input data. I’ve separated them out into two lists that I attached here.

corrupt_dcms.txt is a list of 138 subjects with at least one corrupt dicom in 
their dataset, which causes dcm2bids to fail. I’ve tried to redownload the tgzs 
but the error persists.

coil_error.txt is a list of 112 subjects that fail dcm2bids conversion 
primarily because the string “Coil Error” is in the series description or there 
is an error with “Slice timing”

I’m wondering if you can take a look at these subjects on your end and see if 
any of these errors can be fixed. My only other solution is to exclude them 
from processing, but often this means excluding the entire subject, which I am 
trying hard not to do.

Please let me know if there’s anything that requires further clarification.

Thank you,

Anders Perrone

Anders Perrone,

Research Assistant II

Fair Neuroimaging Lab

perro...@ohsu.edu

503-418-1897

Oregon Health & Science University

Mail code: L470

3181 SW Sam Jackson Park Road

Portland, Oregon 97239-3098

Attachment(s)
image001.png
corrupt_dcms.txt
coil_error.txt



Re: [HCP-Users] A question about the usage of Connectome Workbench

2019-06-08 Thread Glasser, Matthew
You can set the number of rows, the number of columns, and the spacing between 
slices.  Then you can set the midpoint of the slices using the numbers next to 
the P, C, or A buttons.

Matt.

From:  on behalf of Aaron C 

Date: Sunday, June 9, 2019 at 12:46 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] A question about the usage of Connectome Workbench

Dear HCP experts,

I have a question about the usage of Connectome Workbench. In the "Volume" tab, 
when using "Montage", how could I control the range of slices? For example, if 
I view multiple parasagittal slices (by clicking the "P" button), how do I vary 
the X values? Thank you.



Re: [HCP-Users] Cluster threshold by wb_command -metric-find-clusters

2019-06-08 Thread Glasser, Matthew
It looks like you can use wb_command -cifti-find-clusters with those 4 files.  
"Metric" refers to a GIFTI file.
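For example, a hypothetical call using those four files might look like the following (the input map name and thresholds are placeholders; check `wb_command -cifti-find-clusters` in your Workbench version for the exact argument order). The midthickness_va_avg shape files supply average vertex areas via -corrected-areas, which matters because the std spheres themselves have heavily distorted areas:

```shell
# Hypothetical cluster thresholding: values above 2.0, clusters of at
# least 20 mm^2 on the surface and 20 mm^3 in the volume.
out=clusters.dscalar.nii
if command -v wb_command >/dev/null 2>&1; then
  wb_command -cifti-find-clusters mymap.dscalar.nii \
    2.0 20 2.0 20 COLUMN "$out" \
    -left-surface fsaverage5_std_sphere.L.10k_fsavg_L.surf.gii \
    -corrected-areas fsaverage5.L.midthickness_va_avg.10k_fsavg_L.shape.gii \
    -right-surface fsaverage5_std_sphere.R.10k_fsavg_R.surf.gii \
    -corrected-areas fsaverage5.R.midthickness_va_avg.10k_fsavg_R.shape.gii
fi
```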

Matt.

From:  on behalf of Zaixu Cui 

Date: Saturday, June 8, 2019 at 11:53 PM
To: "HCP-Users@humanconnectome.org" 
Subject: [HCP-Users] Cluster threshold by wb_command -metric-find-clusters

Hi developers,

I am trying to do cluster thresholding using the command wb_command 
-metric-find-clusters.

The manual is as follows:

   wb_command -metric-find-clusters
      <surface> - the surface to compute on
      <metric-in> - the input metric
      <value-threshold> - threshold for data values
      <minimum-area> - threshold for cluster area, in mm^2
      <metric-out> - output - the output metric

I have a metric file in CIFTI format (.dscalar.nii). I wonder what the surface 
file should be. I am working in fsaverage5 space and have .surf.gii and 
.shape.gii files:

fsaverage5.L.midthickness_va_avg.10k_fsavg_L.shape.gii

fsaverage5.R.midthickness_va_avg.10k_fsavg_R.shape.gii

fsaverage5_std_sphere.L.10k_fsavg_L.surf.gii

fsaverage5_std_sphere.R.10k_fsavg_R.surf.gii

Can I use these four files as the surface input for this command? I only have 
one surface per hemisphere. Do I need to transform them into a CIFTI file, and 
if so, could you remind me which command can be used to convert a surface file 
into CIFTI?

Thank you so much for the help, as always. Workbench is really awesome 
software, and it helps me a lot with my project.

Best wishes
-
Zaixu





Re: [HCP-Users] Strange output when running ./FreeSurferPipelineBatch.sh

2019-06-08 Thread Glasser, Matthew
That is a normal, if somewhat ominous, informational message.

Matt.

From:  on behalf of Kavinash Loganathan 

Date: Saturday, June 8, 2019 at 2:39 AM
To: "HCP-Users@humanconnectome.org" 
Subject: [HCP-Users] Strange output when running ./FreeSurferPipelineBatch.sh

Hi guys, I finished the PreFreeSurfer run on the example file and had just 
started the FreeSurfer pipeline when I got this output:

This script must be SOURCED to correctly setup the environment prior to running 
any of the other HCP scripts contained here

100307
About to use fsl_sub to queue or run 
/home/kavinash/projects/Pipelines/FreeSurfer/FreeSurferPipeline.sh
set -- --subject=100307   
--subjectDIR=/home/kavinash/projects/Pipelines_ExampleData/100307/T1w   
--t1=/home/kavinash/projects/Pipelines_ExampleData/100307/T1w/T1w_acpc_dc_restore.nii.gz
   
--t1brain=/home/kavinash/projects/Pipelines_ExampleData/100307/T1w/T1w_acpc_dc_restore_brain.nii.gz
   
--t2=/home/kavinash/projects/Pipelines_ExampleData/100307/T1w/T2w_acpc_dc_restore.nii.gz
   --printcom=
. /home/kavinash/projects/Pipelines/Examples/Scripts/SetUpHCPPipeline.sh

What does this mean? I think I sourced the script correctly (it's much the same 
as the PreFreeSurfer file locations), but it doesn't openly say what error 
occurred.

I've attached the output file for the above run.

Thanks in advance

Kavinash





Re: [HCP-Users] RE-POSTING: Duplicate subjects in Connectome in a Box?

2019-06-07 Thread Glasser, Matthew
  1.  I don’t know, but perhaps someone from NRG can answer.
  2.  Those are different smoothing levels.  I do not recommend using more than 
s4 and personally don’t use anything but s2.  You can see Coalson et al 2018 
PNAS: https://www.pnas.org/content/115/27/E6356.short for the deleterious 
effects of spatial smoothing, which are worst in the volume in 3D, but still 
problematic on the surface when large kernels are used.

Matt.

From:  on behalf of "Fales, Christina L" 

Date: Friday, June 7, 2019 at 5:40 PM
To: "hcp-users@humanconnectome.org" 
Cc: "Fales, Christina L" 
Subject: [HCP-Users] RE-POSTING: Duplicate subjects in Connectome in a Box?

Hi HCP gurus:

I’m reposting the following question(s) in hopes that one of you knows the 
answer.

(1)
On looking through the data, it appears that there is duplication between the 
drives delivered. Is that intentional? We received 12 drives, but four of them 
appear to be duplicates. Specifically, in the pairs below, every subject on the 
first drive (e.g., “sde1”) also occurs on the second (“sdi1”). The files look 
the same (i.e., have the same file size). What is the difference between the 
people on 
corresponding drives in each pair?

sde1, sdi1
sdd1, sdh1
sdc1, sdg1
sdf1, sdk1

Subjects on drives sdb1, sdj1, sdbl1, and sdbm1 appear to be unique.

(2)
I cannot find anywhere an explanation of the differences between 
“analysis_s12”, “analysis_s8”, “analysis_s4”, and “analysis_s2”. What is the 
difference between these?

Thanks very much….
-Christina Fales

Christina Fales, PhD
Research Scientist
Division of Psychiatry Research
Zucker Hillside Hospital
Feinstein Institute for Medical Research
Glen Oaks, NY 11004


The information contained in this electronic e-mail transmission and any 
attachments are intended only for the use of the individual or entity to whom 
or to which it is addressed, and may contain information that is privileged, 
confidential and exempt from disclosure under applicable law. If the reader of 
this communication is not the intended recipient, or the employee or agent 
responsible for delivering this communication to the intended recipient, you 
are hereby notified that any dissemination, distribution, copying or disclosure 
of this communication and any attachment is strictly prohibited. If you have 
received this transmission in error, please notify the sender immediately by 
telephone and electronic mail, and delete the original communication and any 
attachment from any computer, server or other electronic recording or storage 
device or medium. Receipt by anyone other than the intended recipient is not a 
waiver of any attorney-client, physician-patient or other privilege.




Re: [HCP-Users] wb_command -cifti-gradient

2019-06-04 Thread Glasser, Matthew
You can use the midthickness surfaces .surf.gii in the same folder.

Matt.
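
Concretely, a hedged sketch of the call using the midthickness surfaces (the echo prints the command instead of running Workbench; the subject ID and file names follow the usual HCP 32k fs_LR naming but are placeholders here):

```shell
# Sketch only: prints (does not run) a -cifti-gradient call using the
# subject's midthickness surfaces.  Subject ID and paths are placeholders.
subj=100307
echo wb_command -cifti-gradient thickness.32k_fs_LR.dscalar.nii COLUMN \
  thickness_grad.32k_fs_LR.dscalar.nii \
  -left-surface "$subj".L.midthickness.32k_fs_LR.surf.gii \
  -right-surface "$subj".R.midthickness.32k_fs_LR.surf.gii
```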

From: 秦键 
Date: Tuesday, June 4, 2019 at 10:02 PM
To: "Glasser, Matthew" 
Cc: "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] wb_command -cifti-gradient

A 32k fs_LR file with the suffix .dscalar.nii; thickness.32k_fs_LR.dscalar.nii 
is an example.


秦键
Email: qinjian...@126.com

Signature customized by NetEase Mail Master<https://mail.163.com/dashi/dlpro.html?from=mail88>
On 06/05/2019 10:36, Glasser, Matthew<mailto:glass...@wustl.edu> wrote:
What file did you try to run this on?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of 秦键 mailto:qinjian...@126.com>>
Date: Tuesday, June 4, 2019 at 9:32 PM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] wb_command -cifti-gradient


Dear professors,
When I used the wb_command -cifti-gradient for fs_LR 32k cifti file, I was 
asked to input the left and right surface files, where can I find the left and 
right surface files and what are the format of them?  Can I have an example of 
the use of the wb_command -cifti-gradient?
Thank you and best wishes!









Re: [HCP-Users] wb_command -cifti-gradient

2019-06-04 Thread Glasser, Matthew
What file did you try to run this on?

Matt.

From:  on behalf of 秦键 

Date: Tuesday, June 4, 2019 at 9:32 PM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] wb_command -cifti-gradient


Dear professors,
When I used the wb_command -cifti-gradient for fs_LR 32k cifti file, I was 
asked to input the left and right surface files, where can I find the left and 
right surface files and what are the format of them?  Can I have an example of 
the use of the wb_command -cifti-gradient?
Thank you and best wishes!








Re: [HCP-Users] pdconn analysis problems

2019-06-03 Thread Glasser, Matthew
Each row or column (the shorter dimension) will be a dense map for connectivity 
to a parcel.

Matt.

From: Joseph Orr 
Date: Monday, June 3, 2019 at 8:03 PM
To: "Glasser, Matthew" 
Cc: HCP Users 
Subject: Re: [HCP-Users] pdconn analysis problems

I wanted to contrast the connectivity of the different parcels in order to look 
for evidence of gradients in networks. I'll try it in matlab.
--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Mon, Jun 3, 2019 at 7:58 PM Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
Are you wanting to view the files?  You could probably translate the file into 
a .dscalar.nii using matlab.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Joseph Orr mailto:joseph@tamu.edu>>
Date: Monday, June 3, 2019 at 7:55 PM
To: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] pdconn analysis problems

Thanks Tim, running cifti-transpose and separating on COLUMN solved both 
issues. Is there a way to separate the different parcels that make up a pdconn 
so that I can compare the connectivity maps between parcels?

Thanks,
Joe

--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Mon, Jun 3, 2019 at 5:14 PM Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
"Segmentation fault" is a computer term about invalid memory access, not 
related to the neuroscience term of segmentation.  Due to using the wrong 
variable while copying map names, this command can crash when using ROW when 
the rows are longer than the columns.  You can get around it by transposing and 
separating with COLUMN instead.  This will be fixed in the next release.

I don't know if this is also the reason palm was crashing.  We could make a 
bleeding edge build available if you want to test it.

Tim
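
The transpose-then-separate workaround described above can be sketched as follows (echoes print the calls rather than running Workbench; the file names are placeholders, and the .dpconn.nii extension for the transposed file is a guess at the resulting type):

```shell
# Sketch only: prints (does not run) the transpose-then-separate
# workaround for the ROW-direction crash.  File names are placeholders.
in=input.pdconn.nii
echo wb_command -cifti-transpose "$in" transposed.dpconn.nii
echo wb_command -cifti-separate transposed.dpconn.nii COLUMN \
  -volume-all volume_part.nii.gz
```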


On Sat, Jun 1, 2019 at 2:17 PM Joseph Orr 
mailto:joseph@tamu.edu>> wrote:
Sure thing, here's a link to the file. Let me know if there are access problems 
and I can try another sharing via dropbox.
L-ctx_R-CB_crosscorr.pdconn.nii<https://urldefense.proofpoint.com/v2/url?u=https-3A__drive.google.com_a_tamu.edu_file_d_1W9n40sZgLNSn8ODIYf9xqBoAZ5b5C71E_view-3Fusp-3Ddrive-5Fweb=DwMFaQ=ODFT-G5SujMiGrKuoJJjVg=ZKy1VO33u0kvO-PqY1gpb9Ld-AGhtT8c9PAcpsEyp70=dtQbE_Obfv7WQhtS5EGqWgePsIaI6hU895cKCRVdqX0=jwPQkvc49uGNzjH-ER9ry1R9fgOMJ093SIy8J5g11Nk=>

--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Sat, Jun 1, 2019 at 1:31 PM Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
It sounds like there might be both Workbench and PALM bugs here.  Perhaps you 
could upload the data somewhere (off list if needed), so Tim and Anderson could 
take a look?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Joseph Orr mailto:joseph@tamu.edu>>
Date: Saturday, June 1, 2019 at 1:00 PM
To: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] pdconn analysis problems

I have a pdconn input (cortical ptseries by subcortical dtseries) that I'd like 
to analyze with PALM, but I'm having some trouble. I only found one old post 
related to this, but the only suggestion was to use the -transposedata flag in 
palm. When palm tries to read in the pdconn, I get an error "Undefined function 
or variable 'Y'". The command line output is below. I tried with the data 
transposed and not, but I get the same error. I tried to separate the pdconn to 
just the volume, but this yielded a segmentation error: ($ wb_command 
-cifti-separate input.pdconn.nii ROW -volume-all test
/Applications/workbench/bin_macosx64/wb_command: line 14:  1248 Segmentation 
fault: 11  
"$directory"/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command "$@").

Are there any additional commands I can run on a pdconn to separate each parcel 
and have a series of dconn files? I'd be interested in doing this in order to 
compare the dense connectivity maps for different parcels.

Thanks!
Joe

Command line output for palm
Running PALM alpha115 using MATLAB 9.5.0.1067069 (R2018b) Update 4 with the 
following options:
-i input.pdconn.nii
-transposedata
-o palm
-d design.mat
-t design.con
-T
Found FSL in /usr/local/fsl
Found FreeSurfer in /Applications/freesurfer
Found HCP Workbench executable in 
/Applications/workbench/bin_macosx64/wb_command
Reading input 1/1: input.pdconn.nii
Error using palm_ready 
(/Users/josephorr/Documents/MATLAB/palm-alpha115/palm_ready.m:141)
Undefined function or variable 'Y'



--
Joseph M. Orr, Ph.D.
Assistant Professor
Depa

Re: [HCP-Users] pdconn analysis problems

2019-06-03 Thread Glasser, Matthew
Are you wanting to view the files?  You could probably translate the file into 
a .dscalar.nii using matlab.

Matt.

From:  on behalf of Joseph Orr 

Date: Monday, June 3, 2019 at 7:55 PM
To: HCP Users 
Subject: Re: [HCP-Users] pdconn analysis problems

Thanks Tim, running cifti-transpose and separating on COLUMN solved both 
issues. Is there a way to separate the different parcels that make up a pdconn 
so that I can compare the connectivity maps between parcels?

Thanks,
Joe

--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Mon, Jun 3, 2019 at 5:14 PM Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
"Segmentation fault" is a computer term about invalid memory access, not 
related to the neuroscience term of segmentation.  Due to using the wrong 
variable while copying map names, this command can crash when using ROW when 
the rows are longer than the columns.  You can get around it by transposing and 
separating with COLUMN instead.  This will be fixed in the next release.

I don't know if this is also the reason palm was crashing.  We could make a 
bleeding edge build available if you want to test it.

Tim


On Sat, Jun 1, 2019 at 2:17 PM Joseph Orr 
mailto:joseph@tamu.edu>> wrote:
Sure thing, here's a link to the file. Let me know if there are access problems 
and I can try another sharing via dropbox.
L-ctx_R-CB_crosscorr.pdconn.nii<https://urldefense.proofpoint.com/v2/url?u=https-3A__drive.google.com_a_tamu.edu_file_d_1W9n40sZgLNSn8ODIYf9xqBoAZ5b5C71E_view-3Fusp-3Ddrive-5Fweb=DwMFaQ=ODFT-G5SujMiGrKuoJJjVg=ZKy1VO33u0kvO-PqY1gpb9Ld-AGhtT8c9PAcpsEyp70=dtQbE_Obfv7WQhtS5EGqWgePsIaI6hU895cKCRVdqX0=jwPQkvc49uGNzjH-ER9ry1R9fgOMJ093SIy8J5g11Nk=>

--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Sat, Jun 1, 2019 at 1:31 PM Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
It sounds like there might be both Workbench and PALM bugs here.  Perhaps you 
could upload the data somewhere (off list if needed), so Tim and Anderson could 
take a look?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Joseph Orr mailto:joseph@tamu.edu>>
Date: Saturday, June 1, 2019 at 1:00 PM
To: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] pdconn analysis problems

I have a pdconn input (cortical ptseries by subcortical dtseries) that I'd like 
to analyze with PALM, but I'm having some trouble. I only found one old post 
related to this, but the only suggestion was to use the -transposedata flag in 
palm. When palm tries to read in the pdconn, I get an error "Undefined function 
or variable 'Y'". The command line output is below. I tried with the data 
transposed and not, but I get the same error. I tried to separate the pdconn to 
just the volume, but this yielded a segmentation error: ($ wb_command 
-cifti-separate input.pdconn.nii ROW -volume-all test
/Applications/workbench/bin_macosx64/wb_command: line 14:  1248 Segmentation 
fault: 11  
"$directory"/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command "$@").

Are there any additional commands I can run on a pdconn to separate each parcel 
and have a series of dconn files? I'd be interested in doing this in order to 
compare the dense connectivity maps for different parcels.

Thanks!
Joe

Command line output for palm
Running PALM alpha115 using MATLAB 9.5.0.1067069 (R2018b) Update 4 with the 
following options:
-i input.pdconn.nii
-transposedata
-o palm
-d design.mat
-t design.con
-T
Found FSL in /usr/local/fsl
Found FreeSurfer in /Applications/freesurfer
Found HCP Workbench executable in 
/Applications/workbench/bin_macosx64/wb_command
Reading input 1/1: input.pdconn.nii
Error using palm_ready 
(/Users/josephorr/Documents/MATLAB/palm-alpha115/palm_ready.m:141)
Undefined function or variable 'Y'



--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A Institute for Neuroscience
Texas A University
College Station, TX


Re: [HCP-Users] MMP parcellation for 7T

2019-06-02 Thread Glasser, Matthew
With the 3T data.

Matt.

From: "Shadi, Kamal" 
Date: Sunday, June 2, 2019 at 8:00 PM
To: "Glasser, Matthew" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

I see. In that case, where should I find the 32k meshes for 7T? I only see 59k 
meshes in the 7T directory.


____________
From: Glasser, Matthew 
Sent: Sunday, June 2, 2019 8:38 PM
To: Shadi, Kamal; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] MMP parcellation for 7T

If you haven’t run the analysis yet, I would encourage you to just use the 32k 
meshes.  No one has shown the 59k meshes to have a clear benefit and they take 
up a lot more space.

Matt.

From: "Shadi, Kamal" 
Date: Sunday, June 2, 2019 at 7:35 PM
To: "Glasser, Matthew" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

I want to run probtrackx using 7T diffusion data and MMP parcellation and since 
7T data has 59k meshes, I would like to have the ROIs at the same mesh 
resolution.



From: Glasser, Matthew 
Sent: Sunday, June 2, 2019 7:42 PM
To: Shadi, Kamal; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] MMP parcellation for 7T

I take it you are using the experimental 59k surfaces, rather than the 32k 
surfaces for general use?

Matt.

From: "Shadi, Kamal" 
Date: Sunday, June 2, 2019 at 6:40 PM
To: "Glasser, Matthew" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

Is there a recommended way to up-sample the MMP ROIs to 7T surfaces?

Thanks,
Kamal

From: "Glasser, Matthew" 
Date: Sunday, June 2, 2019 at 1:35 PM
To: "Shadi, Kamal" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

There isn’t a separate 7T file yet.  We do plan to investigate the parcellation 
with 7T fMRI data.

Matt.

From:  on behalf of "Shadi, Kamal" 

Date: Sunday, June 2, 2019 at 11:32 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] MMP parcellation for 7T

Dear HCP Experts,

Is there a dlabel.nii file containing 180 MMP ROIs per hemisphere for 7T data 
release? I can find the file for 3T release in BALSA 
(Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii) 
but I could not find a similar file for 7T.

Thanks in advance,
Kamal





Re: [HCP-Users] MMP parcellation for 7T

2019-06-02 Thread Glasser, Matthew
If you haven’t run the analysis yet, I would encourage you to just use the 32k 
meshes.  No one has shown the 59k meshes to have a clear benefit and they take 
up a lot more space.

Matt.

From: "Shadi, Kamal" 
Date: Sunday, June 2, 2019 at 7:35 PM
To: "Glasser, Matthew" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

I want to run probtrackx using 7T diffusion data and MMP parcellation and since 
7T data has 59k meshes, I would like to have the ROIs at the same mesh 
resolution.


____________
From: Glasser, Matthew 
Sent: Sunday, June 2, 2019 7:42 PM
To: Shadi, Kamal; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] MMP parcellation for 7T

I take it you are using the experimental 59k surfaces, rather than the 32k 
surfaces for general use?

Matt.

From: "Shadi, Kamal" 
Date: Sunday, June 2, 2019 at 6:40 PM
To: "Glasser, Matthew" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

Is there a recommended way to up-sample the MMP ROIs to 7T surfaces?

Thanks,
Kamal

From: "Glasser, Matthew" 
Date: Sunday, June 2, 2019 at 1:35 PM
To: "Shadi, Kamal" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

There isn’t a separate 7T file yet.  We do plan to investigate the parcellation 
with 7T fMRI data.

Matt.

From:  on behalf of "Shadi, Kamal" 

Date: Sunday, June 2, 2019 at 11:32 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] MMP parcellation for 7T

Dear HCP Experts,

Is there a dlabel.nii file containing 180 MMP ROIs per hemisphere for 7T data 
release? I can find the file for 3T release in BALSA 
(Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii) 
but I could not find a similar file for 7T.

Thanks in advance,
Kamal





Re: [HCP-Users] MMP parcellation for 7T

2019-06-02 Thread Glasser, Matthew
I take it you are using the experimental 59k surfaces, rather than the 32k 
surfaces for general use?

Matt.

From: "Shadi, Kamal" 
Date: Sunday, June 2, 2019 at 6:40 PM
To: "Glasser, Matthew" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

Is there a recommended way to up-sample the MMP ROIs to 7T surfaces?

Thanks,
Kamal

From: "Glasser, Matthew" 
Date: Sunday, June 2, 2019 at 1:35 PM
To: "Shadi, Kamal" , "hcp-users@humanconnectome.org" 

Subject: Re: [HCP-Users] MMP parcellation for 7T

There isn’t a separate 7T file yet.  We do plan to investigate the parcellation 
with 7T fMRI data.

Matt.

From:  on behalf of "Shadi, Kamal" 

Date: Sunday, June 2, 2019 at 11:32 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] MMP parcellation for 7T

Dear HCP Experts,

Is there a dlabel.nii file containing 180 MMP ROIs per hemisphere for 7T data 
release? I can find the file for 3T release in BALSA 
(Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii) 
but I could not find a similar file for 7T.

Thanks in advance,
Kamal





Re: [HCP-Users] MMP parcellation for 7T

2019-06-02 Thread Glasser, Matthew
There isn’t a separate 7T file yet.  We do plan to investigate the parcellation 
with 7T fMRI data.

Matt.

From:  on behalf of "Shadi, Kamal" 

Date: Sunday, June 2, 2019 at 11:32 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] MMP parcellation for 7T

Dear HCP Experts,

Is there a dlabel.nii file containing 180 MMP ROIs per hemisphere for 7T data 
release? I can find the file for 3T release in BALSA 
(Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii) 
but I could not find a similar file for 7T.

Thanks in advance,
Kamal





Re: [HCP-Users] pdconn analysis problems

2019-06-01 Thread Glasser, Matthew
It sounds like there might be both Workbench and PALM bugs here.  Perhaps you 
could upload the data somewhere (off list if needed), so Tim and Anderson could 
take a look?

Matt.

From:  on behalf of Joseph Orr 

Date: Saturday, June 1, 2019 at 1:00 PM
To: HCP Users 
Subject: [HCP-Users] pdconn analysis problems

I have a pdconn input (cortical ptseries by subcortical dtseries) that I'd like 
to analyze with PALM, but I'm having some trouble. I only found one old post 
related to this, but the only suggestion was to use the -transposedata flag in 
palm. When palm tries to read in the pdconn, I get an error "Undefined function 
or variable 'Y'". The command line output is below. I tried with the data 
transposed and not, but I get the same error. I tried to separate the pdconn to 
just the volume, but this yielded a segmentation error: ($ wb_command 
-cifti-separate input.pdconn.nii ROW -volume-all test
/Applications/workbench/bin_macosx64/wb_command: line 14:  1248 Segmentation 
fault: 11  
"$directory"/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command "$@").

Are there any additional commands I can run on a pdconn to separate each parcel 
and have a series of dconn files? I'd be interested in doing this in order to 
compare the dense connectivity maps for different parcels.

Thanks!
Joe

Command line output for palm
Running PALM alpha115 using MATLAB 9.5.0.1067069 (R2018b) Update 4 with the 
following options:
-i input.pdconn.nii
-transposedata
-o palm
-d design.mat
-t design.con
-T
Found FSL in /usr/local/fsl
Found FreeSurfer in /Applications/freesurfer
Found HCP Workbench executable in 
/Applications/workbench/bin_macosx64/wb_command
Reading input 1/1: input.pdconn.nii
Error using palm_ready 
(/Users/josephorr/Documents/MATLAB/palm-alpha115/palm_ready.m:141)
Undefined function or variable 'Y'



--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Group-averaging of task fMRI data

2019-05-30 Thread Glasser, Matthew
Hi Mike,

My recollection was that the unnamed .dscalar.nii files were zstats, not beta 
maps.  I added the beta maps later when it became clear that using statistical 
significance maps was inappropriate for parcellation.

Matt.

From:  on behalf of Reza Rajimehr 

Date: Thursday, May 30, 2019 at 12:30 PM
To: "Harms, Michael" , Nooshin Abbasi 

Cc: hcp-users 
Subject: Re: [HCP-Users] Group-averaging of task fMRI data

Thanks Michael for your detailed and helpful answers.

Best,
Reza


On Thu, May 30, 2019 at 6:10 PM Harms, Michael 
mailto:mha...@wustl.edu>> wrote:

Hi Reza,

1) We’ve already generated Cohen’s d-style effect size maps for all contrasts, 
using all subjects, as part of the “Group Average Dataset” available at 
https://db.humanconnectome.org/data/projects/HCP_1200.  If you need it computed 
for a specific subset of subjects, then yes, you can use the approach that you 
outlined.  Note that the ensuing “effect size” does not account for the family 
structure in the data (i.e., to the extent that the estimate of the std across 
subjects is biased by the family structure, then the estimate of the effect 
size is biased as well).

2) A .dtseries.nii file is still a “spatial map”.  We just didn’t bother to 
formally convert those particular outputs to a .dscalar.nii (e.g., via 
-cifti-change-mapping).  A dscalar version of all the copes (merged into a 
single file) for a given task and subject are available in the root level of 
the .feat directory containing the Level2 task analysis results for that task 
and subject.  In newer pipeline versions, we create separate merged files for 
both the “zstat” and “cope” files of the individual contrasts.  However, at the 
time of the processing of the HCP-YA data, only a single merged dscalar was 
created, and that was for the copes (and it does not unfortunately have “cope” 
as part of its filename).

3) We recommend using PALM for group statistical analysis.  You can find a 
tutorial in the “tfMRI and PALM” practical available as part of the HCP Course: 
https://store.humanconnectome.org/courses/2018/exploring-the-human-connectome.php.
  And no, you generally do *not* want to use the individual subject “zstat1” 
maps as inputs to a statistical computation, which would be “computing 
statistics of a statistic” (rather than the statistic of an effect size).

4) The outputs produced are simply the same as those produced by FSL’s FLAMEO, 
albeit in CIFTI rather than NIFTI format.  So see FSL’s FLAMEO documentation.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110     Email: mha...@wustl.edu

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Reza Rajimehr mailto:rajim...@gmail.com>>
Date: Wednesday, May 29, 2019 at 7:36 PM
To: hcp-users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Group-averaging of task fMRI data

Hi,

For a group of subjects (e.g. 100 subjects in HCP S1200), we want to generate a 
group-average Cohen’s d map for a particular contrast in the working memory 
task. For this, we take the level2 “cope1.dtseries.nii” file in the cope20.feat 
folder of all those subjects, merge them using -cifti-merge, then apply 
-cifti-reduce mean, -cifti-reduce stdev, and -cifti-math mean/stdev.
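The merge/reduce/math steps described above amount to computing a per-grayordinate mean and standard deviation across subjects and taking their ratio. A minimal numpy sketch on synthetic data (array sizes and names are hypothetical, just for illustration — a real merged CIFTI would have ~91k grayordinates per map):

```python
import numpy as np

# Synthetic stand-in for merged cope maps: 100 subjects x 5 grayordinates.
rng = np.random.default_rng(0)
copes = rng.normal(loc=0.5, scale=1.0, size=(100, 5))

# Equivalent of -cifti-reduce mean / stdev and -cifti-math mean/stdev:
mean = copes.mean(axis=0)
stdev = copes.std(axis=0, ddof=1)   # sample std across subjects
cohens_d = mean / stdev             # one effect-size value per grayordinate

print(cohens_d.shape)
```

As noted elsewhere in this thread, to the extent the std estimate is biased by family structure in the data, the effect size is biased as well.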

Questions:

1) Is the above procedure correct? Or do you recommend other commands?

2) Why is the file named cope1.dtseries.nii when it is not time-series data? 
Why not name it cope1.dscalar.nii, since it is a spatial map?

3) If we want to generate a group-average zstat map, what should we do? I guess 
it should involve using the “zstat1.dtseries.nii” files, but I don't know how.

4) Is there any documentation somewhere describing all the files within the 
cope folder? E.g. these files:

mean_random_effects_var1.dtseries.nii
pe1.dtseries.nii
res4d.dtseries.nii
tdof_t1.dtseries.nii
tstat1.dtseries.nii
varcope1.dtseries.nii
weights1.dtseries.nii

Thanks,
Reza

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list

Re: [HCP-Users] *WU-SPAM* [suspect] assessing MSMAll quality

2019-05-27 Thread Glasser, Matthew
StrainR values are 2x higher than StrainJ in general.  As for acceptable 
values, keep in mind that things like areal distortion/StrainJ are the log2 of 
the areal ratio.  That means that +/-2 means a 4-fold expansion or contraction. 
What is acceptable may vary from application to application.  In general, one 
wants accurate alignment without severe distortions for a functional alignment 
(different from a folding alignment, where one wants to constrain distortions 
so as to avoid overfitting folding patterns).
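The log2 convention above is easy to sanity-check numerically — a small sketch of how a StrainJ/areal-distortion value maps to an actual expansion/contraction ratio:

```python
# StrainJ / areal distortion maps store log2 of the areal expansion ratio,
# so +2 means 4-fold expansion and -2 means 4-fold contraction.
for strain_j in (-2.0, -1.0, 0.0, 1.0, 2.0):
    ratio = 2.0 ** strain_j
    print(f"StrainJ {strain_j:+.0f} -> areal ratio {ratio:g}x")
```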

Matt.

From: Moataz Assem 
Date: Monday, May 27, 2019 at 8:48 PM
To: "hcp-users@humanconnectome.org" , "Glasser, 
Matthew" 
Subject: Re: *WU-SPAM* [suspect][HCP-Users] assessing MSMAll quality

Thanks Matt. And is less than 2 also the acceptable range for the values of 
those files?
Moataz


On Mon, May 27, 2019 at 5:47 AM +0100, "Glasser, Matthew" 
mailto:glass...@wustl.edu>> wrote:
Hi Moataz,

I check the StrainJ (isotropic areal distortion) and StrainR (shape distortion) 
and the overall alignment.  Beyond that, there isn’t currently a quantitative 
way to do this.

Matt.

From:  on behalf of Moataz Assem 

Date: Sunday, May 26, 2019 at 10:03 AM
To: "hcp-users@humanconnectome.org" 
Subject: *WU-SPAM* [suspect][HCP-Users] assessing MSMAll quality

Hi,

What are the recommended quality assessment steps for MSMAll? I currently check 
areal distortion maps (and make sure the values aren’t too large, e.g. 
more/less than 2/-2) and visually compare individual RSN and myelin maps to the 
atlas ones though this involves a bit of subjective judgment and doesn’t take 
into account subjects with genuine atypical topographies. Is there a more 
quantitative approach available?

Thanks

Moataz

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

2019-05-27 Thread Glasser, Matthew
That file does exist in the structural package then.

Matt.

From: Jaime Caballero 
Date: Monday, May 27, 2019 at 3:20 PM
To: "Glasser, Matthew" 
Cc: "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Sorry for the confusion.

The locally acquired data wasn't processed using the HCP Pipelines; it is at a 
different resolution and was processed in volume space, so no problem there.

I want to use Choi's parcellation with HCP data. The process I described is my 
workaround to adapt the Choi files to the HCP files.

Jaime

El lun., 27 may. 2019 22:12, Glasser, Matthew 
mailto:glass...@wustl.edu>> escribió:
I thought you said you were using a locally collected sample you ran the HCP 
Pipelines on?
Matt.

From: Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 3:10 PM
To: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Ok, thank you! I will try that.

I assume the reference image 
${StudyFolder}/${Subject}/MNINonLinear/ROIs/Atlas_ROIs.2.nii.gz is to be 
downloaded with the structural package?

Regards,
Jaime

El lun., 27 may. 2019 a las 21:00, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
To make the .dscalar.nii file, you seem to be on the right track.  If the Choi 
ROIs are properly in MNI space, hopefully you could simply use applywarp 
--interp=nn -i  -r 
${StudyFolder}/${Subject}/MNINonLinear/ROIs/Atlas_ROIs.2.nii.gz -o  
and then the wb_command -cifti-create-dense-from-template you mention.

Matt.

From: Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 12:56 PM
To: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

The objective is to extract functional connectivity between cortical and 
striatal ROIs, and ALFF and ReHo from both cortical and striatal ROIs.

Up to this point I have imported each subject's dtseries.nii file into MATLAB, 
and also the previously defined ROIs in an HCP-compatible format. For the 
dtseries I have a 96854x1200 matrix, and for the ROI a 96854x1 matrix 
containing a mask, which I use to extract the time series I'm interested in for 
further processing.

Jaime


El lun., 27 may. 2019 a las 19:14, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
What do you plan to do with the file?

Matt.

From: Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 12:08 PM
To: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

A .dscalar.nii output, I think. Basically I want an equivalent of the nifti 
ROI, but in cifti: for each voxel/vertex, value 1 if inside the ROI, 0 if 
outside. Would a dlabel file be better for this application?

El lun., 27 may. 2019 a las 19:03, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
Are you wanting a .dlabel.nii output or a .dscalar.nii output?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 10:37 AM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Dear experts

In my center we are studying cortico-striatal functional connectivity, and 
cortical/striatal local measures (ALFF, ReHo) on a locally acquired sample. For 
that we are using Choi's functional parcellation, distributed as a volumetric 
NIFTI file, in MNI152 space. We want to validate our measures with a subset of 
the S1200 release (resting state, 3T). Specifically I have used the ICA-FIX 
cleaned and MSM-all registered files, i.e.:
/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii

The thing is I have doubts on the way to convert nifti ROIs to cifti format in 
this case.

My first approach:

1. Load a cifti template file and the NIFTI ROI file in MATLAB. The roi file is 
a volume containing 0s and 1s.
2. Use the inverse of the NIFTI's transformation matrix to convert the XYZ 
coordinates in MNI space from each grayordinate to XYZ coordinates in the 
NIFTI'S volume space. (The matrix is corrected to account for MATLAB's 1-based 
matrix indexing)
3. The values of the NIFTI volume for the obtained coordinates define the ROI 
in the output cifti.
4. Keep only the grayordinates that are set to one by my method and that are 
labeled as striatum (i.e. cauda

Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

2019-05-27 Thread Glasser, Matthew
I thought you said you were using a locally collected sample you ran the HCP 
Pipelines on?

Matt.

From: Jaime Caballero 
Date: Monday, May 27, 2019 at 3:10 PM
To: "Glasser, Matthew" 
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Ok, thank you! I will try that.

I assume the reference image 
${StudyFolder}/${Subject}/MNINonLinear/ROIs/Atlas_ROIs.2.nii.gz is to be 
downloaded with the structural package?

Regards,
Jaime

El lun., 27 may. 2019 a las 21:00, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
To make the .dscalar.nii file, you seem to be on the right track.  If the Choi 
ROIs are properly in MNI space, hopefully you could simply use applywarp 
--interp=nn -i  -r 
${StudyFolder}/${Subject}/MNINonLinear/ROIs/Atlas_ROIs.2.nii.gz -o  
and then the wb_command -cifti-create-dense-from-template you mention.

Matt.

From: Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 12:56 PM
To: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

The objective is to extract functional connectivity between cortical and 
striatal ROIs, and ALFF and ReHo from both cortical and striatal ROIs.

Up to this point I have imported each subject's dtseries.nii file into MATLAB, 
and also the previously defined ROIs in an HCP-compatible format. For the 
dtseries I have a 96854x1200 matrix, and for the ROI a 96854x1 matrix 
containing a mask, which I use to extract the time series I'm interested in for 
further processing.

Jaime


El lun., 27 may. 2019 a las 19:14, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
What do you plan to do with the file?

Matt.

From: Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 12:08 PM
To: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

A .dscalar.nii output, I think. Basically I want an equivalent of the nifti 
ROI, but in cifti: for each voxel/vertex, value 1 if inside the ROI, 0 if 
outside. Would a dlabel file be better for this application?

El lun., 27 may. 2019 a las 19:03, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
Are you wanting a .dlabel.nii output or a .dscalar.nii output?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 10:37 AM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Dear experts

In my center we are studying cortico-striatal functional connectivity, and 
cortical/striatal local measures (ALFF, ReHo) on a locally acquired sample. For 
that we are using Choi's functional parcellation, distributed as a volumetric 
NIFTI file, in MNI152 space. We want to validate our measures with a subset of 
the S1200 release (resting state, 3T). Specifically I have used the ICA-FIX 
cleaned and MSM-all registered files, i.e.:
/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii

The thing is I have doubts on the way to convert nifti ROIs to cifti format in 
this case.

My first approach:

1. Load a cifti template file and the NIFTI ROI file in MATLAB. The roi file is 
a volume containing 0s and 1s.
2. Use the inverse of the NIFTI's transformation matrix to convert the XYZ 
coordinates in MNI space from each grayordinate to XYZ coordinates in the 
NIFTI'S volume space. (The matrix is corrected to account for MATLAB's 1-based 
matrix indexing)
3. The values of the NIFTI volume for the obtained coordinates define the ROI 
in the output cifti.
4. Keep only the grayordinates that are set to one by my method and that are 
labeled as striatum (i.e. caudate, putamen or accumbens) in the cifti files.
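Step 2 of the approach above — inverting the NIFTI's affine to map each grayordinate's MNI coordinate to a voxel index — can be sketched in numpy. The 2 mm MNI affine below is a hypothetical example, and MATLAB's 1-based indexing would add 1 to each index:

```python
import numpy as np

# Hypothetical 2 mm MNI-space affine (voxel index -> world coordinate):
affine = np.array([[-2., 0., 0.,   90.],
                   [ 0., 2., 0., -126.],
                   [ 0., 0., 2.,  -72.],
                   [ 0., 0., 0.,    1.]])

def mni_to_voxel(xyz, affine):
    """Map an MNI coordinate to a voxel index via the inverse affine."""
    ijk1 = np.linalg.inv(affine) @ np.append(xyz, 1.0)
    return np.rint(ijk1[:3]).astype(int)   # round to nearest voxel

ijk = mni_to_voxel([0.0, 0.0, 0.0], affine)   # the MNI origin
print(ijk)
```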

This approach is rather hand-made, and I wonder if I am missing something 
important. Conceptually it looks correct to me, but the ROIs appear slightly 
displaced to the right, which might affect the results.
To load the CIFTIs I use FieldTrip's ft_cifti_read(), which for this purpose 
works well (it reads the grayordinate positions and labels), and to load the 
NIFTIs I use FreeSurfer's load_nifti().
Is this method correct, or is there something important I'm not taking into 
account?

Now I'm trying to do the same with wb_command to compare, but I cannot get it 
working. The procedure I use is:

# Resample the ROI file to 2x2x2 resolution with ANTs:
ResampleImage 3 "$NiftiFileIn" "

Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

2019-05-27 Thread Glasser, Matthew
To make the .dscalar.nii file, you seem to be on the right track.  If the Choi 
ROIs are properly in MNI space, hopefully you could simply use applywarp 
--interp=nn -i  -r 
${StudyFolder}/${Subject}/MNINonLinear/ROIs/Atlas_ROIs.2.nii.gz -o  
and then the wb_command -cifti-create-dense-from-template you mention.

Matt.

From: Jaime Caballero 
Date: Monday, May 27, 2019 at 12:56 PM
To: "Glasser, Matthew" 
Cc: "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

The objective is to extract functional connectivity between cortical and 
striatal ROIs, and ALFF and ReHo from both cortical and striatal ROIs.

Up to this point I have imported each subject's dtseries.nii file into MATLAB, 
and also the previously defined ROIs in an HCP-compatible format. For the 
dtseries I have a 96854x1200 matrix, and for the ROI a 96854x1 matrix 
containing a mask, which I use to extract the time series I'm interested in for 
further processing.

Jaime


El lun., 27 may. 2019 a las 19:14, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
What do you plan to do with the file?

Matt.

From: Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 12:08 PM
To: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

A .dscalar.nii output, I think. Basically I want an equivalent of the nifti 
ROI, but in cifti: for each voxel/vertex, value 1 if inside the ROI, 0 if 
outside. Would a dlabel file be better for this application?

El lun., 27 may. 2019 a las 19:03, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
Are you wanting a .dlabel.nii output or a .dscalar.nii output?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 10:37 AM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Dear experts

In my center we are studying cortico-striatal functional connectivity, and 
cortical/striatal local measures (ALFF, ReHo) on a locally acquired sample. For 
that we are using Choi's functional parcellation, distributed as a volumetric 
NIFTI file, in MNI152 space. We want to validate our measures with a subset of 
the S1200 release (resting state, 3T). Specifically I have used the ICA-FIX 
cleaned and MSM-all registered files, i.e.:
/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii

The thing is I have doubts on the way to convert nifti ROIs to cifti format in 
this case.

My first approach:

1. Load a cifti template file and the NIFTI ROI file in MATLAB. The roi file is 
a volume containing 0s and 1s.
2. Use the inverse of the NIFTI's transformation matrix to convert the XYZ 
coordinates in MNI space from each grayordinate to XYZ coordinates in the 
NIFTI'S volume space. (The matrix is corrected to account for MATLAB's 1-based 
matrix indexing)
3. The values of the NIFTI volume for the obtained coordinates define the ROI 
in the output cifti.
4. Keep only the grayordinates that are set to one by my method and that are 
labeled as striatum (i.e. caudate, putamen or accumbens) in the cifti files.

This approach is rather hand-made, and I wonder if I am missing something 
important. Conceptually it looks correct to me, but the ROIs appear slightly 
displaced to the right, which might affect the results.
To load the CIFTIs I use FieldTrip's ft_cifti_read(), which for this purpose 
works well (it reads the grayordinate positions and labels), and to load the 
NIFTIs I use FreeSurfer's load_nifti().
Is this method correct, or is there something important I'm not taking into 
account?

Now I'm trying to do the same with wb_command to compare, but I cannot get it 
working. The procedure I use is:

# Resample the ROI file to 2x2x2 resolution with ANTs:
ResampleImage 3 "$NiftiFileIn" "$NiftiFileResampled" 2x2x2 0
# Convert the obtained nifti to cifti:
wb_command -cifti-create-dense-from-template "$CiftiDscalarTemplateFile" 
"$CiftiOut" -volume-all "$NiftiFileResampled"

Which outputs the error: -volume-all specifies a volume file that doesn't match 
the volume space of the template cifti file
Is there anything wrong with my way of doing this procedure?
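That error means the resampled NIFTI's grid does not exactly match the template CIFTI's volume space — resampling to 2x2x2 mm matches the voxel size but not necessarily the origin and field of view, which is why resampling directly onto Atlas_ROIs.2.nii.gz is suggested elsewhere in this thread. A small numpy sketch of the kind of comparison involved (the dimensions and affines are hypothetical):

```python
import numpy as np

# A volume space is defined by its dimensions plus its affine; both must agree.
template_dims, template_affine = (91, 109, 91), np.diag([-2., 2., 2., 1.])
roi_dims, roi_affine = (91, 109, 90), np.diag([-2., 2., 2., 1.])

def same_volume_space(dims_a, aff_a, dims_b, aff_b, tol=1e-4):
    """Mimic the check behind the '-volume-all ... doesn't match' error."""
    return dims_a == dims_b and bool(np.allclose(aff_a, aff_b, atol=tol))

print(same_volume_space(template_dims, template_affine,
                        roi_dims, roi_affine))   # dims differ -> mismatch
```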

Thanks in advance,

Best regards,
Jaime Caballero-Insaurriaga

___
HCP-Users mailing list
HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

2019-05-27 Thread Glasser, Matthew
What do you plan to do with the file?

Matt.

From: Jaime Caballero 
Date: Monday, May 27, 2019 at 12:08 PM
To: "Glasser, Matthew" 
Cc: "hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

A .dscalar.nii output, I think. Basically I want an equivalent of the nifti 
ROI, but in cifti: for each voxel/vertex, value 1 if inside the ROI, 0 if 
outside. Would a dlabel file be better for this application?

El lun., 27 may. 2019 a las 19:03, Glasser, Matthew 
(mailto:glass...@wustl.edu>>) escribió:
Are you wanting a .dlabel.nii output or a .dscalar.nii output?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Jaime Caballero mailto:jcabai...@gmail.com>>
Date: Monday, May 27, 2019 at 10:37 AM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Dear experts

In my center we are studying cortico-striatal functional connectivity, and 
cortical/striatal local measures (ALFF, ReHo) on a locally acquired sample. For 
that we are using Choi's functional parcellation, distributed as a volumetric 
NIFTI file, in MNI152 space. We want to validate our measures with a subset of 
the S1200 release (resting state, 3T). Specifically I have used the ICA-FIX 
cleaned and MSM-all registered files, i.e.:
/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii

The thing is I have doubts on the way to convert nifti ROIs to cifti format in 
this case.

My first approach:

1. Load a cifti template file and the NIFTI ROI file in MATLAB. The roi file is 
a volume containing 0s and 1s.
2. Use the inverse of the NIFTI's transformation matrix to convert the XYZ 
coordinates in MNI space from each grayordinate to XYZ coordinates in the 
NIFTI'S volume space. (The matrix is corrected to account for MATLAB's 1-based 
matrix indexing)
3. The values of the NIFTI volume for the obtained coordinates define the ROI 
in the output cifti.
4. Keep only the grayordinates that are set to one by my method and that are 
labeled as striatum (i.e. caudate, putamen or accumbens) in the cifti files.

This approach is rather hand-made, and I wonder if I am missing something 
important. Conceptually it looks correct to me, but the ROIs appear slightly 
displaced to the right, which might affect the results.
To load the CIFTIs I use FieldTrip's ft_cifti_read(), which for this purpose 
works well (it reads the grayordinate positions and labels), and to load the 
NIFTIs I use FreeSurfer's load_nifti().
Is this method correct, or is there something important I'm not taking into 
account?

Now I'm trying to do the same with wb_command to compare, but I cannot get it 
working. The procedure I use is:

# Resample the ROI file to 2x2x2 resolution with ANTs:
ResampleImage 3 "$NiftiFileIn" "$NiftiFileResampled" 2x2x2 0
# Convert the obtained nifti to cifti:
wb_command -cifti-create-dense-from-template "$CiftiDscalarTemplateFile" 
"$CiftiOut" -volume-all "$NiftiFileResampled"

Which outputs the error: -volume-all specifies a volume file that doesn't match 
the volume space of the template cifti file
Is there anything wrong with my way of doing this procedure?

Thanks in advance,

Best regards,
Jaime Caballero-Insaurriaga

___
HCP-Users mailing list
HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

2019-05-27 Thread Glasser, Matthew
Are you wanting a .dlabel.nii output or a .dscalar.nii output?

Matt.

From:  on behalf of Jaime Caballero 

Date: Monday, May 27, 2019 at 10:37 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Convert nifti-ROIs to cifti format (subcortical)

Dear experts

In my center we are studying cortico-striatal functional connectivity, and 
cortical/striatal local measures (ALFF, ReHo) on a locally acquired sample. For 
that we are using Choi's functional parcellation, distributed as a volumetric 
NIFTI file, in MNI152 space. We want to validate our measures with a subset of 
the S1200 release (resting state, 3T). Specifically I have used the ICA-FIX 
cleaned and MSM-all registered files, i.e.:
/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii

The thing is I have doubts on the way to convert nifti ROIs to cifti format in 
this case.

My first approach:

1. Load a cifti template file and the NIFTI ROI file in MATLAB. The roi file is 
a volume containing 0s and 1s.
2. Use the inverse of the NIFTI's transformation matrix to convert the XYZ 
coordinates in MNI space from each grayordinate to XYZ coordinates in the 
NIFTI'S volume space. (The matrix is corrected to account for MATLAB's 1-based 
matrix indexing)
3. The values of the NIFTI volume for the obtained coordinates define the ROI 
in the output cifti.
4. Keep only the grayordinates that are set to one by my method and that are 
labeled as striatum (i.e. caudate, putamen or accumbens) in the cifti files.

This approach is rather hand-made, and I wonder if I am missing something 
important. Conceptually it looks correct to me, but the ROIs appear slightly 
displaced to the right, which might affect the results.
To load the CIFTIs I use FieldTrip's ft_cifti_read(), which for this purpose 
works well (it reads the grayordinate positions and labels), and to load the 
NIFTIs I use FreeSurfer's load_nifti().
Is this method correct, or is there something important I'm not taking into 
account?

Now I'm trying to do the same with wb_command to compare, but I cannot get it 
working. The procedure I use is:

# Resample the ROI file to 2x2x2 resolution with ANTs:
ResampleImage 3 "$NiftiFileIn" "$NiftiFileResampled" 2x2x2 0
# Convert the obtained nifti to cifti:
wb_command -cifti-create-dense-from-template "$CiftiDscalarTemplateFile" 
"$CiftiOut" -volume-all "$NiftiFileResampled"

Which outputs the error: -volume-all specifies a volume file that doesn't match 
the volume space of the template cifti file
Is there anything wrong with my way of doing this procedure?

Thanks in advance,

Best regards,
Jaime Caballero-Insaurriaga

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] DeDriftAndResamplePipelineBatch.sh

2019-05-27 Thread Glasser, Matthew
Yes.

Matt.

From:  on behalf of Marta Moreno 

Date: Monday, May 27, 2019 at 11:15 AM
To: HCP Users 
Subject: [HCP-Users] DeDriftAndResamplePipelineBatch.sh

Dear Experts,

If I run multi-run ICA-FIX regressing motion parameters as part of the 
cleaning, do I need to set MotionRegression=TRUE in 
DeDriftAndResamplePipelineBatch.sh?

Thanks,

Leah.



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] *WU-SPAM* [suspect] assessing MSMAll quality

2019-05-26 Thread Glasser, Matthew
Hi Moataz,

I check the StrainJ (isotropic areal distortion) and StrainR (shape distortion) 
and the overall alignment.  Beyond that, there isn’t currently a quantitative 
way to do this.

Matt.

From:  on behalf of Moataz Assem 

Date: Sunday, May 26, 2019 at 10:03 AM
To: "hcp-users@humanconnectome.org" 
Subject: *WU-SPAM* [suspect][HCP-Users] assessing MSMAll quality

Hi,

What are the recommended quality assessment steps for MSMAll? I currently check 
areal distortion maps (and make sure the values aren’t too large, e.g. 
more/less than 2/-2) and visually compare individual RSN and myelin maps to the 
atlas ones though this involves a bit of subjective judgment and doesn’t take 
into account subjects with genuine atypical topographies. Is there a more 
quantitative approach available?

Thanks

Moataz




Re: [HCP-Users] Error while running "GenericfMRIVolumeProcessingPipeline.sh"

2019-05-22 Thread Glasser, Matthew
There could be a bug here related to new FSL 6.0+ fslmaths behavior.  Please 
try modifying:

https://github.com/Washington-University/HCPpipelines/blob/master/global/scripts/TopupPreprocessingAll.sh

line 189 to:

${FSLDIR}/bin/fslmaths ${WD}/PhaseOne_mask_gdc -mas ${WD}/PhaseTwo_mask_gdc 
-ero -bin -Tmin ${WD}/Mask
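If you prefer to script that one-line edit, here is a hedged sketch; the path and line number are taken from the message above, `${HCPPIPEDIR}` is an assumed environment variable pointing at your HCPpipelines checkout, and you should verify that line 189 of your copy still contains the fslmaths call before patching:

```shell
# Hypothetical patch helper: back up the script, then rewrite line 189
# with the fslmaths call suggested above. Verify line 189 first!
SCRIPT="${HCPPIPEDIR}/global/scripts/TopupPreprocessingAll.sh"
cp "${SCRIPT}" "${SCRIPT}.bak"
sed -i '189s|.*|${FSLDIR}/bin/fslmaths ${WD}/PhaseOne_mask_gdc -mas ${WD}/PhaseTwo_mask_gdc -ero -bin -Tmin ${WD}/Mask|' "${SCRIPT}"
# Show the edited line to confirm the change took effect:
sed -n '189p' "${SCRIPT}"
```

Note that GNU sed's `-i` is assumed; on macOS/BSD sed use `-i ''` instead.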

Matt.

From:  on behalf of Simon Wein 

Date: Wednesday, May 22, 2019 at 8:28 AM
To: Timothy Coalson 
Cc: Wilhelm Malloni , 
"hcp-users@humanconnectome.org" 
Subject: Re: [HCP-Users] Error while running 
"GenericfMRIVolumeProcessingPipeline.sh"

Thank you very much for your suggestion.
We already use FSL 6.0.1 (and Freesurfer 6.0.0), sorry for my inaccuracy.
The output of "fslhd ${WD}/BothPhases" is:

filename        BothPhases.nii.gz
size of header  348
data_type       INT32
dim0            4
dim1            104
dim2            104
dim3            72
dim4            6
dim5            1
dim6            1
dim7            1
vox_units       mm
time_units      s
datatype        8
nbyper          4
bitpix          32
pixdim0         1.00
pixdim1         2.00
pixdim2         2.00
pixdim3         2.00
pixdim4         7.70
pixdim5         0.00
pixdim6         0.00
pixdim7         0.00
vox_offset      352
cal_max         0.00
cal_min         0.00
scl_slope       1.00
scl_inter       0.00
phase_dim       0
freq_dim        0
slice_dim       0
slice_name      Unknown
slice_code      0
slice_start     0
slice_end       0
slice_duration  0.00
toffset         0.00
intent          Unknown
intent_code     0
intent_name
intent_p1       0.00
intent_p2       0.00
intent_p3       0.00
qform_name      Scanner Anat
qform_code      1
qto_xyz:1       -1.995593 0.092717 0.094922 93.702583
qto_xyz:2       -0.105543 -1.976261 -0.288537 118.937881
qto_xyz:3       0.080419 -0.292910 1.976799 -43.901993
qto_xyz:4       0.00 0.00 0.00 1.00
qform_xorient   Right-to-Left
qform_yorient   Anterior-to-Posterior
qform_zorient   Inferior-to-Superior
sform_name      Scanner Anat
sform_code      1
sto_xyz:1       -1.995594 0.092714 0.094922 93.702583
sto_xyz:2       -0.105540 -1.976261 -0.288537 118.937881
sto_xyz:3       0.080420 -0.292910 1.976799 -43.901993
sto_xyz:4       0.00 0.00 0.00 1.00
sform_xorient   Right-to-Left
sform_yorient   Anterior-to-Posterior
sform_zorient   Inferior-to-Superior
file_type       NIFTI-1+
file_code       1
descrip         6.0.1
aux_file


The output of "fslhd ${WD}/Mask.nii.gz" is:

filename        Mask.nii.gz
size of header  348
data_type       FLOAT32
dim0            4
dim1            104
dim2            104
dim3            72
dim4            3
dim5            1
dim6            1
dim7            1
vox_units       mm
time_units      s
datatype        16
nbyper          4
bitpix          32
pixdim0         1.00
pixdim1         2.00
pixdim2         2.00
pixdim3         2.00
pixdim4         7.70
pixdim5         0.00
pixdim6         0.00
pixdim7         0.00
vox_offset      352
cal_max         0.00
cal_min         0.00
scl_slope       1.00
scl_inter       0.00
phase_dim       0
freq_dim        0
slice_dim       0
slice_name      Unknown
slice_code      0
slice_start     0
slice_end       0
slice_duration  0.00
toffset         0.00
intent          Unknown
intent_code     0
intent_name
intent_p1       0.00
intent_p2       0.00
intent_p3       0.00
qform_name      Scanner Anat
qform_code      1
qto_xyz:1       -1.995593 0.092717 0.094922 93.702583
qto_xyz:2       -0.105543 -1.976261 -0.288537 118.937881
qto_xyz:3       0.080419 -0.292910 1.976799 -43.901993
qto_xyz:4       0.00 0.00 0.00 1.00
qform_xorient   Right-to-Left
qform_yorient   Anterior-to-Posterior
qform_zorient   Inferior-to-Superior
sform_name      Scanner Anat
sform_code      1
sto_xyz:1       -1.995594 0.092714 0.094922 93.702583
sto_xyz:2       -0.105540 -1.976261 -0.288537 118.937881
sto_xyz:3       0.080420 -0.292910 1.976799 -43.901993
sto_xyz:4       0.00 0.00 0.00 1.00
sform_xorient   Right-to-Left
sform_yorient   Anterior-to-Posterior
sform_zorient   Inferior-to-Superior
file_type       NIFTI-1+
file_code       1
descrip         6.0.1
aux_file

We noticed that running "${FSLDIR}/bin/fslmaths ${WD}/BothPhases -abs -add 1 
-mas ${WD}/Mask -dilM -dilM -dilM -dilM -dilM ${WD}/BothPhases_fsl5" seems to 
work with FSL 5.0.6 at least. The header of the image "BothPhases_fsl5", 
generated with FSL 5.0.6, is:

filename   BothPhases_fsl5.nii.gz

sizeof_hdr 348
data_type  FLOAT32
dim0   4
dim1   104
dim2   104
dim3   72
dim4   6
dim5   1
dim6   1
dim7   1
vox_units  mm
time_units s
datatype   16
nbyper 4
bitpix 32
pixdim0    0.00
pixdim12.00
pixdim22.00
pixdim32.00
pixdim47.70
pixdim50.00
pixdim60.00
pixdim70.00
vox_offset 352
cal_max0.
cal_min0.
scl_slope  1.00
scl_inter  0.00
phase_dim  0
freq_dim   0
slice_dim  0
slice_name Unknown
slice_code

Re: [HCP-Users] A question about the HCP brain registration

2019-05-20 Thread Glasser, Matthew
No, but you could make one pretty easily.  Scenes are text files, and if your 
data are structured in a regular format, you could make a template scene and 
sed-replace values in and out of the scene.  The PostFix pipeline does this to 
propagate a template scene for checking sICA components.
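A minimal sketch of that sed-templating idea follows; the template filename and the SUBJECT_TOKEN/PATH_TOKEN placeholders are illustrative, not the tokens PostFix actually uses:

```shell
# Create a toy template "scene" containing placeholder tokens
# (real Workbench scenes are much larger XML files; this is illustrative).
printf 'Subject: SUBJECT_TOKEN\nPath: PATH_TOKEN\n' > template.scene

# Fill the template for one subject with sed
Subject=100307                    # illustrative subject ID
StudyFolder=/data/mystudy         # illustrative study path
sed -e "s|SUBJECT_TOKEN|${Subject}|g" \
    -e "s|PATH_TOKEN|${StudyFolder}/${Subject}|g" \
    template.scene > "${Subject}.scene"
```

Looping the last command over a subject list produces one checkable scene file per subject.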

Matt.

From:  on behalf of Aaron C 

Date: Monday, May 20, 2019 at 12:48 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] A question about the HCP brain registration

Dear HCP experts,

Is there any shared script to generate scenes to check brain registration 
quality of the HCP rfMRI and tfMRI data? Thank you.






Re: [HCP-Users] volumetric segmentation of subcortical structures

2019-05-16 Thread Glasser, Matthew
${StudyFolder}/${Subject}/MNINonLinear/wmparc.nii.gz in the structural packages.
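If the goal is structure volumes, one hedged way to read a single structure's volume out of that file is with FSL's fslstats; ${StudyFolder} and ${Subject} below are placeholders, and the label value must be checked against FreeSurfer's FreeSurferColorLUT.txt for the structure of interest:

```shell
# Illustrative: volume of left hippocampus (aseg/wmparc label 17;
# verify the label value in FreeSurferColorLUT.txt before relying on it).
wmparc="${StudyFolder}/${Subject}/MNINonLinear/wmparc.nii.gz"
# -l/-u bracket the integer label; -V prints voxel count and volume in mm^3
${FSLDIR}/bin/fslstats "${wmparc}" -l 16.5 -u 17.5 -V
```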

Matt.

From:  on behalf of "Mazzetti, C. 
(Cecilia)" 
Date: Thursday, May 16, 2019 at 3:51 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] volumetric segmentation of subcortical structures


Dear all,

I am required to come up with a proof of concept for subcortical 
lateralizations found in my study. Ideally, consistent lateralizations derived 
from a bigger dataset such as the HCP would do the job more than fine. A 
colleague told me there should be a file somewhere with structural 
segmentation data (i.e., volumes) already computed for the MRIs in the 
database. I am wondering whether someone knows: 1. whether this is true, and 
2. if so, how it can be accessed.



Thanks very much in advance to anyone willing to help


Best,
Cecilia








Re: [HCP-Users] Quick questions

2019-05-14 Thread Glasser, Matthew
LR/RL/AP/PA refer to phase encoding directions.  1/2/3/4 refer to acquisition 
days.  I use all of the resting state data when doing analyses.

3T: TR=0.72s, 2x2x2mm, 4x1200 frames
7T: TR=1s, 1.6x1.6x1.6mm, 4x900 frames

Matt.

From: Yoav Feldman 
Date: Tuesday, May 14, 2019 at 1:38 PM
To: "Glasser, Matthew" 
Cc: Erez Simony , michael tolochinsky 

Subject: Quick questions

Dear Matthew,

We are looking at the HCP resting-state data (grayordinate) coming from the 7T 
and 3T magnets, and have quick questions:

1) We would like to work on the 3T and 7T data of the same 184 subjects that 
have both.
For the 7T there is one result directory per resting-state scan (1-4) for each 
subject:

[Screen Shot 2019-05-14 at 21.34.14.png]

For the 3T we see the following two directories per resting-state scan (1 & 
2):

[Screen Shot 2019-05-14 at 17.06.57.png]

What are the differences between the two directories LR and RL? Which 
directory should we use?

2) Where can we find the parameters of the 7T and 3T resting-state scans? In 
particular, what is the difference in spatial resolution between the two 
magnets for resting state (in common space)?

We much appreciate your help!

Thank you in advance
Yoav and Michael






Re: [HCP-Users] error about subcortical processing in HCP fMRI-surface pipeline

2019-05-14 Thread Glasser, Matthew
I guess this is an excellent example of why folks need to keep CCing the list 
and including their previous replies in their messages.  This user did solve 
the problem by trying FSL 6.0.1, which is required by version 4.0.0 of the HCP 
Pipelines.

Matt.

From: Wilhelm Malloni 
Date: Tuesday, May 14, 2019 at 3:40 AM
To: "Glasser, Matthew" 
Cc: "simon.w...@ur.de" 
Subject: error about subcortical processing in HCP fMRI-surface pipeline

Dear Dr. Glasser,

We are experiencing the very same error posted at 
https://www.mail-archive.com/hcp-users@humanconnectome.org/msg07616.html but 
did not find a solution.

After the GenericfMRIVolumeProcessingPipeline.sh we got a resolution of 
91x108x91 but the GenericFMRISurfaceProcessingPipeline.sh probably needs a 
resolution of 91x109x91.

So, while running:

wb_command -cifti-create-dense-timeseries /{PATH to subject}/MNINonLinear/Results/{task name}/{task name}_temp_subject.dtseries.nii -volume /{PATH to subject}/MNINonLinear/Results/{task name}/{task name}.nii.gz /{PATH to subject}/MNINonLinear/ROIs/ROIs.2.nii.gz

we got this error too:

ERROR: label volume has a different volume space than data volume



How can we overcome this error?

Environment:

1. Debian 9.0
2. HCP pipeline 4.0.0
3. Workbench 1.3.2
4. FreeSurfer 6.0
5. FSL 5.0.11




Thank you so much for your precious time.


Best,

Wilhelm Malloni
---
Dr. rer. nat. Wilhelm Malloni
Experimental Psychology
University of Regensburg
Universitaetsstrasse 31
93053 Regensburg, Germany
Tel: ++49 941 943 3856
Fax: ++49 941 943 3233
Room: PT 4.0.36B
email: wilhelm.mall...@ur.de<mailto:wilhelm.mall...@ur.de>






Re: [HCP-Users] Local Gyrification Index

2019-05-11 Thread Glasser, Matthew
We didn’t generate any non-default FreeSurfer outputs.

Matt.

From: Reza Rajimehr 
Date: Saturday, May 11, 2019 at 12:46 PM
To: "Glasser, Matthew" 
Subject: Re: [HCP-Users] Local Gyrification Index

Thanks Matt! LGI should be a map (a value for each vertex). It is not generated 
by default in Freesurfer.


On Sat, May 11, 2019 at 6:38 PM Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
I believe all the default FreeSurfer stats made it into the database in one 
way or another.  Have you looked?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Reza Rajimehr mailto:rajim...@gmail.com>>
Date: Saturday, May 11, 2019 at 12:36 PM
To: hcp-users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Local Gyrification Index

Hi,

Does HCP database have Freesurfer-derived LGI for all 1200 subjects?

Best,
Reza








Re: [HCP-Users] Local Gyrification Index

2019-05-11 Thread Glasser, Matthew
I believe all the default FreeSurfer stats made it into the database in one 
way or another.  Have you looked?

Matt.

From:  on behalf of Reza Rajimehr 

Date: Saturday, May 11, 2019 at 12:36 PM
To: hcp-users 
Subject: [HCP-Users] Local Gyrification Index

Hi,

Does HCP database have Freesurfer-derived LGI for all 1200 subjects?

Best,
Reza






Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

2019-05-10 Thread Glasser, Matthew
MSMAll is supposed to have higher distortions than MSMSulc.  This is by design 
as we are very conservative about distortions with MSMSulc to avoid overfitting 
to folds (a rampant problem in both folding-based surface registration and 
volume-based registration).  Whether MSMAll was working is not determined 
solely by the distortions, but rather their magnitude with respect to known 
good data, the quality of the alignment itself, and the test-retest 
reproducibility of the alignment.

Matt.

From: Maria Sison 
Date: Friday, May 10, 2019 at 3:15 PM
To: "Glasser, Matthew" , "Harms, Michael" 
, Steve Smith 
Cc: HCP 讨论组 
Subject: RE: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Thank you for all of the insightful comments, this has been a huge help. I 
think we’ll stick with MSMSulc measures until we have a clearer idea of what 
data we’d need to confidently run MSMAll. I compared strain maps for MSMSulc 
and MSMAll and in general the values are more extreme for MSMAll. I also made 
maps of areal distortion (-surface-distortion) between test and retest MSMAll 
midthickness and these were more extreme across the whole cortex when compared 
to MSMSulc test retest distortion maps, so this seems to line up with what 
Michael was saying about our MSMAll registrations.

Best,
Maria

________
From: Glasser, Matthew 
Sent: Thursday, May 9, 2019 9:22:51 PM
To: Harms, Michael; Maria Sison; Steve Smith
Cc: HCP 讨论组
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

This issue of what kind of data and how much is something we plan to 
investigate in detail for MSMAll (and the cortical areal classifier).

Matt.

From: "Harms, Michael" 
Date: Thursday, May 9, 2019 at 4:41 PM
To: "Glasser, Matthew" , Maria Sison 
, Steve Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data


While I’m not surprised that the ICCs would be lower for an anatomical-based 
measure for MSMAll than MSMSulc, I am surprised by the magnitude of the change 
(from 0.9 to 0.65), especially for a parcellated analysis, since only changes 
in the precise border of the parcellations should be affecting the results.

Your results imply that the test and retest MSMAll registrations are very 
different from each other.

Wouldn’t the Strain maps for MSMSulc vs MSMAll be informative here?  You might 
also want to examine some sort of measure of the distortion between the two 
MSMAll registrations directly.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110  Email: mha...@wustl.edu

From:  on behalf of "Glasser, Matthew" 

Date: Thursday, May 9, 2019 at 4:23 PM
To: Maria Sison , Stephen Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Not running sICA+FIX might well be a part of the problem.  The TR is quite long 
as Steve says, which will limit the accuracy of sICA+FIX cleanup some also.  
Also, surface area and thickness might prefer MSMSulc due to correlations with 
folding patterns.  Myelin, task, and resting state fMRI will more tend to 
correlate with MSMAll.

Matt.

From:  on behalf of Maria Sison 

Date: Thursday, May 9, 2019 at 3:40 PM
To: Steve Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Thank you so much, this is very helpful and interesting to think about. We 
concatenated rest and tasks and regressed out tasks to get around 1000 TRs of 
pseudo-rest which we then used for MSMAll. Still not nearly as much as HCP, but 
I would be interested to hear what a ballpark minimum data requirement for 
MSMALL would be.

Best,
Maria


From: Steve Smith 
Sent: Thursday, May 9, 2019 4:19:24 PM
To: Maria Sison
Cc: HCP 讨论组
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Hi - probably the single primary thing is number of timepoints - though things 
like TR and spatial resolution will also affect this.

My guess is still that probably you don't have enough timepoints here to get 
decent single-subject RSN maps (decent enough for MSMAll, that is).  Emma or Matt 
might have more direct insight into the minimum amount of data you need to get 
MSMALL working well.   Unless you can combine more of your datasets together 
(even if just for the purposes of MSM) then you might be better off with 
MSMSULC.

Cheers.

ps with this setup I would definitely push multiband at least as high as 6 if 
not 8.










On 9 May 2019, at 15:12, Maria Sison 
mailto:maria.si...@duke.edu>> wrote:

Hello,

Here’s our rfMRI protocol: each participant was scanned using a Siemens Skyra 
3T scanner equipped with a 64-channel head/neck coil. A series of 

Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

2019-05-09 Thread Glasser, Matthew
This issue of what kind of data and how much is something we plan to 
investigate in detail for MSMAll (and the cortical areal classifier).

Matt.

From: "Harms, Michael" 
Date: Thursday, May 9, 2019 at 4:41 PM
To: "Glasser, Matthew" , Maria Sison 
, Steve Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data


While I’m not surprised that the ICCs would be lower for an anatomical-based 
measure for MSMAll than MSMSulc, I am surprised by the magnitude of the change 
(from 0.9 to 0.65), especially for a parcellated analysis, since only changes 
in the precise border of the parcellations should be affecting the results.

Your results imply that the test and retest MSMAll registrations are very 
different from each other.

Wouldn’t the Strain maps for MSMSulc vs MSMAll be informative here?  You might 
also want to examine some sort of measure of the distortion between the two 
MSMAll registrations directly.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.    Tel: 314-747-6173
St. Louis, MO  63110  Email: mha...@wustl.edu

From:  on behalf of "Glasser, Matthew" 

Date: Thursday, May 9, 2019 at 4:23 PM
To: Maria Sison , Stephen Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Not running sICA+FIX might well be a part of the problem.  The TR is quite long 
as Steve says, which will limit the accuracy of sICA+FIX cleanup some also.  
Also, surface area and thickness might prefer MSMSulc due to correlations with 
folding patterns.  Myelin, task, and resting state fMRI will more tend to 
correlate with MSMAll.

Matt.

From:  on behalf of Maria Sison 

Date: Thursday, May 9, 2019 at 3:40 PM
To: Steve Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Thank you so much, this is very helpful and interesting to think about. We 
concatenated rest and tasks and regressed out tasks to get around 1000 TRs of 
pseudo-rest which we then used for MSMAll. Still not nearly as much as HCP, but 
I would be interested to hear what a ballpark minimum data requirement for 
MSMALL would be.

Best,
Maria


From: Steve Smith 
Sent: Thursday, May 9, 2019 4:19:24 PM
To: Maria Sison
Cc: HCP 讨论组
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Hi - probably the single primary thing is number of timepoints - though things 
like TR and spatial resolution will also affect this.

My guess is still that probably you don't have enough timepoints here to get 
decent single-subject RSN maps (decent enough for MSMAll, that is).  Emma or Matt 
might have more direct insight into the minimum amount of data you need to get 
MSMALL working well.   Unless you can combine more of your datasets together 
(even if just for the purposes of MSM) then you might be better off with 
MSMSULC.

Cheers.

ps with this setup I would definitely push multiband at least as high as 6 if 
not 8.









On 9 May 2019, at 15:12, Maria Sison 
mailto:maria.si...@duke.edu>> wrote:

Hello,

Here’s our rfMRI protocol: each participant was scanned using a Siemens Skyra 
3T scanner equipped with a 64-channel head/neck coil. A series of 72 
interleaved axial T2-weighted functional slices were acquired using a 3-fold 
multi-band accelerated echo planar imaging sequence with the following 
parameters: TR = 2000 ms, TE = 27 msec, flip angle = 90°, field-of-view = 200 
mm, voxel size = 2 mm isotropic, slice thickness = 2 mm without gap. Total scan 
length is 496 s.

Out of curiosity, which parameters would be most important for MSMAll?

Thank you,
Maria


From: Steve Smith mailto:st...@fmrib.ox.ac.uk>>
Sent: Thursday, May 9, 2019 3:56:49 PM
To: Maria Sison
Cc: HCP 讨论组
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Hi - what is your rfMRI protocol?   It might be that you're right that the 
difference is in the preproc - but my first guess might be that - if the rfMRI 
data is not as high quality as HCP rfMRI data - it might not be good enough to 
reliably drive MSMALL?

Cheers.






On 9 May 2019, at 14:45, Maria Sison 
mailto:maria.si...@duke.edu>> wrote:

Dear experts,

We have run the HCP minimal preprocessing pipelines on our data (1 mm isotropic 
T1w and FLAIR + rest and 4 tasks) and compared test-retest reliability for 
MSMSulc and MSMAll in 20 subjects. Specifically, we looked at intraclass 
correlations for parcellated cortical thickness and surface area and found that 
they were much lower for MSMAll compared to MSMSulc in our test-retest sample 
(MSMSulc on average above 0.9 and for MSMAll around 0.65 on average). When we 
looked in HCP retest data, the ICCs for MSMAll were more similar to those for 
MSMSulc (both above 0.9), but still slightly lower.

Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

2019-05-09 Thread Glasser, Matthew
Not running sICA+FIX might well be a part of the problem.  The TR is quite long 
as Steve says, which will limit the accuracy of sICA+FIX cleanup some also.  
Also, surface area and thickness might prefer MSMSulc due to correlations with 
folding patterns.  Myelin, task, and resting state fMRI will more tend to 
correlate with MSMAll.

Matt.

From:  on behalf of Maria Sison 

Date: Thursday, May 9, 2019 at 3:40 PM
To: Steve Smith 
Cc: HCP 讨论组 
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Thank you so much, this is very helpful and interesting to think about. We 
concatenated rest and tasks and regressed out tasks to get around 1000 TRs of 
pseudo-rest which we then used for MSMAll. Still not nearly as much as HCP, but 
I would be interested to hear what a ballpark minimum data requirement for 
MSMALL would be.

Best,
Maria


From: Steve Smith 
Sent: Thursday, May 9, 2019 4:19:24 PM
To: Maria Sison
Cc: HCP 讨论组
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Hi - probably the single primary thing is number of timepoints - though things 
like TR and spatial resolution will also affect this.

My guess is still that probably you don't have enough timepoints here to get 
decent single-subject RSN maps (decent enough for MSMAll, that is).  Emma or Matt 
might have more direct insight into the minimum amount of data you need to get 
MSMALL working well.   Unless you can combine more of your datasets together 
(even if just for the purposes of MSM) then you might be better off with 
MSMSULC.

Cheers.

ps with this setup I would definitely push multiband at least as high as 6 if 
not 8.







On 9 May 2019, at 15:12, Maria Sison 
mailto:maria.si...@duke.edu>> wrote:

Hello,

Here’s our rfMRI protocol: each participant was scanned using a Siemens Skyra 
3T scanner equipped with a 64-channel head/neck coil. A series of 72 
interleaved axial T2-weighted functional slices were acquired using a 3-fold 
multi-band accelerated echo planar imaging sequence with the following 
parameters: TR = 2000 ms, TE = 27 msec, flip angle = 90°, field-of-view = 200 
mm, voxel size = 2 mm isotropic, slice thickness = 2 mm without gap. Total scan 
length is 496 s.

Out of curiosity, which parameters would be most important for MSMAll?

Thank you,
Maria


From: Steve Smith mailto:st...@fmrib.ox.ac.uk>>
Sent: Thursday, May 9, 2019 3:56:49 PM
To: Maria Sison
Cc: HCP 讨论组
Subject: Re: [HCP-Users] MSMAll vs. MSMSulc reliability in our data

Hi - what is your rfMRI protocol?   It might be that you're right that the 
difference is in the preproc - but my first guess might be that - if the rfMRI 
data is not as high quality as HCP rfMRI data - it might not be good enough to 
reliably drive MSMALL?

Cheers.




On 9 May 2019, at 14:45, Maria Sison 
mailto:maria.si...@duke.edu>> wrote:

Dear experts,

We have run the HCP minimal preprocessing pipelines on our data (1 mm isotropic 
T1w and FLAIR + rest and 4 tasks) and compared test-retest reliability for 
MSMSulc and MSMAll in 20 subjects. Specifically, we looked at intraclass 
correlations for parcellated cortical thickness and surface area and found that 
they were much lower for MSMAll compared to MSMSulc in our test-retest sample 
(MSMSulc on average above 0.9 and for MSMAll around 0.65 on average). When we 
looked in HCP retest data, the ICCs for MSMAll were more similar to those for 
MSMSulc (both above 0.9), but still slightly lower.

There are a few major differences in how we ran the pipeline. We skipped 
sICA+FIX and ran our own preprocessing on task and rest fMRI after fMRIVolume 
but before fMRISurface (bandpass filtering, motion correction, censoring, 
CompCorr, and regressed out tasks). We thought our processing would be ok for 
cleaning task fMRI, but I see that sICA+FIX is highly recommended before 
running MSMAll 
(https://www.mail-archive.com/hcp-users@humanconnectome.org/msg06876.html),
 so I’m planning to try to rerun with sICA+FIX. Do you think that MSMAll is so 
dependent on sICA+FIX that it could be causing these problems in our data or do 
you have any other ideas about why we're getting such a large drop in ICCs for 
MSMAll? In other words, what are the minimal preprocessing requirements to 
effectively use MSMAll in non-HCP data? Any comments would be appreciated!

Thank you,
Maria

Re: [HCP-Users] temporal ICA code

2019-05-08 Thread Glasser, Matthew
We haven’t released the code for a temporal ICA pipeline yet because we don’t 
have an automated pipeline.  We do hope to get the HCP-YA data cleaned with 
temporal ICA and released and make an automated pipeline.  For a group spatial 
ICA, one needs to use MIGP.  For group temporal ICA, the memory size is 
determined by the number of spatial ICA components x the number of timepoints.
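As a back-of-envelope illustration of that sizing rule; the component count and frame totals below are assumptions for illustration, not HCP specifics:

```shell
# Rough memory for the (spatial components x total timepoints) matrix,
# stored as 8-byte doubles. All counts below are illustrative assumptions.
COMPONENTS=300                     # assumed spatial ICA dimensionality
SUBJECTS=210
FRAMES_PER_SUBJECT=4800            # e.g. 4 runs x 1200 frames
TIMEPOINTS=$((SUBJECTS * FRAMES_PER_SUBJECT))
BYTES=$((COMPONENTS * TIMEPOINTS * 8))
echo "matrix: ${COMPONENTS} x ${TIMEPOINTS} = ${BYTES} bytes (~$((BYTES / 1073741824)) GiB)"
```

Plug in your own component count and frame totals to size the job.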

Matt.

From:  on behalf of Joseph Orr 

Date: Wednesday, May 8, 2019 at 10:26 AM
To: HCP Users 
Subject: [HCP-Users] temporal ICA code

Is the code for performing temporal ICA denoising (Glasser et al., 2018) 
available in the Pipeline repository? I didn't see anything but wanted to make 
sure I didn't miss it. I was processing denoised resting state data from 219 
HCP participants in CONN, but need to re-do the analysis with the surface data. 
While CONN can take the surface data, I'd rather just use the MSMAll cifti 
data. I know you guys have said you were planning on releasing the temporal-ICA 
cleaned data, but I figure this might take awhile, so I'd like to clean the 
data myself since I need the data for a paper revision.

How much RAM would I need for processing group-ICA on 210 subjects?

Thanks,
Joe

--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX






Re: [HCP-Users] Gambling task: clarification of Reward vs Loss being mostly random noise

2019-05-08 Thread Glasser, Matthew
Those are the observations.  Perhaps Greg might have some thoughts on the 
explanations.

Matt.

From:  on behalf of Filip Grill 

Date: Wednesday, May 8, 2019 at 8:31 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Gambling task: clarification of Reward vs Loss being 
mostly random noise

Dear HCP users

A while ago I submitted a question regarding the gambling task to the mailing 
list and it was mentioned that the contrast reward>loss did not work very well 
(link to comment: 
https://www.mail-archive.com/hcp-users@humanconnectome.org/msg06435.html). In 
another thread regarding the gambling task Matt states that he would not trust 
the results of the Reward vs Loss contrast, as they are mostly random noise 
with some structured artefact (link to comment 
https://www.mail-archive.com/hcp-users@humanconnectome.org/msg06855.html).

I am wondering if we could get some more clarification on why this contrast did 
not work very well, and why it would contain random noise with some structured 
artefacts?

/Filip Grill



Re: [HCP-Users] Probabilistic tractography for dense connectome

2019-05-01 Thread Glasser, Matthew
Is that because these are native-space grayordinates instead of MNI-space 
grayordinates, and thus the masks are subject specific?

Matt.

From: Aaron C <aaroncr...@outlook.com>
Date: Wednesday, May 1, 2019 at 12:27 PM
To: Stamatios Sotiropoulos <stamatios.sotiropou...@nottingham.ac.uk>
Cc: Matt Glasser <glass...@wustl.edu>, 
"hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Probabilistic tractography for dense connectome

Hi Stam,

I tried your PreTractography script to generate these files needed for 
probtrackx2, and then used the following command (the command from the HCP 
course for generating dense connectome):

probtrackx2_gpu --samples=../T1w/Diffusion.bedpostX/merged \
--mask=../T1w/Diffusion.bedpostX/nodif_brain_mask \
--xfm=xfms/standard2acpc_dc \
--invxfm=xfms/acpc_dc2standard --seedref=T1w_restore.2.nii.gz \
--loopcheck --forcedir -c 0.2 --sampvox=2 --randfib=1 \
--stop=Connectomes/stop --wtstop=Connectomes/wtstop \
--waypoints=ROIs/Whole_Brain_Trajectory_ROI_2 \
-x ROIs/Whole_Brain_Trajectory_ROI_2 --omatrix3 \
--target3=Connectomes/Grayordinates.txt --dir=Connectomes
The command completed without error. I then used MATLAB to load the 
connectivity matrix:

x = load('fdt_matrix3.dot');
M = spconvert(x);

However, the dimension of M is only 86392 x 86392, not 91282 x 91282.

So I tried the same probtrackx2 command, but instead used the files from the 
HCP course virtual machine for the input to probtrackx2 (so this time I know 
the input files should be correct), but the dimension is still 86392 x 86392, 
not 91282 x 91282.
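As an aside, the spconvert step can be reproduced in Python in a way that makes 
the missing grayordinates explicit: forcing the matrix shape to 91282 pads the 
absent rows/columns with zeros, which can then be listed. This is only a sketch 
under assumptions: it presumes the plain-text "row col value" triplet format of 
fdt_matrix3.dot with 1-based indices, and load_dot/empty_grayordinates are 
hypothetical helper names.

```python
import numpy as np
from scipy.sparse import coo_matrix

def load_dot(path, n=91282):
    """Load an fdt_matrix3.dot-style file ("row col value" triplets,
    1-based indices) into an n x n sparse matrix.  Unlike MATLAB's
    spconvert, the shape is forced to n x n, so grayordinates that
    received no streamlines still get a (zero) row and column."""
    trip = np.loadtxt(path, ndmin=2)
    rows = trip[:, 0].astype(int) - 1
    cols = trip[:, 1].astype(int) - 1
    return coo_matrix((trip[:, 2], (rows, cols)), shape=(n, n)).tocsr()

def empty_grayordinates(M):
    """Indices with no connections at all -- candidates for the
    'missing' grayordinates when the spconvert matrix comes out small."""
    tot = (np.asarray(abs(M).sum(axis=0)).ravel()
           + np.asarray(abs(M).sum(axis=1)).ravel())
    return np.flatnonzero(tot == 0)
```

Because spconvert sizes the matrix to the largest index that appears, any 
grayordinate that received no streamlines simply vanishes from the dimensions; 
listing the all-zero rows/columns of the padded matrix is one way to see which 
of the 91282 indices those are.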

If possible, would you please give me some hints to find out the missing 
grayordinates in this connectivity matrix? Thank you!

Aaron

From: Stamatios Sotiropoulos <stamatios.sotiropou...@nottingham.ac.uk>
Sent: Friday, April 26, 2019 10:58 AM
To: Aaron C
Cc: Glasser, Matthew; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Probabilistic tractography for dense connectome

Hi Aaron

You need the PreTractography script, available in one of the branches of the 
WU-pipelines.

https://github.com/Washington-University/HCPpipelines/tree/diffusion-tractography/DiffusionTractography

Best wishes
Stam



On 26 Apr 2019, at 15:50, Aaron C <aaroncr...@outlook.com> wrote:

Hi Matt,

Thank you for letting me know. The full command I mentioned is as follows:

probtrackx2 --samples=../T1w/Diffusion.bedpostX/merged \
--mask=../T1w/Diffusion.bedpostX/nodif_brain_mask \
--xfm=xfms/standard2acpc_dc \
--invxfm=xfms/acpc_dc2standard --seedref=T1w_restore.2.nii.gz \
--loopcheck --forcedir -c 0.2 --sampvox=2 --randfib=1 \
--stop=Connectome/stop --wtstop=Connectome/wtstop \
--waypoints=ROIs/Whole_Brain_Trajectory_ROI_2 \
-x ROIs/Whole_Brain_Trajectory_ROI_2 --omatrix3 \
--target3=Connectomes/GrayOrdinates.txt --dir=Connectomes

It's in the HCP course practical "Fibre Orientation Models and Tractography 
Analysis" taught by Matteo Bastiani. Thank you.

From: Glasser, Matthew <glass...@wustl.edu>
Sent: Thursday, April 25, 2019 7:04 PM
To: Aaron C; hcp-users@humanconnectome.org
Cc: Stamatios Sotiropoulos
Subject: Re: [HCP-Users] Probabilistic tractography for dense connectome

Not as far as I am aware, but Stam might know.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Aaron C 
<aaroncr...@outlook.com>
Date: Thursday, April 25, 2019 at 9:15 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Probabilistic tractography for dense connectome

Dear HCP experts,

I have a question about the probabilistic tractography command used for 
generating dense connectome 
(https://wustl.app.box.com/s/wna2cu94pqgt8zskg687mj8zlmfj1pq7). Are there any 
shared scripts for generating "pial.L.asc", "white.L.asc", 
"Whole_Brain_Trajectory_ROI_2.nii.gz", and the files such as 
"CIFTI_STRUCTURE_ACCUMBENS_LEFT.nii.gz" used in the probabilistic tractography 
command?

Re: [HCP-Users] Reporting dense analysis results

2019-04-25 Thread Glasser, Matthew
If you want to make a cluster table, I think percent overlaps with areas is a 
very reasonable way to do it.  I would recommend you follow Tim’s suggestion 
with vertex areas as well.  I would strongly recommend sharing your data as 
well (if it is CIFTI/GIFTI/NIFTI, the balsa.wustl.edu database is designed for 
it).

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Timothy Coalson 
<tsc...@mst.edu>
Date: Thursday, April 25, 2019 at 1:55 PM
To: "Stevens, Michael" <michael.stev...@hhchealth.org>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Reporting dense analysis results

For just finding the overlap of some (positive-only) map with the parcels, the 
script would likely be a lot simpler if you used -cifti-parcellate with the 
"-method SUM" option (when doing so, I would also recommend using vertex areas, 
so that the resulting numbers are surface-area integrals rather than based on 
number of vertices).  You can then use -cifti-stats SUM to get the total, and 
divide by that in -cifti-math to get percentages.
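As a back-of-the-envelope illustration of the arithmetic behind that recipe 
(not a replacement for wb_command; the function name and inputs here are 
hypothetical), the per-parcel percentages reduce to area-weighted sums:

```python
import numpy as np

def parcel_percentages(values, labels, vertex_areas):
    """Area-weighted percent of a positive-only map falling in each parcel.
    values: dense map (one value per vertex); labels: integer parcel label
    per vertex; vertex_areas: surface area assigned to each vertex.
    Mirrors parcellating with a SUM method weighted by vertex areas,
    then dividing each parcel's sum by the grand total."""
    weighted = values * vertex_areas
    total = weighted.sum()
    return {int(p): 100.0 * weighted[labels == p].sum() / total
            for p in np.unique(labels)}
```

Weighting by vertex area rather than counting vertices is what makes the 
resulting numbers surface-area integrals, as suggested above.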

Sharing the data files of the results means that to some extent, tables may not 
be as necessary.  I don't have a strong opinion here.  Personally, I like 
figures, but I haven't done/used meta-analysis.

Tim


On Thu, Apr 25, 2019 at 8:50 AM Stevens, Michael 
<michael.stev...@hhchealth.org> wrote:
Hi folks,

Yesterday’s question/replies on reporting tables of pscalar results prompted us 
to ask about a related question – I’m wondering what HCP folks recommend in 
terms of the format of tabulating/reporting straightforward “activation 
results” for DENSE data?  I couldn’t find a prior listserv post that exactly 
addressed this question, nor did a couple passes through recently published 
literature using HCP methodology turn up a good example to follow.  Could be 
I’m just missing stuff…

We’re finishing up analyses on a somewhat conceptually novel analysis that we 
think might be received at peer review better if we report the dense results.  
So we sorta envision reporting a table of clusters/cluster peaks where we refer 
to the 2017 parcellation paper for annotations, e.g., “Cluster 1 – Left IFSp 
(72%), Left IFJa (26%), Left IFSa (2%)”.  To get there, I’m picturing a 
do-able, yet somewhat awkward combination of cluster finding calls, label file 
references, ROI definitions, finding peaks/center-of-mass, and then a whole 
bunch of -cifti-math operations to determine overlap of clusters vs. parcels… 
The number of steps/operations that would go into this is enough that I’m just 
brought up short thinking, “Wait, am I possibly missing something…”

Before I start going down this path in coding something like this up, I thought 
I’d check two things:

A) Is there a different conceptual approach altogether that you’d recommend 
considering for showcasing dense analysis results?  Our goal ultimately is to 
simply reinforce our results are fairly compatible with the demarcations of the 
360-parcel atlas to remove a potential reviewer criticism (this analysis is 
some weird stuff… using spontaneous fluctuations of electrodermal signals as 
event-onsets for fMRI timeseries analyses… amazingly, it seemed to work, with 
pretty interesting results that mirror our connectivity analyses on the same 
data).  But if HCP has an entirely different approach to tabulating/summarizing 
dense results, we’d welcome being brought up-to-speed.

B) The lazy part of me wonders… Has someone already coded up a workbench 
function 
call or even a script for the various wb_commands needed that might already do 
this sort of thing with dense data?  Again, this seems so meat-and-potatoes for 
fMRI that we don’t want to re-invent the wheel here.

Thanks,
Mike


This e-mail message, including any attachments, is for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
Any unauthorized review, use, disclosure, or distribution is prohibited. If you 
are not the intended recipient, or an employee or agent responsible for 
delivering the message to the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message, including any 
attachments.


Re: [HCP-Users] Probabilistic tractography for dense connectome

2019-04-25 Thread Glasser, Matthew
Not as far as I am aware, but Stam might know.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Aaron C mailto:aaroncr...@outlook.com>>
Date: Thursday, April 25, 2019 at 9:15 AM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Probabilistic tractography for dense connectome

Dear HCP experts,

I have a question about the probabilistic tractography command used for 
generating dense connectome 
(https://wustl.app.box.com/s/wna2cu94pqgt8zskg687mj8zlmfj1pq7). Are there any 
shared scripts for generating "pial.L.asc", "white.L.asc", 
"Whole_Brain_Trajectory_ROI_2.nii.gz", and the files such as 
"CIFTI_STRUCTURE_ACCUMBENS_LEFT.nii.gz" used in the probabilistic tractography 
command?



Re: [HCP-Users] "activation" tables for reporting pscalar results

2019-04-24 Thread Glasser, Matthew
I do not report MNI coordinates for any studies because I don’t think they add 
a lot of value for the reasons described in Tim’s PNAS paper.  Studies that 
provide actual results on the surface are much more useful, and I have 
extensively used such studies to make incisive neuroanatomical comparisons.  
I’ve also yet to be asked to provide MNI coordinates by a peer reviewer.  I 
think if you share the actual data, MNI coordinates are superfluous and if you 
use a well defined neuroanatomical parcellation such as the HCP’s multi-modal 
parcellation, it is fine to talk about findings in particular brain areas (if 
you actually check to see that your findings overlap with the brain areas you 
name—i.e. don’t just eyeball vs a picture on the wall).

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Timothy Coalson 
<tsc...@mst.edu>
Date: Wednesday, April 24, 2019 at 1:47 PM
To: Joseph Orr <joseph@tamu.edu>
Cc: HCP Users <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] "activation" tables for reporting pscalar results

We recommend sharing the results as data files (as mentioned, this is the 
intent of BALSA), even if you choose to report MNI coordinates in the text.  
Something to keep in mind is that group average surfaces do not behave like 
group average volume data, the surface gets smoothed out wherever folding 
patterns aren't fully aligned, resulting in a surface that does not approach 
gyral crowns or sulcal fundi (most notably with functional alignment such as 
MSMAll - freesurfer-aligned surfaces will average to something with more 
folding preserved, at the cost of functional locality, but there are still 
locations with high variability in folding patterns across subjects that will 
still get smoothed out on a group average surface).  See supplementary 
material, figure S1, and figure S9 panel B2, from our paper on the effects of 
volume-based methods:

https://www.ncbi.nlm.nih.gov/pubmed/29925602

If meta analysis of this sort is only intended to give a very rough idea of 
location, even this may not be a deal breaker.  You can use wb_command 
-surface-coordinates-to-metric to get the coordinates as data, use 
-cifti-create-dense-from-template to convert that to cifti, and then use 
-cifti-parcellate on that to get center of gravity coordinates of the vertices 
used.  Note that these center of gravity coordinates could be a distance away 
from the surface, due to curvature.
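The parcellated center-of-gravity step Tim describes boils down to averaging 
vertex coordinates over a parcel; a minimal sketch with hypothetical inputs 
(coords is an n_vertices x 3 array, labels an integer parcel label per vertex):

```python
import numpy as np

def parcel_center_of_gravity(coords, labels, parcel):
    """Mean 3-D coordinate of the vertices in one parcel -- equivalent in
    spirit to converting coordinates to metric data and parcellating with
    a MEAN method.  As noted above, the result can lie off the surface
    wherever the parcel is curved."""
    return coords[labels == parcel].mean(axis=0)
```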

Tim


On Wed, Apr 24, 2019 at 11:06 AM Joseph Orr <joseph@tamu.edu> wrote:
True - these kind of tools generally assume certain degrees of smoothing, which 
isn't the case with surface-based. And activation based meta-analysis will 
apply a kernel that will likely extend outside the brain for a surface 
activation that is not within a sulcus. I'd be curious to hear what those more 
familiar with meta-analytic methods think about how surface-based results can 
be incorporated with volumetric results.
--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Wed, Apr 24, 2019 at 11:00 AM Harms, Michael <mha...@wustl.edu> wrote:

Well, that raises the question if surface-based results should just be 
automatically “lumped in” with volume-based results by tools such as neurosynth 
to begin with…

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110  Email: 
mha...@wustl.edu

From: Joseph Orr <joseph@tamu.edu>
Date: Wednesday, April 24, 2019 at 10:51 AM
To: "Harms, Michael" <mha...@wustl.edu>
Cc: HCP Users <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] "activation" tables for reporting pscalar results

Well I am planning on doing that, but that doesn't necessarily help with 
automated meta-analytic tools like neurosynth that mine for tables.
--
Joseph M. Orr, Ph.D.
Assistant Professor
Department of Psychological and Brain Sciences
Texas A&M Institute for Neuroscience
Texas A&M University
College Station, TX


On Wed, Apr 24, 2019 at 10:36 AM Harms, Michael <mha...@wustl.edu> wrote:

Why not simply report the parcel name and its values?  And consider putting the 
scene on BALSA, so that others can easily access the data.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110  Email: 
mha...@wustl.edu


Re: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz volume data for some (one?) subjects?

2019-04-23 Thread Glasser, Matthew
Also ReApplyFixMultiRunPipeline.sh is not affected at all by this bug.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Matt Glasser mailto:glass...@wustl.edu>>
Date: Tuesday, April 23, 2019 at 7:47 PM
To: Timothy Coalson mailto:tsc...@mst.edu>>, "Harms, Michael" 
mailto:mha...@wustl.edu>>
Cc: HCP Users 
mailto:HCP-Users@humanconnectome.org>>
Subject: Re: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz 
volume data for some (one?) subjects?

This is a bug affecting the use of ReApplyFix to re-clean data using a hand 
classification instead of the original FIX classification (and does not affect 
the use of ReApplyFix in MSMAll).  It has been present since the release of 
ReApplyFix and will affect the volume data of all HCP-YA subjects for which a 
hand classification was used (list will be forthcoming).  The bug will be fixed 
in the next bugfix release of the HCP Pipelines and the data corrected.  Sorry 
about this.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Timothy Coalson mailto:tsc...@mst.edu>>
Date: Tuesday, April 23, 2019 at 6:12 PM
To: "Harms, Michael" mailto:mha...@wustl.edu>>
Cc: HCP Users 
mailto:HCP-Users@humanconnectome.org>>
Subject: Re: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz 
volume data for some (one?) subjects?

Correction, the issue to follow is #107:

https://github.com/Washington-University/HCPpipelines/issues/107

Tim


On Tue, Apr 23, 2019 at 4:35 PM Harms, Michael 
mailto:mha...@wustl.edu>> wrote:

For users that want to follow this, please see:
https://github.com/Washington-University/HCPpipelines/issues/108

It has something to do with the fact that we needed to apply manual 
reclassification of the FIX output in that particular subject/run.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110  Email: 
mha...@wustl.edu

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Keith Jamison mailto:kjami...@umn.edu>>
Date: Tuesday, April 23, 2019 at 3:59 PM
To: HCP Users 
mailto:HCP-Users@humanconnectome.org>>
Subject: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz volume 
data for some (one?) subjects?

For subject 204218, both REST1_LR and REST1_RL, I noticed a linear trend in the 
*_hp2000_clean.nii.gz NIFTI time series, but the hp2000_clean.dtseries.nii 
CIFTI files do not have this trend. See attached figures showing this issue for 
both REST1_LR and REST1_RL for 204218. The overall mean time series has a 
negative trend for NIFTI, but in the voxel time series on the left you can see 
that some have positive trend and some have negative. To test, I did run 
fslmaths-based filtering on hp2000_clean.nii.gz and I no longer see any linear 
trend.

I tried one scan in one additional subject, 102311 REST1_LR, and did not see 
this linear trend in either NIFTI or CIFTI (also attached).

Note: I did remove the overall mean for each voxel timecourse before
plotting, and for the NIFTI I'm only showing gray matter voxels, as determined 
by downsampling aparc+aseg.nii.gz and excluding labels for WM,CSF,ventricles, 
and a few misc. I also tried looking at all non-zero voxels, as well as only 
those marked in RibbonVolumeToSurfaceMapping/goodvoxels.nii.gz, but the issue 
of linear trends is the same.

Any idea what might be going on with this subject? I haven't tried this in 
anyone other than 204218 (bad) and 102311 (good).

-Keith




Re: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz volume data for some (one?) subjects?

2019-04-23 Thread Glasser, Matthew
This is a bug affecting the use of ReApplyFix to re-clean data using a hand 
classification instead of the original FIX classification (and does not affect 
the use of ReApplyFix in MSMAll).  It has been present since the release of 
ReApplyFix and will affect the volume data of all HCP-YA subjects for which a 
hand classification was used (list will be forthcoming).  The bug will be fixed 
in the next bugfix release of the HCP Pipelines and the data corrected.  Sorry 
about this.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Timothy Coalson mailto:tsc...@mst.edu>>
Date: Tuesday, April 23, 2019 at 6:12 PM
To: "Harms, Michael" mailto:mha...@wustl.edu>>
Cc: HCP Users 
mailto:HCP-Users@humanconnectome.org>>
Subject: Re: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz 
volume data for some (one?) subjects?

Correction, the issue to follow is #107:

https://github.com/Washington-University/HCPpipelines/issues/107

Tim


On Tue, Apr 23, 2019 at 4:35 PM Harms, Michael 
mailto:mha...@wustl.edu>> wrote:

For users that want to follow this, please see:
https://github.com/Washington-University/HCPpipelines/issues/108

It has something to do with the fact that we needed to apply manual 
reclassification of the FIX output in that particular subject/run.

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110  Email: 
mha...@wustl.edu

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Keith Jamison mailto:kjami...@umn.edu>>
Date: Tuesday, April 23, 2019 at 3:59 PM
To: HCP Users 
mailto:HCP-Users@humanconnectome.org>>
Subject: [HCP-Users] hp2000 filter not applied to hp2000_clean.nii.gz volume 
data for some (one?) subjects?

For subject 204218, both REST1_LR and REST1_RL, I noticed a linear trend in the 
*_hp2000_clean.nii.gz NIFTI time series, but the hp2000_clean.dtseries.nii 
CIFTI files do not have this trend. See attached figures showing this issue for 
both REST1_LR and REST1_RL for 204218. The overall mean time series has a 
negative trend for NIFTI, but in the voxel time series on the left you can see 
that some have positive trend and some have negative. To test, I did run 
fslmaths-based filtering on hp2000_clean.nii.gz and I no longer see any linear 
trend.

I tried one scan in one additional subject, 102311 REST1_LR, and did not see 
this linear trend in either NIFTI or CIFTI (also attached).

Note: I did remove the overall mean for each voxel timecourse before
plotting, and for the NIFTI I'm only showing gray matter voxels, as determined 
by downsampling aparc+aseg.nii.gz and excluding labels for WM,CSF,ventricles, 
and a few misc. I also tried looking at all non-zero voxels, as well as only 
those marked in RibbonVolumeToSurfaceMapping/goodvoxels.nii.gz, but the issue 
of linear trends is the same.

Any idea what might be going on with this subject? I haven't tried this in 
anyone other than 204218 (bad) and 102311 (good).

-Keith





Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

2019-04-22 Thread Glasser, Matthew
A few weeks maybe if you want a pre-release version.

Matt.

From: Yizhou Ma <maxxx...@umn.edu>
Date: Monday, April 22, 2019 at 9:06 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

Thank you Matt. This is really helpful. Any idea when the new classifier you 
mentioned in 1. will be available?

On Mon, Apr 22, 2019 at 8:50 PM Glasser, Matthew <glass...@wustl.edu> wrote:
I guess I haven’t been in the habit of throwing out data like this.  Things I 
would consider would include:

  1.  MR+FIX classification accuracy (if runs were poorly classified, they 
won’t be denoised well).  I’ll note that we are training an improved MR+FIX 
classifier using a combination of HCP-YA resting state (single run FIX), HCP-YA 
task (MR+FIX), and HCP Lifespan (MR+FIX) to address classification issues we 
have observed with very large numbers of components, subject with very large 
amounts of motion, and other artifacts that were not a part of the HCP-YA 
original training data.
  2.  Unusually small numbers of signal components (though note we found a 
recent subtle bug whereby if melodic does not finish mixture modeling 
components, FIX will fail to classify signal components correctly).  If there 
are few signal components this means that either the SNR is very bad or the 
structured noise has overwhelmed the signal and mixed in too much with the 
signal, making it hard to separate.
  3.  DVARS spikes above baseline (not dips below baseline) in the cleaned 
timeseries suggest residual noise.  I prefer DVARS-derived measures to 
movement-trace-derived measures because they tell you something about what is 
actually happening to the intensities inside the data, whereas movement traces 
may be inaccurate reflections of signal intensity fluctuations for a variety 
of reasons (see Glasser et al. 2018 NeuroImage: 
https://www.sciencedirect.com/science/article/pii/S1053811918303963 for 
examples).
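For readers who want to compute point 3 themselves: in its simplest, 
unnormalized form, DVARS is just the root-mean-square across voxels of the 
frame-to-frame intensity difference. A minimal sketch with a hypothetical 
voxels-by-timepoints array (published variants additionally normalize by the 
global mean intensity):

```python
import numpy as np

def dvars(data):
    """Unnormalized DVARS: RMS across voxels of the backward temporal
    difference, one value per frame transition.  data is shaped
    (n_voxels, n_timepoints).  Spikes above the series' baseline flag
    frames where cleanup left residual structured noise."""
    diff = np.diff(data, axis=1)
    return np.sqrt(np.mean(diff ** 2, axis=0))
```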

Others in the HCP used different means to identify some of the noise components 
I mentioned above that weren’t being classified correctly by regular FIX, and 
might be able to share their suggestions.

Matt.

From: Yizhou Ma <maxxx...@umn.edu>
Date: Monday, April 22, 2019 at 8:25 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

Thank you Matt. Do you have some suggestions for the metrics to use to 
determine scan quality after ICA FIX?

Thanks,
Cherry

On Mon, Apr 22, 2019 at 8:15 PM Glasser, Matthew <glass...@wustl.edu> wrote:
I would decide after cleaning with MR ICA+FIX if you actually have to exclude 
the scans and run with them all.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Yizhou Ma 
<maxxx...@umn.edu>
Date: Monday, April 22, 2019 at 3:47 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Multi-run ICA-FIX with excessive movement

Dear HCP experts,

I am writing for a question with multi-run ICA-FIX for my dataset. I have 4 
resting state scans (TR=0.8, length=6.5min each) and 3 task scans (TR=0.8, 
length=6min each) that I intend to run multi-run ICA-FIX on. We used Euclidean 
norm values to threshold volumes with excessive movement and decided that scans 
with more than 20% volumes with excessive movement are not usable. I wonder 
with multi-run ICA-FIX, if it would be problematic to include these scans. In 
other words, I am trying to decide if I should 1) run multi-run ICA-FIX on 
scans with less motion, in which case each subject may have a different number 
of 
scans that are included in multi-run ICA-FIX; or 2) run multi-run ICA-FIX on 
all scans, and throw out scans with excessive motion afterward.
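As an illustration of the thresholding described above, a sketch of the 
Euclidean-norm computation (the column ordering, the radians-to-mm radius, and 
the function name are assumptions here; check your realignment tool's 
conventions):

```python
import numpy as np

def frac_high_motion(motion_params, thresh, radius=50.0):
    """Fraction of volumes whose framewise Euclidean norm of the
    motion-parameter derivatives exceeds thresh.  motion_params is
    (n_volumes, 6), assumed to hold rotations (radians) in the first
    three columns and translations (mm) in the last three; rotations
    are converted to mm of displacement on a sphere of the given radius."""
    d = np.diff(motion_params, axis=0)
    d[:, :3] *= radius                      # radians -> mm on the sphere
    enorm = np.sqrt((d ** 2).sum(axis=1))
    enorm = np.concatenate([[0.0], enorm])  # first volume has no predecessor
    return np.mean(enorm > thresh)
```

A subject would fail the 20% criterion described above when this fraction 
exceeds 0.2 for a given scan.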

Thank you very much,
Cherry


Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

2019-04-22 Thread Glasser, Matthew
I guess I haven’t been in the habit of throwing out data like this.  Things I 
would consider would include:

  1.  MR+FIX classification accuracy (if runs were poorly classified, they 
won’t be denoised well).  I’ll note that we are training an improved MR+FIX 
classifier using a combination of HCP-YA resting state (single run FIX), HCP-YA 
task (MR+FIX), and HCP Lifespan (MR+FIX) to address classification issues we 
have observed with very large numbers of components, subjects with very large 
amounts of motion, and other artifacts that were not a part of the HCP-YA 
original training data.
  2.  Unusually small numbers of signal components (though note we found a 
recent subtle bug whereby, if melodic does not finish mixture modeling the 
components, FIX will fail to classify signal components correctly).  If there 
are few signal components, either the SNR is very poor or the structured noise 
has overwhelmed the signal and mixed with it too much to separate cleanly.
  3.  DVARS spikes above baseline (not dips below baseline) in the cleaned 
timeseries suggest residual noise.  I prefer DVARS-derived measures to 
movement-trace-derived measures because they tell you something about what is 
actually happening to the intensities in the data, whereas movement traces may 
be inaccurate reflections of signal intensity fluctuations for a variety of 
reasons (see Glasser et al. 2018, NeuroImage: 
https://www.sciencedirect.com/science/article/pii/S1053811918303963 for 
examples).

Others in the HCP used different means to identify some of the noise components 
I mentioned above that weren’t being classified correctly by regular FIX, and 
might be able to share their suggestions.
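As a rough illustration of point 3, DVARS can be computed from a cleaned timeseries as the root-mean-square, across voxels or grayordinates, of the backward temporal difference. This is a minimal hypothetical sketch, not HCP pipeline code, and it uses the unnormalized convention:

```python
import numpy as np

def dvars(timeseries):
    """Unnormalized DVARS: per-volume RMS (across voxels/grayordinates) of the
    backward temporal difference. timeseries: (T, V) array; returns a length-T
    array with 0 for the first volume. Published variants rescale this (e.g.
    by mean intensity), so pick one convention and use it consistently.
    """
    ts = np.asarray(timeseries, dtype=float)
    d = np.diff(ts, axis=0)                       # volume-to-volume change
    return np.concatenate([[0.0], np.sqrt((d ** 2).mean(axis=1))])
```

Spikes in this trace that rise above its baseline after cleaning flag volumes with residual artifact.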

Matt.

From: Yizhou Ma mailto:maxxx...@umn.edu>>
Date: Monday, April 22, 2019 at 8:25 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

Thank you Matt. Do you have some suggestions for the metrics to use to 
determine scan quality after ICA FIX?

Thanks,
Cherry

On Mon, Apr 22, 2019 at 8:15 PM Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
I would decide after cleaning with MR ICA+FIX if you actually have to exclude 
the scans and run with them all.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Yizhou Ma mailto:maxxx...@umn.edu>>
Date: Monday, April 22, 2019 at 3:47 PM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Multi-run ICA-FIX with excessive movement

Dear HCP experts,

I am writing with a question about multi-run ICA-FIX for my dataset. I have 4 
resting-state scans (TR=0.8 s, length 6.5 min each) and 3 task scans (TR=0.8 s, 
length 6 min each) that I intend to run multi-run ICA-FIX on. We used Euclidean 
norm values to threshold volumes with excessive movement and decided that scans 
in which more than 20% of volumes show excessive movement are not usable. I 
wonder whether, with multi-run ICA-FIX, it would be problematic to include 
these scans. In other words, I am trying to decide whether I should 1) run 
multi-run ICA-FIX only on scans with less motion, so that each subject may have 
a different number of scans included in multi-run ICA-FIX; or 2) run multi-run 
ICA-FIX on all scans and throw out scans with excessive motion afterward.

Thank you very much,
Cherry



Re: [HCP-Users] Multi-run ICA-FIX with excessive movement

2019-04-22 Thread Glasser, Matthew
I would decide after cleaning with MR ICA+FIX if you actually have to exclude 
the scans and run with them all.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Yizhou Ma mailto:maxxx...@umn.edu>>
Date: Monday, April 22, 2019 at 3:47 PM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Multi-run ICA-FIX with excessive movement

Dear HCP experts,

I am writing with a question about multi-run ICA-FIX for my dataset. I have 4 
resting-state scans (TR=0.8 s, length 6.5 min each) and 3 task scans (TR=0.8 s, 
length 6 min each) that I intend to run multi-run ICA-FIX on. We used Euclidean 
norm values to threshold volumes with excessive movement and decided that scans 
in which more than 20% of volumes show excessive movement are not usable. I 
wonder whether, with multi-run ICA-FIX, it would be problematic to include 
these scans. In other words, I am trying to decide whether I should 1) run 
multi-run ICA-FIX only on scans with less motion, so that each subject may have 
a different number of scans included in multi-run ICA-FIX; or 2) run multi-run 
ICA-FIX on all scans and throw out scans with excessive motion afterward.

Thank you very much,
Cherry



Re: [HCP-Users] DeDriftAndResamplePipeline error

2019-04-19 Thread Glasser, Matthew
It does seem to have completed successfully.  I wonder if “case” doesn’t work 
the same on Mac?

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Thursday, April 18, 2019 at 11:19 PM
To: Timothy Coalson mailto:tsc...@mst.edu>>
Cc: Matt Glasser mailto:glass...@wustl.edu>>, "Harwell, 
John" mailto:jharw...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] DeDriftAndResamplePipeline error

I think it worked now. The script finished pretty fast without prompting an 
error on the screen except for the following found In 
DeDriftAndResamplePipeline.sh.e42974:

 # Do NOT wrap the following in quotes (o.w. the entire set of commands gets 
interpreted as a single string)
 |
Error: The input character is not valid in MATLAB statements or expressions.



 # Do NOT wrap the following in quotes (o.w. the entire set of commands gets 
interpreted as a single string)
 |
Error: The input character is not valid in MATLAB statements or expressions.


I am attaching the log files to make sure the DeDriftAndResamplePipeline.sh 
script was completed successfully.

Thanks a lot!

Leah.


On Apr 18, 2019, at 2:45 PM, Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:

Make sure you have the whole pipelines repo for 4.0.0, do not try to mix and 
match folders from different versions, and make sure your setup script is 
pointed to the 4.0.0 version when running things from 4.0.0.  The log_Warn 
function is defined inside global/scripts, and it should get sourced 
automatically based on HCPPIPEDIR, so make sure that is set correctly (pointed 
to the 4.0.0 version).
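The setup Tim describes can be sketched as shell along these lines; the install path is an example, and the `log.shlib` file name is my assumption about where `log_Warn` is defined, so verify both against your 4.0.0 tree:

```shell
# Sketch only: the path and library file name are assumptions -- confirm
# them against your HCPpipelines 4.0.0 checkout.
export HCPPIPEDIR=/opt/HCPpipelines-4.0.0
source "$HCPPIPEDIR/Examples/Scripts/SetUpHCPPipeline.sh"  # environment setup
# log_Warn and friends are expected to live in the shared logging library:
source "$HCPPIPEDIR/global/scripts/log.shlib"
```

If `log_Warn: command not found` persists, the script being run and the tree `HCPPIPEDIR` points at are almost certainly from different versions.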

Tim


On Thu, Apr 18, 2019 at 1:39 PM Marta Moreno 
mailto:mmorenoort...@icloud.com>> wrote:
Thanks for your response, and sorry to bother you again with this issue, but I 
am still getting the following error: ReApplyFixMultiRunPipeline.sh: line 592: 
log_Warn: command not found

Please find log files attached.

Pipelines for MR+FIX, MSMAll, and DeDriftAndResample are from version 4.0.0.
PreFreeSurfer, FreeSurfer, PostFreeSurfer, fMRIVolume, and fMRISurface are from 
version 3_22.
Since MR+FIX and MSMAll ran successfully, why should it be a version issue in 
ReApplyFixMultiRunPipeline.sh?

I want to be sure this is a version issue because I have already run 
PreFreeSurfer, FreeSurfer, PostFreeSurfer, fMRIVolume, and fMRISurface version 
3_22 on a sample of 30 patients pre/post tx.

Thanks a lot for your help and patience.

Leah.



On Apr 15, 2019, at 9:39 PM, Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:

I would also suggest changing your log level to INFO in wb_view, preferences 
(the wb_command option does not store the logging level change to preferences). 
 We should probably change the default level, or change the level of that 
volume coloring message.

Tim


On Mon, Apr 15, 2019 at 8:34 PM Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
I have pushed a similar edit to reapply MR fix, please update to the latest 
master.

Tim


On Mon, Apr 15, 2019 at 8:27 PM Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
They weren't instructions, I pushed an edit, and it was a different script.

Tim


On Mon, Apr 15, 2019 at 8:08 PM Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
Here is the error:

readlink: illegal option -- f
usage: readlink [-n] [file ...]

I believe Tim already gave you instructions for this.

Also, the log_Warn line is again concerning as to whether you followed the 
installation instructions and all version 4.0.0 files here.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Monday, April 15, 2019 at 8:53 AM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: HCP Users 
mailto:hcp-users@humanconnectome.org>>, Timothy 
Coalson mailto:tsc...@mst.edu>>, "Brown, Tim" 
mailto:tbbr...@wustl.edu>>
Subject: Re: [HCP-Users] DeDriftAndResamplePipeline error

I had to re-run DeDriftAndResamplePipeline twice because it was searching for 
settings.sh in the wrong place, and now I am getting the following error 
message:
ReApplyFixMultiRunPipeline.sh: line 586: log_Warn: command not found

I am attaching log files.

Does the folder containing fix1.067 need to include all the ICAFIX files?

Thanks a lot!

Leah.


On Apr 15, 2019, at 12:19 AM, Marta Moreno 
mailto:mmorenoort...@icloud.com>> wrote:

It seems to be working now. Thanks a lot!

Leah.

On Apr 15, 2019, at 12:04 AM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

If you ran MR+FIX, you need to set these appropriately

MRFixConcatName="NONE"
MRFixNames="NONE"

And not set

fixNames="RS_fMRI_1 RS_fMRI_2" #Space delimited list or NONE

https://github.com/Washington-University/HCPpipelines/blob/master/Examples/Scripts/DeDriftAndResamplePipelineBatch.sh
Also it looks like line 124 needs an “s” on the end of the flag name to read 
--multirun-fix-concat-names=${MRFixConcatName}

M

Re: [HCP-Users] Assigning the results of -cifti-correlation to appropriate resting state networks

2019-04-17 Thread Glasser, Matthew
I think you guys already have the corresponding label file.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Timothy Coalson mailto:tsc...@mst.edu>>
Date: Wednesday, April 17, 2019 at 1:10 PM
To: "Jayasekera, Dinal" 
mailto:dinal.jayasek...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Assigning the results of -cifti-correlation to 
appropriate resting state networks

I don't know which parcels are assigned to each network, but if you need to 
know the current order of the parcels, wb_command -file-information will show 
that.

If you have a dlabel file with the networks as labels, you can put that through 
-cifti-parcellate to get each parcel labeled with its majority network, and if 
you want a dlabel file containing the reordering you used for the parcels, you 
can use -cifti-parcel-mapping-to-label.
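Tim's suggestion can be sketched as a command sequence like the following; the file names are placeholders, and the exact option spellings should be checked with `wb_command <command> -help` before running:

```shell
# 1) Inspect the current parcel order and labels:
wb_command -file-information parcels.dlabel.nii

# 2) Label each parcel with its majority network by parcellating a
#    network dlabel file with the parcel dlabel file:
wb_command -cifti-parcellate networks.dlabel.nii parcels.dlabel.nii COLUMN \
    parcel_networks.pscalar.nii -method MODE

# 3) Turn a reordered parcellated file's mapping back into a dlabel file:
wb_command -cifti-parcel-mapping-to-label reordered.ptseries.nii COLUMN \
    parcels.dlabel.nii reordered_parcels.dlabel.nii
```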

Tim


On Wed, Apr 17, 2019 at 10:32 AM Jayasekera, Dinal 
mailto:dinal.jayasek...@wustl.edu>> wrote:

Dear all,


I have run several wb_command operations, specifically -cifti-parcellate, 
-cifti-reorder, and -cifti-correlation, on some functional connectivity data to 
extract and plot an adjacency matrix in MATLAB. The matrix that is generated is 
360x360 (360 parcels x 360 parcels), and I'm trying to figure out how to 
identify which parcels of the matrix belong to each resting-state network (at 
least 5 RSNs for now).


Does anyone have a file/GUI that would enable me to automatically identify each 
RSN's parcels in MATLAB?


Kind regards,
Dinal Jayasekera

PhD Candidate | InSITE Fellow
Ammar Hawasli Lab
Department of Biomedical Engineering 
| Washington University in St. Louis



Re: [HCP-Users] question about making RSFC matrices available

2019-04-16 Thread Glasser, Matthew
Hi Tim,

I guess we have treated such derivative results similarly and put them behind 
the HCP data use terms.  One easy solution would be to upload the results to 
the BALSA database and add the HCP data use terms.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Timothy Coalson mailto:tsc...@mst.edu>>
Date: Tuesday, April 16, 2019 at 5:11 PM
To: "Burgess, Gregory" mailto:gburg...@wustl.edu>>
Cc: "Curtiss, Sandy" mailto:scurt...@wustl.edu>>, 李婧玮 
mailto:allhappylife...@gmail.com>>, Thomas Yeo 
mailto:yeoye...@gmail.com>>, 
"hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] question about making RSFC matrices available

However, he is not sharing the HCP data files themselves, but only results 
obtained from using them.  The data use terms only state that the *original 
data* must be distributed under the same terms.  Derived data appears to only 
be covered by "all relevant rules and regulations imposed by my institution", 
and that paragraph looks like it mainly exists to remind users that the data is 
not considered de-identified.

Tim


On Tue, Apr 16, 2019 at 4:56 PM Burgess, Gregory 
mailto:gburg...@wustl.edu>> wrote:
I believe that the Open Access Data Use terms 
(https://www.humanconnectome.org/study/hcp-young-adult/document/wu-minn-hcp-consortium-open-access-data-use-terms)
 require that anyone receiving the data must have first agreed to the Open 
Access Data Use Terms. If my understanding is correct, that would mean that you 
would need to verify that the recipient has accepted the terms before sharing 
with them. I doubt that would be easy to manage on a public site.

--Greg


Greg Burgess, Ph.D.
Senior Scientist, Human Connectome Project
Washington University School of Medicine
Department of Psychiatry
Phone: 314-362-7864
Email: gburg...@wustl.edu

On Apr 16, 2019, at 1:24 PM, Thomas Yeo 
mailto:yeoye...@gmail.com>> wrote:

Hi,

Maybe this question has already been answered before, but we have been working 
on the HCP data and have computed our own derivatives, e.g., FC matrices of 
individual subjects.

Is it ok to share these matrices with accompanying HCP subject IDs on our 
personal github/website? If not, how do you suggest we can share these 
derivatives?

Thanks,
Thomas



Re: [HCP-Users] DeDriftAndResamplePipeline error

2019-04-15 Thread Glasser, Matthew
Here is the error:


readlink: illegal option -- f

usage: readlink [-n] [file ...]

I believe Tim already gave you instructions for this.

Also, the log_Warn line is again concerning as to whether you followed the 
installation instructions and all version 4.0.0 files here.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Monday, April 15, 2019 at 8:53 AM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: HCP Users 
mailto:hcp-users@humanconnectome.org>>, Timothy 
Coalson mailto:tsc...@mst.edu>>, "Brown, Tim" 
mailto:tbbr...@wustl.edu>>
Subject: Re: [HCP-Users] DeDriftAndResamplePipeline error

I had to re-run DeDriftAndResamplePipeline twice because it was searching for 
settings.sh in the wrong place, and now I am getting the following error 
message:
ReApplyFixMultiRunPipeline.sh: line 586: log_Warn: command not found

I am attaching log files.

Does the folder containing fix1.067 need to include all the ICAFIX files?

Thanks a lot!

Leah.


On Apr 15, 2019, at 12:19 AM, Marta Moreno 
mailto:mmorenoort...@icloud.com>> wrote:

It seems to be working now. Thanks a lot!

Leah.

On Apr 15, 2019, at 12:04 AM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

If you ran MR+FIX, you need to set these appropriately

MRFixConcatName="NONE"
MRFixNames="NONE"

And not set

fixNames="RS_fMRI_1 RS_fMRI_2" #Space delimited list or NONE

https://github.com/Washington-University/HCPpipelines/blob/master/Examples/Scripts/DeDriftAndResamplePipelineBatch.sh
Also it looks like line 124 needs an “s” on the end of the flag name to read 
--multirun-fix-concat-names=${MRFixConcatName}

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Sunday, April 14, 2019 at 10:56 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] DeDriftAndResamplePipeline error

Thanks a lot for your response.

I am running v4.0.0 now, and I have set up the script as follows:

HighResMesh="164"
LowResMesh="32"
RegName="MSMAll_InitalReg_2_d40_WRN"
DeDriftRegFiles="${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.L.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii@${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.R.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii"
ConcatRegName="MSMAll_Test"
Maps="sulc curvature corrThickness thickness"
MyelinMaps="MyelinMap SmoothedMyelinMap" #No _BC, this will be reapplied
MRFixConcatName="NONE"
MRFixNames="NONE"
#fixNames="rfMRI_REST1_LR rfMRI_REST1_RL rfMRI_REST2_LR rfMRI_REST2_RL" #Space 
delimited list or NONE
fixNames="RS_fMRI_1 RS_fMRI_2" #Space delimited list or NONE
#dontFixNames="tfMRI_WM_LR tfMRI_WM_RL tfMRI_GAMBLING_LR tfMRI_GAMBLING_RL 
tfMRI_MOTOR_LR tfMRI_MOTOR_RL tfMRI_LANGUAGE_LR tfMRI_LANGUAGE_RL 
tfMRI_SOCIAL_LR tfMRI_SOCIAL_RL tfMRI_RELATIONAL_LR tfMRI_RELATIONAL_RL 
tfMRI_EMOTION_LR tfMRI_EMOTION_RL" #Space delimited list or NONE
dontFixNames="NONE"
SmoothingFWHM="2" #Should equal previous grayordinates smoothing (because we 
are resampling from unsmoothed native mesh timeseries)
HighPass="0"
MotionRegression=TRUE
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab, Mode=2 octave
#MatlabMode="0" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab, Mode=2 
octave

But the script does not run and it is aborted with the following message:
DeDriftAndResamplePipeline.sh - ABORTING: unrecognized option: 
--multirun-fix-concat-name=NONE

I am attaching the log files.

Leah.


On Apr 14, 2019, at 11:10 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

In this case you do run it with the individual fMRI names and that doesn’t 
look like the version 4.0.0 example script...

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Marta Moreno 
mailto:mmorenoort...@icloud.com>>
Date: Sunday, April 14, 2019 at 10:06 PM
To: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] DeDriftAndResamplePipeline error

Dear Experts,

I have run DeDriftAndResamplePipelineBatch.sh from 
${StudyFolder}/${Subject}/scripts after running MSMAll and am getting the 
following error:

While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -metric-resample 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR.L.native.func.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Native/NTTMS_s002_170812.L.sphere.MSMAll_Test.native.surf.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/fsaverage_LR32k/NTTMS_s002_170812.L.sphere.32k_fs_LR.surf.gii
 ADAP_BARY_AREA 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/R

Re: [HCP-Users] Motion files missing for the HCP data

2019-04-15 Thread Glasser, Matthew
This is an excellent question for the HCP users list where folks who know the 
REST interface better than me can help.

Matt.

From: "Li, Jin" 
mailto:li.9...@buckeyemail.osu.edu>>
Date: Monday, April 15, 2019 at 2:53 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Subject: Motion files missing for the HCP data

Dear Dr. Glasser,

Sorry to reach out directly via email. I am Jin Li, from the Saygin Cognitive 
Neuroscience Lab at OSU. I am currently working on the HCP data and want to use 
the motion files. I looked through your documentation, which says there should 
be Movement_*.txt files in the FIX_extended package, but when I download the 
files through ConnectomeDB I don't see them.

I also tried this:

 curl -u username:password -O 
https://db.humanconnectome.org/data/archive/projects/HCP_1200/subjects/100307/experiments/100307_CREST/resources/100307_CREST/files/MNINonLinear/Results/rfMRI_REST1_LR/Movement_Regressors.txt

Error message in the output file:

Apache Tomcat/6.0.36 - Error report
HTTP Status 401 - Login attempt failed. Please try again.
type: Status report
message: Login attempt failed. Please try again.
description: This request requires HTTP authentication.
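One common cause of a 401 like this with ConnectomeDB (an XNAT instance) is per-request authentication interacting badly with redirects; a frequently used alternative is to open a session first and pass the resulting cookie. This is a hedged sketch of the standard XNAT session pattern, with placeholder credentials, not verified against the current server:

```shell
# Open a session (USER/PASS are placeholders), then reuse its cookie:
JSESSION=$(curl -s -u "USER:PASS" "https://db.humanconnectome.org/data/JSESSION")
curl -s --cookie "JSESSIONID=$JSESSION" -O \
  "https://db.humanconnectome.org/data/archive/projects/HCP_1200/subjects/100307/experiments/100307_CREST/resources/100307_CREST/files/MNINonLinear/Results/rfMRI_REST1_LR/Movement_Regressors.txt"
```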


Just want to know if you have any recommendations to get the access of these 
files.

Thank you very much in advance

Sincerely,
Jin Li





Re: [HCP-Users] DeDriftAndResamplePipeline error

2019-04-14 Thread Glasser, Matthew
If you ran MR+FIX, you need to set these appropriately

MRFixConcatName="NONE"
MRFixNames="NONE"

And not set

fixNames="RS_fMRI_1 RS_fMRI_2" #Space delimited list or NONE

https://github.com/Washington-University/HCPpipelines/blob/master/Examples/Scripts/DeDriftAndResamplePipelineBatch.sh
Also it looks like line 124 needs an “s” on the end of the flag name to read 
--multirun-fix-concat-names=${MRFixConcatName}

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Sunday, April 14, 2019 at 10:56 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] DeDriftAndResamplePipeline error

Thanks a lot for your response.

I am running v4.0.0 now, and I have set up the script as follows:

HighResMesh="164"
LowResMesh="32"
RegName="MSMAll_InitalReg_2_d40_WRN"
DeDriftRegFiles="${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.L.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii@${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.R.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii"
ConcatRegName="MSMAll_Test"
Maps="sulc curvature corrThickness thickness"
MyelinMaps="MyelinMap SmoothedMyelinMap" #No _BC, this will be reapplied
MRFixConcatName="NONE"
MRFixNames="NONE"
#fixNames="rfMRI_REST1_LR rfMRI_REST1_RL rfMRI_REST2_LR rfMRI_REST2_RL" #Space 
delimited list or NONE
fixNames="RS_fMRI_1 RS_fMRI_2" #Space delimited list or NONE
#dontFixNames="tfMRI_WM_LR tfMRI_WM_RL tfMRI_GAMBLING_LR tfMRI_GAMBLING_RL 
tfMRI_MOTOR_LR tfMRI_MOTOR_RL tfMRI_LANGUAGE_LR tfMRI_LANGUAGE_RL 
tfMRI_SOCIAL_LR tfMRI_SOCIAL_RL tfMRI_RELATIONAL_LR tfMRI_RELATIONAL_RL 
tfMRI_EMOTION_LR tfMRI_EMOTION_RL" #Space delimited list or NONE
dontFixNames="NONE"
SmoothingFWHM="2" #Should equal previous grayordinates smoothing (because we 
are resampling from unsmoothed native mesh timeseries)
HighPass="0"
MotionRegression=TRUE
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab, Mode=2 octave
#MatlabMode="0" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab, Mode=2 
octave

But the script does not run and it is aborted with the following message:
DeDriftAndResamplePipeline.sh - ABORTING: unrecognized option: 
--multirun-fix-concat-name=NONE

I am attaching the log files.

Leah.


On Apr 14, 2019, at 11:10 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

In this case you do run it with the individual fMRI names and that doesn’t 
look like the version 4.0.0 example script...

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Marta Moreno 
mailto:mmorenoort...@icloud.com>>
Date: Sunday, April 14, 2019 at 10:06 PM
To: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] DeDriftAndResamplePipeline error

Dear Experts,

I have run DeDriftAndResamplePipelineBatch.sh from 
${StudyFolder}/${Subject}/scripts after running MSMAll and am getting the 
following error:

While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -metric-resample 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR.L.native.func.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Native/NTTMS_s002_170812.L.sphere.MSMAll_Test.native.surf.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/fsaverage_LR32k/NTTMS_s002_170812.L.sphere.32k_fs_LR.surf.gii
 ADAP_BARY_AREA 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_MSMAll_Test.L.atlasroi.32k_fs_LR.func.gii
 -area-surfs 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/T1w/Native/NTTMS_s002_170812.L.midthickness.native.surf.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/T1w/fsaverage_LR32k/NTTMS_s002_170812.L.midthickness_MSMAll_Test.32k_fs_LR.surf.gii
 -current-roi 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Native/NTTMS_s002_170812.L.roi.native.shape.gii

ERROR: NAME OF FILE: RS_fMRI_MR.L.native.func.gii
PATH TO FILE: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR

File does not exist.

I set up the script DeDriftAndResamplePipelineBatch.sh as follows 
(rfMRINames=“RS_fMRI_MR”, the concatenated name from MR+FIX):

HighResMesh="164"
LowResMesh="32"
RegName="MSMAll_InitalReg_2_d40_WRN"
DeDriftRegFiles="${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.L.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii@${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.R.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii"
ConcatRegName="MSMAll_Test"
Maps="sulc curvature corrThickness thickness"
MyelinMaps="MyelinMap SmoothedMyelinMap" #No _BC,

Re: [HCP-Users] DeDriftAndResamplePipeline error

2019-04-14 Thread Glasser, Matthew
In this case you do run it with the individual fMRI names and that doesn’t look 
like the version 4.0.0 example script...

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Marta Moreno 
mailto:mmorenoort...@icloud.com>>
Date: Sunday, April 14, 2019 at 10:06 PM
To: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] DeDriftAndResamplePipeline error

Dear Experts,

I have run DeDriftAndResamplePipelineBatch.sh from 
${StudyFolder}/${Subject}/scripts after running MSMAll and am getting the 
following error:

While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -metric-resample 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR.L.native.func.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Native/NTTMS_s002_170812.L.sphere.MSMAll_Test.native.surf.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/fsaverage_LR32k/NTTMS_s002_170812.L.sphere.32k_fs_LR.surf.gii
 ADAP_BARY_AREA 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_MSMAll_Test.L.atlasroi.32k_fs_LR.func.gii
 -area-surfs 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/T1w/Native/NTTMS_s002_170812.L.midthickness.native.surf.gii
 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/T1w/fsaverage_LR32k/NTTMS_s002_170812.L.midthickness_MSMAll_Test.32k_fs_LR.surf.gii
 -current-roi 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Native/NTTMS_s002_170812.L.roi.native.shape.gii

ERROR: NAME OF FILE: RS_fMRI_MR.L.native.func.gii
PATH TO FILE: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR

File does not exist.

I set up the script DeDriftAndResamplePipelineBatch.sh as follows 
(rfMRINames=“RS_fMRI_MR”, the concatenated name from MR+FIX):

HighResMesh="164"
LowResMesh="32"
RegName="MSMAll_InitalReg_2_d40_WRN"
DeDriftRegFiles="${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.L.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii@${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.R.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii"
ConcatRegName="MSMAll_Test"
Maps="sulc curvature corrThickness thickness"
MyelinMaps="MyelinMap SmoothedMyelinMap" #No _BC, this will be reapplied
rfMRINames="RS_fMRI_MR" #Space delimited list or NONE
tfMRINames="NONE"
SmoothingFWHM="2" #Should equal previous grayordinates smoothing (because we 
are resampling from unsmoothed native mesh timeseries)
HighPass="0"
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab

I am attaching the log files.

Thanks a lot!

Leah.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users





The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.



Re: [HCP-Users] Subject list for the HCP Q2 release

2019-04-14 Thread Glasser, Matthew
We might be able to dig that up, but perhaps if you contacted the authors they 
might simply have the list they used in an old script?  Also, it is worth 
keeping in mind that subjects who were released were occasionally later 
excluded for one reason or another.  Thus, it may not be possible to get the 
exact same set of subjects from the HCP DB now.  An alternative replication 
strategy would be to attempt to replicate the authors' analysis with a larger 
dataset and see if it still holds up.  This might end up making a more 
powerful replication statement and avoid the above issues.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Manasij Venkatesh mailto:mana...@umd.edu>>
Date: Sunday, April 14, 2019 at 11:02 AM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Subject list for the HCP Q2 release

Hello,

I'm trying to replicate some findings from this very interesting Nature 
Neuroscience paper:  https://www.nature.com/articles/nn.4135

The following is in the subject information:
>  HCP data. We used the Q2 HCP data release, which was all the HCP data 
> publicly available at the time that this project began. The full Q2 release 
> contains data on 142 healthy subjects; we restricted our analysis to subjects 
> for whom all six fMRI sessions were available (n = 126; 40 males, age 22–35).

Unfortunately, I'm unable to find the list of subjects in the Q2 release. More 
specifically, the subjects that had all six fMRI sessions that were recorded at 
that time. Can you please help me find these subjects?

Sincerely,
Manasij






Re: [HCP-Users] MSMAllPipeline error

2019-04-14 Thread Glasser, Matthew
You can delete the --verbose from all the cp lines.  These GNU-style long 
flags are not compatible with the BSD utilities built into macOS.
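As a concrete illustration of the portable short-option forms (the paths below are throwaway temp directories for demonstration, not HCP data):

```shell
demo_dir=$(mktemp -d)
# GNU long options such as 'cp --verbose' and 'mkdir --parents' are rejected
# by the BSD userland that ships with macOS; the short forms work on both.
mkdir -p "$demo_dir/a/b"                       # portable form of 'mkdir --parents'
printf 'hello\n' > "$demo_dir/a/file.txt"
cp -v "$demo_dir/a/file.txt" "$demo_dir/a/b/"  # '-v' is the portable verbose flag
```

On Linux the GNU long forms also work, so using the short forms keeps a script runnable on both platforms.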

You should download the entire Pipelines 4.0.0 release and follow the 
instructions there.  Running code from multiple versions is likely to create 
problems, though the macOS issues you have encountered thus far are likely 
still present.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Sunday, April 14, 2019 at 2:17 AM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>, Timothy 
Coalson mailto:tsc...@mst.edu>>
Subject: Re: [HCP-Users] MSMAllPipeline error

I downloaded the latest version of MSMAll to see if it is a version problem. The 
script now gives me the following error:
Sun Apr 14 03:11:18 EDT 2019 - MSMAllPipeline.sh - HCPPIPEDIR: 
/usr/local/bin/HCP/Connectome_Project_3_22/Pipelines
Sun Apr 14 03:11:18 EDT 2019 - MSMAllPipeline.sh - ABORTING: MSMCONFIGDIR 
environment variable must be set

How should I configure the MSMCONFIGDIR environment variable?
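For example, it can be exported before launching the pipeline. The install path below is copied from the HCPPIPEDIR reported in the log above; the MSMConfig folder name is an assumption to be checked against your HCPpipelines checkout:

```shell
# Path copied from the HCPPIPEDIR reported in the log above.
export HCPPIPEDIR=/usr/local/bin/HCP/Connectome_Project_3_22/Pipelines
# Assumed location of the MSM config files inside the pipelines tree;
# verify that this folder exists in your checkout before relying on it.
export MSMCONFIGDIR="${HCPPIPEDIR}/MSMConfig"
echo "$MSMCONFIGDIR"
```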

Thanks!

Leah.

On Apr 14, 2019, at 2:54 AM, Marta Moreno 
mailto:mmorenoort...@icloud.com>> wrote:

Thanks for your response!

Sorry to bother you, but I am still getting an error. Now it says:
cp: illegal option -- -
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
   cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file … target_directory

In cat MSMAllPipeline.sh.o31773, says:
(…)
Sun Apr 14 02:52:14 EDT 2019 - MSMAll.sh - RSNTargetFile: 
/usr/local/bin/gaurav_folder_new/HCP/Connectome_Project_3_22/Pipelines/global/templates/MSMAll/rfMRI_REST_Atlas_MSMAll_2_d41_WRN_DeDrift_hp2000_clean_PCA.ica_d40_ROW_vn/melodic_oIC.dscalar.nii
Sun Apr 14 02:52:14 EDT 2019 - MSMAll.sh - File: 
/usr/local/bin/gaurav_folder_new/HCP/Connectome_Project_3_22/Pipelines/global/templates/MSMAll/rfMRI_REST_Atlas_MSMAll_2_d41_WRN_DeDrift_hp2000_clean_PCA.ica_d40_ROW_vn/melodic_oIC.dscalar.nii
 EXISTS
Sun Apr 14 02:52:14 EDT 2019 - MSMAll.sh - RSNCostWeights: 
/usr/local/bin/gaurav_folder_new/HCP/Connectome_Project_3_22/Pipelines/global/templates/MSMAll/rfMRI_REST_Atlas_MSMAll_2_d41_WRN_DeDrift_hp2000_clean_PCA.ica_d40_ROW_vn/Weights.txt
Sun Apr 14 02:52:14 EDT 2019 - MSMAll.sh - File: 
/usr/local/bin/gaurav_folder_new/HCP/Connectome_Project_3_22/Pipelines/global/templates/MSMAll/rfMRI_REST_Atlas_MSMAll_2_d41_WRN_DeDrift_hp2000_clean_PCA.ica_d40_ROW_vn/Weights.txt
 EXISTS

Leah.


On Apr 13, 2019, at 4:45 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

https://github.com/Washington-University/HCPpipelines/blob/master/MSMAll/scripts/SingleSubjectConcat.sh

Line 473 apparently has to be -p on a Mac.
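A sketch of the kind of edit being suggested; the `--parents` long option is inferred from the `mkdir: illegal option` output reported in this thread, and `OutputFolder` is an illustrative placeholder rather than the script's actual variable:

```shell
# Before (GNU-only long option, fails under macOS's BSD mkdir):
#   mkdir --parents "$OutputFolder"
# After (short option, accepted by both GNU and BSD mkdir):
OutputFolder="$(mktemp -d)/MNINonLinear/Results/demo"  # placeholder path
mkdir -p "$OutputFolder"
```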

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 3:32 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response.

I changed line 353 to NO. Now I am getting a different error:
mkdir: illegal option -- -
usage: mkdir [-pv] [-m mode] directory …

In cat MSMAllPipeline.sh.o9037, it stops in:
(…)
Operating System: Apple OSX
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Using named parameters
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Study Folder: 
/Volumes/data/data3/NTTMS/NTTMS_s002
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Subject ID: 
NTTMS_s002_170812
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRI name list: 
RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - ICA+FIX highpass 
setting: 0
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Output fMRI Name: 
RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRI Proc String: 
_Atlas_hp0_clean
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Output Proc String: _vn
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Demean: YES
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Variance Normalization: 
NO
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Compute Variance 
Normalization: YES
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Revert Bias Field: NO
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - MATLAB run mode: 1
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - StudyFolder: 
/Volumes/data/data3/NTTMS/NTTMS_s002
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Subject: 
NTTMS_s002_170812
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRINames: RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - HighPass: 0
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - OutputfMRIName: 
RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRIProcSTRING: 
_Atlas_hp0_clean

Re: [HCP-Users] MSMAllPipeline error

2019-04-13 Thread Glasser, Matthew
https://github.com/Washington-University/HCPpipelines/blob/master/MSMAll/scripts/SingleSubjectConcat.sh

Line 473 apparently has to be -p on a Mac.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 3:32 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response.

I changed line 353 to NO. Now I am getting a different error:
mkdir: illegal option -- -
usage: mkdir [-pv] [-m mode] directory …

In cat MSMAllPipeline.sh.o9037, it stops in:
(…)
Operating System: Apple OSX
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Using named parameters
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Study Folder: 
/Volumes/data/data3/NTTMS/NTTMS_s002
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Subject ID: 
NTTMS_s002_170812
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRI name list: 
RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - ICA+FIX highpass 
setting: 0
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Output fMRI Name: 
RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRI Proc String: 
_Atlas_hp0_clean
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Output Proc String: _vn
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Demean: YES
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Variance Normalization: 
NO
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Compute Variance 
Normalization: YES
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Revert Bias Field: NO
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - MATLAB run mode: 1
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - StudyFolder: 
/Volumes/data/data3/NTTMS/NTTMS_s002
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Subject: 
NTTMS_s002_170812
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRINames: RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - HighPass: 0
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - OutputfMRIName: 
RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRIProcSTRING: 
_Atlas_hp0_clean
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - OutputProcSTRING: _vn
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Demean: YES
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - VarianceNormalization: 
NO
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - 
ComputeVarianceNormalization: YES
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - RevertBiasField: NO
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - MatlabRunMode: 1
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRINames: RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - OutputProcSTRING: _vn
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - AtlasFolder: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - OutputFolder: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - Caret7_Command: 
/Applications/workbench/bin_macosx64//wb_command
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - fMRIName: RS_fMRI_MR
Sat Apr 13 16:29:45 EDT 2019 - SingleSubjectConcat.sh - ResultsFolder: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR
Sat Apr 13 16:29:46 EDT 2019 - SingleSubjectConcat.sh - MATH: ((TCS - Mean))
parsed '((TCS - Mean))' as '(TCS-Mean)'

Leah.

On Apr 13, 2019, at 4:10 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

You could try setting line 353 to NO:

https://github.com/Washington-University/HCPpipelines/blob/master/MSMAll/MSMAllPipeline.sh

For more recent versions of FIX and MR+FIX, variance normalization is always 
already computed.  That said, it isn’t clear to me why it is failing to compute 
it again.  Perhaps you are not reporting the first error that occurs.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:55 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response.

I am now using the concatenated name, "RS_fMRI_MR", when I run MR ICA+FIX:
fMRINames="RS_fMRI_MR"
OutfMRIName="RS_fMRI_MR"
(…)

but I am still getting the same error:
While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -cifti-math '((TCS - Mean)) / max(VN,0.001)' 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn.dtseries.n

Re: [HCP-Users] MSMAllPipeline error

2019-04-13 Thread Glasser, Matthew
You could try setting line 353 to NO:

https://github.com/Washington-University/HCPpipelines/blob/master/MSMAll/MSMAllPipeline.sh

For more recent versions of FIX and MR+FIX, variance normalization is always 
already computed.  That said, it isn’t clear to me why it is failing to compute 
it again.  Perhaps you are not reporting the first error that occurs.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:55 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response.

I am now using the concatenated name, "RS_fMRI_MR", when I run MR ICA+FIX:
fMRINames="RS_fMRI_MR"
OutfMRIName="RS_fMRI_MR"
(…)

but I am still getting the same error:
While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -cifti-math '((TCS - Mean)) / max(VN,0.001)' 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn.dtseries.nii
 -var TCS 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean.dtseries.nii
 -var Mean 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_mean.dscalar.nii
 -select 1 1 -repeat -var VN 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii
 -select 1 1 -repeat

ERROR: failed to open file 
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii',
 file does not exist, or folder permissions prevent seeing it

Leah.

***
Leah Moreno, PhD
Research Scientist
Division of Experimental Therapeutics
Department of Psychiatry
Columbia University Medical Center
1051 Riverside Drive, Unit 21
New York, NY 10032
phone: (914) 218-7311
email: mm4...@cumc.columbia.edu<mailto:mm4...@cumc.columbia.edu>

On Apr 13, 2019, at 3:39 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

If you ran MR+FIX, your data are already concatenated for MSMAll, so just 
provide the concatenated fMRIName from the MR+FIX run.  It should work then.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:37 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response!

I changed the setup as follows:
fMRINames="RS_fMRI_1@RS_fMRI_2"
OutfMRIName="RS_fMRI_1@RS_fMRI_2"
HighPass="0"
fMRIProcSTRING="_Atlas_hp0_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab

but I am still getting the same error:
While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -cifti-math '((TCS - Mean)) / max(VN,0.001)' 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_vn.dtseries.nii
 -var TCS 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean.dtseries.nii
 -var Mean 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_mean.dscalar.nii
 -select 1 1 -repeat -var VN 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_vn_tempcompute.dscalar.nii
 -select 1 1 -repeat

ERROR: failed to open file 
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_vn_tempcompute.dscalar.nii',
 file does not exist, or folder permissions prevent seeing it

Leah.

On Apr 13, 2019, at 3:22 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

You can just put your MR+FIX concatenated name in for the fMRIName and 
OutfMRIName.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:15 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response!

I am running the following script MSMAllPipelineBatch.sh from 
${StudyFolder}/${Subject}/scripts after running MR ICA+FIX with success

I set up the script as follows:
fMRINames="

Re: [HCP-Users] MSMAllPipeline error

2019-04-13 Thread Glasser, Matthew
If you ran MR+FIX, your data are already concatenated for MSMAll, so just 
provide the concatenated fMRIName from the MR+FIX run.  It should work then.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:37 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response!

I changed the setup as follows:
fMRINames="RS_fMRI_1@RS_fMRI_2"
OutfMRIName="RS_fMRI_1@RS_fMRI_2"
HighPass="0"
fMRIProcSTRING="_Atlas_hp0_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab

but I am still getting the same error:
While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -cifti-math '((TCS - Mean)) / max(VN,0.001)' 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_vn.dtseries.nii
 -var TCS 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean.dtseries.nii
 -var Mean 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_mean.dscalar.nii
 -select 1 1 -repeat -var VN 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_vn_tempcompute.dscalar.nii
 -select 1 1 -repeat

ERROR: failed to open file 
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_1/RS_fMRI_1_Atlas_hp0_clean_vn_tempcompute.dscalar.nii',
 file does not exist, or folder permissions prevent seeing it

Leah.

On Apr 13, 2019, at 3:22 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

You can just put your MR+FIX concatenated name in for the fMRIName and 
OutfMRIName.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:15 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response!

I am running the following script MSMAllPipelineBatch.sh from 
${StudyFolder}/${Subject}/scripts after running MR ICA+FIX with success

I set up the script as follows:
fMRINames="RS_fMRI_MR"
OutfMRIName="RS_fMRI_MR_REST"
HighPass="0"
fMRIProcSTRING="_Atlas_hp0_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab

In "cat MSMAllPipeline.sh.o3905", it does not complete the following step:
Sat Apr 13 15:09:36 EDT 2019 - SingleSubjectConcat.sh - OutputVN: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii

(…)

And stops in:
>> Sat Apr 13 15:09:44 EDT 2019 - SingleSubjectConcat.sh - 
>> ComputeVN('/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean.dtseries.nii','NONE','/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_hp0.ica/filtered_func_data.ica/melodic_mix','/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_hp0.ica/.fix','/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii','/Applications/workbench/bin_macosx64//wb_command');
Sat Apr 13 15:09:44 EDT 2019 - SingleSubjectConcat.sh - MATH: ((TCS - Mean)) / 
max(VN,0.001)

Leah.


On Apr 13, 2019, at 3:02 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

I guess specify how you called the MSMAll pipeline and perhaps that will 
provide a clue to the error if there are no other errors.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 1:58 PM
To: "Harms, Michael" mailto:mha...@wustl.edu>>
Cc: Matt Glasser mailto:glass...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Dear Experts,

I re-ran MR ICA+FIX with hp=0 without errors, but I am getting the same error as 
before when running MSMAll; i.e., the file "*vn_tempcompute.dscalar.nii" does 
not exist. Please advise.

ERROR: failed to open file 
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_1

Re: [HCP-Users] MSMAllPipeline error

2019-04-13 Thread Glasser, Matthew
You can just put your MR+FIX concatenated name in for the fMRIName and 
OutfMRIName.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 2:15 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "Harms, Michael" mailto:mha...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Thanks for your response!

I am running the following script MSMAllPipelineBatch.sh from 
${StudyFolder}/${Subject}/scripts after running MR ICA+FIX with success

I set up the script as follows:
fMRINames="RS_fMRI_MR"
OutfMRIName="RS_fMRI_MR_REST"
HighPass="0"
fMRIProcSTRING="_Atlas_hp0_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab

In "cat MSMAllPipeline.sh.o3905", it does not complete the following step:
Sat Apr 13 15:09:36 EDT 2019 - SingleSubjectConcat.sh - OutputVN: 
/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii

(…)

And stops in:
>> Sat Apr 13 15:09:44 EDT 2019 - SingleSubjectConcat.sh - 
>> ComputeVN('/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean.dtseries.nii','NONE','/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_hp0.ica/filtered_func_data.ica/melodic_mix','/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_hp0.ica/.fix','/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii','/Applications/workbench/bin_macosx64//wb_command');
Sat Apr 13 15:09:44 EDT 2019 - SingleSubjectConcat.sh - MATH: ((TCS - Mean)) / 
max(VN,0.001)

Leah.


On Apr 13, 2019, at 3:02 PM, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:

I guess specify how you called the MSMAll pipeline and perhaps that will 
provide a clue to the error if there are no other errors.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 1:58 PM
To: "Harms, Michael" mailto:mha...@wustl.edu>>
Cc: Matt Glasser mailto:glass...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Dear Experts,

I re-ran MR ICA+FIX with hp=0 without errors, but I am getting the same error as 
before when running MSMAll; i.e., the file "*vn_tempcompute.dscalar.nii" does 
not exist. Please advise.

ERROR: failed to open file 
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii',
 file does not exist, or folder permissions prevent seeing it

The file that exists is: RS_fMRI_MR_Atlas_hp0_clean_vn.dscalar.nii

Thanks!,

Leah.



On Apr 13, 2019, at 12:33 PM, Harms, Michael 
mailto:mha...@wustl.edu>> wrote:


We extended that feature such that it should be an accepted option for all the 
"ICAFIX"-related scripts, but we haven't had a chance yet to extend it to the 
context of MSMAll and TaskAnalysis.  Hopefully in the near future...

--
Michael Harms, Ph.D.

---

Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110      Email: 
mha...@wustl.edu<mailto:mha...@wustl.edu>

On 4/13/19, 11:28 AM, 
"hcp-users-boun...@humanconnectome.org<mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of Glasser, Matthew" 
mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of glass...@wustl.edu<mailto:glass...@wustl.edu>> wrote:

I wouldn't use hp=pd2 unless you know what you are doing, as that option
has not been fully tested.  I run with hp=0.

Matt.

On 4/13/19, 10:41 AM, 
"hcp-users-boun...@humanconnectome.org<mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of
Marta Moreno" 
mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of
mmorenoort...@icloud.com<mailto:mmorenoort...@icloud.com>> wrote:

Dear Experts,

I am running the following script MSMAllPipelineBatch.sh from
${StudyFolder}/${Subject}/scripts after running MR ICA+FIX with success,
and I am getting the following error:

ERROR: failed to open file
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Resul
ts/RS_fMRI_MR/RS_fMRI_MR_Atlas_hppd2_clean_vn_tempcompute.dscalar.nii',
file does not exist, or folder permissions prevent seeing it

I set up the

Re: [HCP-Users] MSMAllPipeline error

2019-04-13 Thread Glasser, Matthew
I guess specify how you called the MSMAll pipeline and perhaps that will 
provide a clue to the error if there are no other errors.

Matt.

From: Marta Moreno mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 13, 2019 at 1:58 PM
To: "Harms, Michael" mailto:mha...@wustl.edu>>
Cc: Matt Glasser mailto:glass...@wustl.edu>>, HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MSMAllPipeline error

Dear Experts,

I re-ran MR ICA+FIX with hp=0 without errors, but I am getting the same error as 
before when running MSMAll; i.e., the file "*vn_tempcompute.dscalar.nii" does 
not exist. Please advise.

ERROR: failed to open file 
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Results/RS_fMRI_MR/RS_fMRI_MR_Atlas_hp0_clean_vn_tempcompute.dscalar.nii',
 file does not exist, or folder permissions prevent seeing it

The file that exists is: RS_fMRI_MR_Atlas_hp0_clean_vn.dscalar.nii

Thanks!,

Leah.



On Apr 13, 2019, at 12:33 PM, Harms, Michael 
mailto:mha...@wustl.edu>> wrote:


We extended that feature such that it should be an accepted option for all the 
"ICAFIX"-related scripts, but we haven't had a chance yet to extend it to the 
context of MSMAll and TaskAnalysis.  Hopefully in the near future...

--
Michael Harms, Ph.D.

---

Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110  Email: 
mha...@wustl.edu<mailto:mha...@wustl.edu>

On 4/13/19, 11:28 AM, 
"hcp-users-boun...@humanconnectome.org<mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of Glasser, Matthew" 
mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of glass...@wustl.edu<mailto:glass...@wustl.edu>> wrote:

I wouldn't use hp=pd2 unless you know what you are doing, as that option
has not been fully tested.  I run with hp=0.

Matt.

On 4/13/19, 10:41 AM, 
"hcp-users-boun...@humanconnectome.org<mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of
Marta Moreno" 
mailto:hcp-users-boun...@humanconnectome.org>
 on behalf of
mmorenoort...@icloud.com<mailto:mmorenoort...@icloud.com>> wrote:

Dear Experts,

I am running the following script MSMAllPipelineBatch.sh from
${StudyFolder}/${Subject}/scripts after running MR ICA+FIX with success,
and I am getting the following error:

ERROR: failed to open file
'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Resul
ts/RS_fMRI_MR/RS_fMRI_MR_Atlas_hppd2_clean_vn_tempcompute.dscalar.nii',
file does not exist, or folder permissions prevent seeing it

I set up the script as follows:

fMRINames="RS_fMRI_MR"
OutfMRIName="RS_fMRI_MR_REST"
HighPass="pd2"
fMRIProcSTRING="_Atlas_hppd2_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab

I checked, and the file called '*tempcompute.dscalar.nii' is not there.

What am I doing wrong? Something went wrong in the previous step while
running MR ICA+FIX that I am not aware of?

Thanks a lot!

Leah.














Re: [HCP-Users] MSMAllPipeline error

2019-04-13 Thread Glasser, Matthew
I wouldn't use hp=pd2 unless you know what you are doing, as that option
has not been fully tested.  I run with hp=0.

Matt.

On 4/13/19, 10:41 AM, "hcp-users-boun...@humanconnectome.org on behalf of
Marta Moreno"  wrote:

>Dear Experts,
>
>I am running the following script MSMAllPipelineBatch.sh from
>${StudyFolder}/${Subject}/scripts after running MR ICA+FIX with success,
>and I am getting the following error:
>
>ERROR: failed to open file
>'/Volumes/data/data3/NTTMS/NTTMS_s002/NTTMS_s002_170812/MNINonLinear/Resul
>ts/RS_fMRI_MR/RS_fMRI_MR_Atlas_hppd2_clean_vn_tempcompute.dscalar.nii',
>file does not exist, or folder permissions prevent seeing it
>
>I set up the script as follow:
>
>fMRINames="RS_fMRI_MR"
>OutfMRIName="RS_fMRI_MR_REST"
>HighPass="pd2"
>fMRIProcSTRING="_Atlas_hppd2_clean"
>MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
>RegName="MSMAll_InitalReg"
>HighResMesh="164"
>LowResMesh="32"
>InRegName="MSMSulc"
>MatlabMode="1" #Mode=0 compiled Matlab, Mode=1 interpreted Matlab
>
>I checked and the file called '*tempcompute.dscalar.nii' is not there.
>
>What am I doing wrong? Something went wrong in the previous step while
>running MR ICA+FIX that I am not aware of?
>
>Thanks a lot!
>
>Leah.
>
>
>






Re: [HCP-Users] beta bias correction

2019-04-12 Thread Glasser, Matthew
This wasn’t done on any of the released HCP data.  When we release temporal 
ICA cleaned fMRI data, I plan to include this.

The version of the HCP pipelines that has this correction was coded up after 
the HCP data were processed, so it is not in the 3T HCP data.  I believe the 7T 
data do have this in all of the fMRI.  Future lifespan HCP data releases will 
also have this correction.

It is possible to scale any of the data appropriately after the fact, and this 
issue affects only things like variance maps and betas (i.e., not z-stats).
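Matt's point about after-the-fact scaling can be sketched numerically. A minimal illustration with made-up beta and bias-field values (none of these numbers come from HCP data):

```shell
# Toy beta values (col 1) and scaled bias field values (col 2) at the
# same voxels; dividing betas by the reference intensity removes the
# multiplicative bias, while ratio statistics like z-stats are unaffected.
corrected=$(printf '2.0 1.0\n4.0 2.0\n6.0 3.0\n' | awk '{printf "%.1f\n", $1 / $2}')
echo "$corrected"   # every line prints 2.0
```

The same division applies whether the maps live in NIFTI or CIFTI files, once both are on the same grid.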

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Aaron R mailto:aaro...@gmail.com>>
Date: Friday, April 12, 2019 at 5:17 PM
To: hcp-users 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] beta bias correction

Dear HCP Users,

Regarding bias field correction of fMRI data, in Glasser et al. (Supplementary 
Methods For
A Multi-modal Parcellation of Human Cerebral Cortex), it states:

"The scaled [bias] field was used as a reference BOLD intensity image when 
computing bias free beta effect size maps"

Was this done only on RS data, or also task HCP betas released? Where in the 
processing and how was this accomplished? I didn't find anything in the 
Pipelines.

Thank you,
Aaron






Re: [HCP-Users] Surface Parcelations Area

2019-04-12 Thread Glasser, Matthew
wb_command -cifti-weighted-stats with the areas as binary ROIs, and the 
appropriate surfaces in the case of individuals or group average vertex areas 
in the case of the group, will give the surface areas.  I don’t know that you 
can do it directly from a dlabel, but you can convert a dlabel to binary ROIs 
with wb_command -cifti-all-labels-to-rois.
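The underlying arithmetic is just a per-label sum of vertex areas. A toy sketch (the vertex areas and labels are invented for illustration; real usage would go through the wb_command calls above):

```shell
# Column 1: parcel label per vertex; column 2: vertex area in mm^2.
# Summing areas within each label is what weighting binary ROIs by
# vertex areas computes.
areas=$(printf '1 1.0\n1 1.5\n2 2.0\n2 0.5\n2 1.0\n' |
  awk '{a[$1] += $2} END {for (l in a) printf "parcel %s: %.1f mm^2\n", l, a[l]}')
echo "$areas"
```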

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of "Kashyap, Amrit" mailto:akas...@emory.edu>>
Date: Friday, April 12, 2019 at 4:28 PM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Surface Parcelations Area


Hey HCP users,

I was wondering if you all knew of a quick method of getting the surface area 
of a cifti file (dlabel). I noticed there is a cifti-label-adjacency command 
which gets you the boundary length between two areas, which is pretty neat, and 
I was hoping there might be an easy command that would give me the surface area 
of each parcellation.

Thanks

Amrit



This e-mail message (including any attachments) is for the sole use of
the intended recipient(s) and may contain confidential and privileged
information. If the reader of this message is not the intended
recipient, you are hereby notified that any dissemination, distribution
or copying of this message (including any attachments) is strictly
prohibited.

If you have received this message in error, please contact
the sender by reply e-mail message and destroy all copies of the
original message (including attachments).






Re: [HCP-Users] connectome for monkeys // fiber trajectories

2019-04-11 Thread Glasser, Matthew
The main current limitation with the probabilistic trajectory feature (which 
tracks exactly which fiber is chosen by the tractography algorithm out of the 
1-3 modeled fibers in each voxel) is that it only works with single threaded 
probtrackx2, and not with the GPU accelerated version.

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Timothy Coalson mailto:tsc...@mst.edu>>
Date: Thursday, April 11, 2019 at 1:41 PM
To: DE CASTRO Vanessa 
mailto:vanessa.decas...@cnrs.fr>>
Cc: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] connectome for monkeys // fiber trajectories

Connectome workbench is agnostic to species, though there are some defaults 
(identification symbol size) which default to a size suited to the human brain. 
 We frequently use it with primate data.

Workbench can display probabilistic trajectories generated with fsl's 
bedpostx/probtrackx tools (for the specific type of output that saves the fiber 
orientations used per-seed and per-voxel, matrix4 I think), but I don't think 
we have made a tutorial for it (and it still has some rough edges, as it hasn't 
been a priority for us).  The wb_command -convert-matrix4-to-workbench-sparse 
and -convert-fiber-orientations (or -estimate-fiber-binghams for starting with 
just the direction samples, but is less accurate) commands are the starting 
points - once you have converted the files to workbench formats with the 
expected extensions (currently .trajTEMP.wbsparse and .fiberTEMP.nii), loaded 
them into wb_view, enabled them in features or layers (I don't recall exactly 
how), clicking on a seed point will display the trajectories to/from it.

There is also wb_command -probtrackx-dot-convert, which allows converting the 
other probtrackx matrix types to cifti files (can show how much each 
voxel/vertex is used for a seed, but can't be displayed the way the trajectory 
files can).

Tim


On Thu, Apr 11, 2019 at 9:23 AM DE CASTRO Vanessa 
mailto:vanessa.decas...@cnrs.fr>> wrote:
Hi! I've started to work with the human connectome workbench, and I was 
wondering if it is ready to use with monkeys as well, like Caret.

And I also read in the tutorial that you are already working on a new feature, 
probabilistic fiber trajectories... how soon will it come? :D

Thank you very much for everything.

Sincerely yours,

--

Vanessa DeCastro, PhD

Centre de Recherche Cerveau et Cognition - UMR 5549 - CNRS
Pavillon Baudot CHU Purpan
31052 Toulouse Cedex 03, France








Re: [HCP-Users] An error about subcorticalprocessinginHCPfMRI-surface pipeline

2019-04-10 Thread Glasser, Matthew
It is hard to keep track of these threads when they don’t contain the prior 
info below.  I did find that you are not using FSL 6.0.1, so I would retry with 
that.

Matt.

From: Qunjun Liang mailto:liangqun...@foxmail.com>>
Date: Wednesday, April 10, 2019 at 9:24 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] An error about subcorticalprocessinginHCPfMRI-surface 
pipeline

Hi Matt,

I am very grateful for your kindness.

After looking through the files in the working directory with fslinfo, I found 
an image named T1_store.2.nii.gz, which I think is the first file generated in 
the wrong space (91x108x91).

Given the name of the file, I traced the command back in the fMRIVolume 
script. Finally, I found Line 126 in OneStepResampling.sh, which is the command 
that generates the file.
#
${FSLDIR}/bin/flirt -interp spline -in ${T1wImage} -ref ${T1wImage} 
-applyisoxfm ${FinalfMRIResolution} -out 
${WD}/${T1wImageFile}.${FinalfMRIResolution}
#---

So I tested this command independently, using the image named in its log file. 
Again, the result was generated in 91x108x91 space. I noticed that in this 
script the fMRI image is resampled with T1_store.2.nii.gz as the reference, so 
I think the wrong space of T1_store.2.nii.gz also affected the resampled fMRI 
image.

However, I noticed a comment above Line 126 in OneStepResampling.sh:
#--
NB: don't use FLIRT to do spline interpolation with -applyisoxfm for the 2mm 
and 1 mm cases because it doesn't know the peculiarities of the MNI template 
FOVs
#--
I wonder if this gives a clue to the problem?
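The FOV "peculiarity" can be made concrete with a bit of integer arithmetic. Assuming a 0.7 mm MNI-space grid with 311 voxels along y (as in the HCP 0.7 mm templates); treat this as an illustration of the rounding, not as FLIRT's exact code:

```shell
# y-FOV: 311 voxels x 0.7 mm = 217.7 mm, kept in tenths of a mm here
# to stay in integer arithmetic.
fov_y_tenths=$(( 311 * 7 ))            # 2177, i.e. 217.7 mm
naive_dim_y=$(( fov_y_tenths / 20 ))   # truncating FOV / 2.0 mm
echo "naive y dim: ${naive_dim_y}"     # 108, but the MNI 2 mm template has 109
```

Truncating 217.7 mm / 2.0 mm gives 108 slices, one short of the 109 in the standard MNI 2 mm grid, which would explain a 91x108x91 output.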

Sincerely,
Qunjun





Re: [HCP-Users] Query: Error: Spin echo fieldmap has different dimensions than scout image, this requires a manual fix

2019-04-10 Thread Glasser, Matthew
If you run fslhd on the SBRef and SE fieldmap images, are they different?  
Ideally these are matched exactly on the scanner.
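A quick scripted version of that check might look like this; the dim values below are hypothetical stand-ins for what fslhd would print for each image:

```shell
# Hypothetical dim1-dim3 (spatial) values for the scout and the SE fieldmap.
sbref_dims="90 104 72"
se_fmap_dims="90 104 60"

if [ "${sbref_dims}" = "${se_fmap_dims}" ]; then
    echo "spatial dims match"
else
    echo "spatial dims differ -- this is what triggers the manual-fix error"
fi
```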

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of "Bourguignon, Nicolas" 
mailto:nicolas.bourguig...@uconn.edu>>
Date: Wednesday, April 10, 2019 at 3:40 PM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Query: Error: Spin echo fieldmap has different dimensions 
than scout image, this requires a manual fix


Hi HCP people,

I'm still a newbie with the HCP pipelines, so please excuse me if my 
question/comment is trivial.

I'm currently attempting to run the fMRIVolume pipeline on an fMRI dataset using 
spin echo field maps (AP - PA). This pipeline worked fine the very first time I 
ran it, but now it stops when it calls TopupPreprocessingAll.sh, with the 
following error:

Error: Spin echo fieldmap has different dimensions than scout image, this 
requires a manual fix.

A more complete version of the report is:

Wed Apr 10 15:44:04 EDT 2019 - MotionCorrection.sh: Change names of all 
matrices in OutputMotionMatrixFolder
Wed Apr 10 15:44:07 EDT 2019 - MotionCorrection.sh: Run the Derive function to 
generate appropriate regressors from the par file
Wed Apr 10 15:44:30 EDT 2019 - MotionCorrection.sh: END
Wed Apr 10 15:44:30 EDT 2019 - GenericfMRIVolumeProcessingPipeline.sh: EPI 
Distortion Correction and EPI to T1w Registration
Wed Apr 10 15:44:30 EDT 2019 - GenericfMRIVolumeProcessingPipeline.sh: mkdir -p 
/Users/nib19005/Desktop/2019/TMS/BIDS/TMS_BIDS3/Bids/01/MotorLoc_AP/DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased
Wed Apr 10 15:44:32 EDT 2019 - 
DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh: START
Wed Apr 10 15:44:32 EDT 2019 - TopupPreprocessingAll.sh: START: Topup Field Map 
Generation and Gradient Unwarping
Wed Apr 10 15:44:32 EDT 2019 - TopupPreprocessingAll.sh: Error: Spin echo 
fieldmap has different dimensions than scout image, this requires a manual fix

I've looked for similar issues in the past but couldn't find any. I have 
run the PreFreeSurfer, FreeSurfer and PostFreeSurfer pipelines before on the 
structural images.

I've thought about changing the field map size myself, but I still wonder why 
things worked in the past and not anymore. I'm willing to share whatever bit of 
code/data might be helpful to figure out the issue, but I prefer to wait until 
you tell me what you need exactly.

Thanks!

Nicolas






Re: [HCP-Users] An error about subcortical processinginHCPfMRI-surface pipeline

2019-04-10 Thread Glasser, Matthew
I agree that is very strange.  If you were to rerun fMRIVolume and fMRISurface 
do you get the same issue?  You could also try extracting the first volume and 
feeding it in as an SBRef manually to see if that fixes things.
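Extracting the first volume as a manual SBRef can be done with fslroi (`fslroi <input> <output> <tmin> <tsize>`); a sketch with an illustrative filename, guarded so it only runs where FSL is installed:

```shell
func=FunTask_AP.nii.gz   # illustrative input series name

# Take 1 volume starting at index 0 to serve as the SBRef.
if command -v fslroi >/dev/null 2>&1; then
    fslroi "${func}" FunTask_AP_SBRef.nii.gz 0 1
else
    echo "fslroi not on PATH; would run: fslroi ${func} FunTask_AP_SBRef.nii.gz 0 1"
fi
```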

Matt.

From: Qunjun Liang mailto:liangqun...@foxmail.com>>
Date: Tuesday, April 9, 2019 at 10:38 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] An error about subcortical processinginHCPfMRI-surface 
pipeline

Hi Matt,

Sorry to bother you again.

Following your recommendation, I checked the headers of those two images from 
the error. I found that the spatial dimensions of FunTask_AP.nii.gz are 
91x108x91, while ROIs.2.nii.gz is 91x109x91. I then compared this against our 
other datasets preprocessed with the HCP pipeline and confirmed that the y 
dimension of 108 in FunTask_AP.nii.gz is a wrong number.

For now, I still have no idea what caused this error. Could you give me a clue?

Sincerely,
Qunjun

-- Original ------
From:  "Glasser, Matthew"mailto:glass...@wustl.edu>>;
Date:  Sat, Apr 6, 2019 12:13 PM
To:  "Qunjun Liang"mailto:liangqun...@foxmail.com>>;
Cc:  
"hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>"mailto:hcp-users@humanconnectome.org>>;
Subject:  Re: [HCP-Users] An error about subcortical 
processinginHCPfMRI-surface pipeline

The files in the error message.

Matt.

From: Qunjun Liang mailto:liangqun...@foxmail.com>>
Date: Friday, April 5, 2019 at 11:10 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] An error about subcortical processing 
inHCPfMRI-surface pipeline

Hi Matt,

Yes, we don't have an SBRef image, because at the beginning of this experiment 
the sequence was set up following the UK Biobank Project. BTW, the scanner is a 
Siemens 3T Trio.

Sorry, I'm not sure exactly which files you mean... but I searched the 
MNINonLinear/ directory and found a file named {task name}_SBRef.nii.gz, so I 
ran fslhd on it. The result showed:
#---
filename 
/media/LQJ/Social_Navigation/Social_nifti/201806138_hrw/MNINonLinear/Results/FunTask_AP/FunTask_AP_SBRef.nii.gz
size of header 348
data_type FLOAT32
dim0  3
dim1  91
dim2  108
dim3  91
dim4  1
dim5  1
dim6  1
dim7  1
vox_units mm
time_units s
datatype 16
nbyper  4
bitpix  32
pixdim0  -1.00
pixdim1  2.00
pixdim2  2.00
pixdim3  2.00
pixdim4  1.20
pixdim5  0.00
pixdim6  0.00
pixdim7  0.00
vox_offset 352
cal_max  0.00
cal_min  0.00
scl_slope 1.00
scl_inter 0.00
phase_dim 0
freq_dim 0
slice_dim 0
slice_name Unknown
slice_code 0
slice_start 0
slice_end 0
slice_duration 0.00
toffset  0.00
intent  Unknown
intent_code 0
intent_name
intent_p1 0.00
intent_p2 0.00
intent_p3 0.00
qform_name MNI_152
qform_code 4
qto_xyz:1 -2.00 0.00 -0.00 90.00
qto_xyz:2 0.00 2.00 -0.00 -126.00
qto_xyz:3 0.00 0.00 2.00 -72.00
qto_xyz:4 0.00 0.00 0.00 1.00
qform_xorient Right-to-Left
qform_yorient Posterior-to-Anterior
qform_zorient Inferior-to-Superior
sform_name MNI_152
sform_code 4
sto_xyz:1 -2.00 0.00 0.00 90.00
sto_xyz:2 0.00 2.00 0.00 -126.00
sto_xyz:3 0.00 0.00 2.00 -72.00
sto_xyz:4 0.00 0.00 0.00 1.00
sform_xorient Right-to-Left
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior
file_type NIFTI-1+
file_code 1
descrip  FSL5.0
aux_file
#---

Is this the file you mentioned, or did I get the wrong one?

Sincerely,
Qunjun

-- Original --
From:  "Glasser, Matthew"mailto:glass...@wustl.edu>>;
Date:  Sat, Apr 6, 2019 11:32 AM
To:  "Qunjun Liang"mailto:liangqun...@foxmail.com>>;
Cc:  
"hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>"mailto:hcp-users@humanconnectome.org>>;
Subject:  Re: [HCP-Users] An error about subcortical processing 
inHCPfMRI-surface pipeline

Do you not have an SBRef image?  I wonder if that feature is not working.

You could paste in fslhd from those two files.

Matt.

From: Qunjun Liang mailto:liangqun...@foxmail.com>>
Date: Friday, April 5, 2019 at 10:23 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Subject: Re: [HCP-Users] An error about subcortical processing in 
HCPfMRI-surface pipeline

Hi Matt,

Thank you for the prompt reply.

I have packed the batch script I used to call PreFreeSufer, PostFreeSufer, 
fMRIVolume and fMRISurface, as well as the log file (.o and .e) after running 
the pip

Re: [HCP-Users] Some questions about the HCP structural processing pipeline

2019-04-09 Thread Glasser, Matthew
How did YOU call the FreeSurfer Pipeline in your subject with an issue?

Matt.

From: Aaron C mailto:aaroncr...@outlook.com>>
Date: Tuesday, April 9, 2019 at 8:42 AM
To: Matt Glasser mailto:glass...@wustl.edu>>, 
"hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Some questions about the HCP structural processing 
pipeline

Hi Matt,

I called the FreeSurfer pipeline using the script "FreeSurferPipelineBatch.sh" 
in the "Examples" folder. Do you mean that these brain extracted T1w and T2w 
images "T1w_acpc_dc_restore_brain.nii.gz" and 
"T2w_acpc_dc_restore_brain.nii.gz" were actually not used in the FreeSurfer 
pipeline? Thank you.

Aaron


From: Glasser, Matthew mailto:glass...@wustl.edu>>
Sent: Monday, April 8, 2019 10:50 PM
To: Aaron C; hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Some questions about the HCP structural processing 
pipeline

The brain masked file should not be used for surface estimation.  How are you 
calling the FreeSurfer pipeline?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Aaron C mailto:aaroncr...@outlook.com>>
Date: Monday, April 8, 2019 at 9:41 PM
To: "hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Some questions about the HCP structural processing pipeline

Dear HCP experts,

I have some questions about the HCP structural processing pipeline.

  1.  The brain extracted file "T1w_acpc_dc_restore_brain.nii.gz" in the T1w 
folder has excessive voxels removed in the cortical surface, which resulted in 
erroneous pial surface estimation. Is there a way that I could adjust any 
parameters of brain extraction in the PreFreeSurfer.sh file to have a larger 
brain mask?
  2.  There is another file "brainmask_fs.nii.gz". Is this the same mask 
derived from "T1w_acpc_dc_restore_brain.nii.gz" and 
"T2w_acpc_dc_restore_brain.nii.gz"?
  3.  In the structural processing QC scene, is there a way that I could also 
display the boundary of the extracted brain alongside with pial and white 
surfaces in the coronal brain?

Thank you.











Re: [HCP-Users] Some questions about the HCP structural processing pipeline

2019-04-08 Thread Glasser, Matthew
The brain masked file should not be used for surface estimation.  How are you 
calling the FreeSurfer pipeline?

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Aaron C mailto:aaroncr...@outlook.com>>
Date: Monday, April 8, 2019 at 9:41 PM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Some questions about the HCP structural processing pipeline

Dear HCP experts,

I have some questions about the HCP structural processing pipeline.

  1.  The brain extracted file "T1w_acpc_dc_restore_brain.nii.gz" in the T1w 
folder has excessive voxels removed in the cortical surface, which resulted in 
erroneous pial surface estimation. Is there a way that I could adjust any 
parameters of brain extraction in the PreFreeSurfer.sh file to have a larger 
brain mask?
  2.  There is another file "brainmask_fs.nii.gz". Is this the same mask 
derived from "T1w_acpc_dc_restore_brain.nii.gz" and 
"T2w_acpc_dc_restore_brain.nii.gz"?
  3.  In the structural processing QC scene, is there a way that I could also 
display the boundary of the extracted brain alongside with pial and white 
surfaces in the coronal brain?

Thank you.






Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing taking suspiciously long while

2019-04-08 Thread Glasser, Matthew
Thanks for letting us know.

Matt.

From: Shachar Gal mailto:gal.shac...@gmail.com>>
Date: Monday, April 8, 2019 at 6:34 AM
To: Timothy Coalson mailto:tsc...@mst.edu>>
Cc: Matt Glasser mailto:glass...@wustl.edu>>, Ido Tavor 
mailto:idota...@gmail.com>>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Hey Tim,

Sorry for the belated response.
The OMP_NUM_THREADS solution worked great! It runs in a few minutes now.

Thanks,
Shachar


On Fri, Apr 5, 2019, 21:12 Timothy Coalson 
mailto:tsc...@mst.edu> wrote:
Have you had a chance to try this?  If nothing else, you can export 
OMP_NUM_THREADS=1 and see if that command finishes in less than an hour.

Tim


On Tue, Apr 2, 2019 at 2:47 AM Shachar Gal 
mailto:gal.shac...@gmail.com>> wrote:
sounds reasonable
we'll try to run the process again and restrict it to one socket, and see if it 
speeds things up, and let you know

thanks!

Shachar

On Mon, 1 Apr 2019 at 23:04, Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
This finished in just over 2 minutes on my quad-core, so I am betting it is the 
multi-socket or other architecture issue:

tim@timsdev:~/Downloads$ time nice released_wb_command -cifti-resample 
rfMRI_REST_AP_temp_subject_smooth.dtseries.nii COLUMN 
rfMRI_REST_AP_temp_template.dlabel.nii COLUMN ADAP_BARY_AREA CUBIC 
test.dtseries.nii -volume-predilate 10

real 2m12.248s
user 16m42.659s
sys 0m1.365s

tim@timsdev:~/Downloads$ released_wb_command -version
Connectome Workbench
Type: Command Line Application
Version: 1.3.2
Qt Compiled Version: 5.7.0
Qt Runtime Version: 5.7.0
Commit: 1c774d37d6cb5a179e0de3cbe1081cd1f698963a
Commit Date: 2018-08-28 08:50:51 -0500
Compiled with OpenMP: YES
Compiler: g++ (/home/caret/gcc/install/gcc-4.8.5/bin)
Compiler Version: 4.8.5
Compiled Debug: NO
Operating System: Linux

Tim


On Mon, Apr 1, 2019 at 2:01 PM Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
Sorry, that is only one physical CPU per process - you can run one instance of 
wb_command on each CPU just fine, it is simply when a single wb_command process 
has threads on more than one physical CPU that things get very slow.

Tim


On Mon, Apr 1, 2019 at 1:05 PM Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:
Does the computer you were running this on have multiple CPU sockets?  
Synchronizing code across physical sockets is quite expensive in terms of time, 
and on such systems we strongly recommend using cpusets or restricting the 
number of threads to try to use only one physical CPU.
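In practice, Tim's suggestion amounts to something like the following before launching wb_command (core and node numbers are machine-specific and purely illustrative):

```shell
# Cap OpenMP threads so a single wb_command stays on one physical CPU.
export OMP_NUM_THREADS=1

# On Linux, explicit pinning also works, e.g. (illustrative core range):
#   taskset -c 0-7 wb_command -cifti-resample ...
#   numactl --cpunodebind=0 wb_command -cifti-resample ...

echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```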

Tim


On Mon, Apr 1, 2019 at 2:15 AM Shachar Gal 
mailto:gal.shac...@gmail.com>> wrote:
well, it took 13 hours, but it completed the execution, and the final 
atlas.dtseries output looks good..
so it just takes a REALLY long time, and it takes a crazy amount of CPU

Let me know if anything else happens when you run it in your environment?

Shachar

On Mon, 1 Apr 2019 at 00:19, Glasser, Matthew 
mailto:glass...@wustl.edu>> wrote:
We’ll have a look tomorrow and let you know what we find.

Matt.

From: Shachar Gal mailto:gal.shac...@gmail.com>>
Date: Sunday, March 31, 2019 at 4:17 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: Timothy Coalson mailto:tsc...@mst.edu>>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

It does, and I've uploaded it as well to the drive, so you could have a look 
yourself

Shachar

On Mon, Apr 1, 2019, 00:12 Glasser, Matthew 
mailto:glass...@wustl.edu> wrote:
Well in that case this confirms that what Tim was thinking is not the issue and 
that we will need to look at the files themselves.  Again, we didn’t really 
change stuff in the fMRISurface pipeline recently, so this is quite puzzling…

Does the output of fMRIVolume look okay to you?

Matt.

From: Shachar Gal mailto:gal.shac...@gmail.com>>
Date: Sunday, March 31, 2019 at 4:09 PM
To: Matt Glasser mailto:glass...@wustl.edu>>
Cc: Timothy Coalson mailto:tsc...@mst.edu>>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Sure. This specific command is identical as far as I can tell, so maybe the 
difference is further upstream and has to do with the way the input files were 
constructed (whether it's the volume input to this pipeline, or upstream inside 
the fMRISurface pipeline)...

#!/bin/bash
set -e
script_name="SubcorticalProcessing.sh"
echo "${script_name}: START"

AtlasSpaceFolder="$1"
echo "${script_name}: AtlasSpaceFolder: ${AtlasSpaceFolder}"

ROIFolder="$2"
echo "${script_name}: ROIFolder: ${ROIFolder}"

FinalfMRIResolution="$3"
echo "${script_name}: FinalfMRIResolution: ${FinalfMRIResolution}"

ResultsFolder="$4"
echo "${script_name}: ResultsFolder: ${ResultsFolder}"

NameOffMRI=&quo

Re: [HCP-Users] MR ICA+FIX error

2019-04-06 Thread Glasser, Matthew
I would use mode=1

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of Marta Moreno 
mailto:mmorenoort...@icloud.com>>
Date: Saturday, April 6, 2019 at 2:46 PM
To: Timothy Coalson mailto:tsc...@mst.edu>>
Cc: HCP Users 
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] MR ICA+FIX error

Dear Tim,

I am getting the following error:

hcp_fix_multi_run - ABORTING: Unsupported MATLAB run mode value 
(FSL_FIX_MATLAB_MODE) in 
/usr/local/bin/gaurav_folder_new/HCP/Connectome_Project_3_22/Pipelines/ICAFIX/hcp_fix_multi_run/settings.sh:

In settings.sh, I have set up the following:

# Part III General settings
# =
# This variable selects how we run the MATLAB portions of FIX.
# It takes the values 0-2:
#   0 - Try running the compiled version of the function
#   1 - Use the MATLAB script version
#   2 - Use Octave script version
if [ -z "${FSL_FIX_MATLAB_MODE}" ]; then
FSL_FIX_MATLAB_MODE=0
fi
if [[ ${FSL_FIX_MATLAB_MODE} = 2 && -z ${FSL_FIX_OCTAVE} ]]; then
echo "ERROR in $0: Can't find Octave command"
exit 1
fi
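Since this snippet only assigns a default when the variable is empty, following Matt's advice in this thread to use interpreted MATLAB (mode 1) is a matter of exporting the value before running; a sketch:

```shell
# Select interpreted MATLAB (mode 1); settings.sh keeps an exported value
# because it only falls back to its default when FSL_FIX_MATLAB_MODE is unset.
export FSL_FIX_MATLAB_MODE=1

# then, e.g.:
#   hcp_fix_multi_run <fmri1>@<fmri2> <concat_name> 2000 TRUE

echo "FSL_FIX_MATLAB_MODE=${FSL_FIX_MATLAB_MODE}"
```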

Not sure if this is needed, but FSL_FIX_OCTAVE is disabled in my settings.sh, 
since I tried to install Octave via MacPorts and got the following 
errors:

Error: Failed to build mpich-default: command execution failed
Error: See 
/opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_science_mpich/mpich-default/main.log
 for details.
Error: Follow https://guide.macports.org/#project.tickets to report a bug.
Error: Processing of port octave failed

Thanks a lot,

Leah.



On Apr 1, 2019, at 2:58 PM, Timothy Coalson 
mailto:tsc...@mst.edu>> wrote:

I have pushed a change (it launched the matlab part without error, which is 
where that path is used), try the latest master.

Tim


On Mon, Apr 1, 2019 at 1:49 PM Marta Moreno <mmorenoort...@icloud.com> wrote:
Thanks for your response, Tim.
What should I do specifically to make it work? I am not sure I am following the 
steps as written.

Thanks again,
Leah.

Sent from my iPhone

On Apr 1, 2019, at 2:40 PM, Timothy Coalson <tsc...@mst.edu> wrote:

Looks like the Mac's readlink is incapable of being easily useful for this: 
without -f, it doesn't output anything unless the file is a symlink, while the 
point is to find the real location whether it is a symlink or not (because the 
script needs other files from that directory).

I guess I will make it test whether "$0" is a symlink first.
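A minimal sketch of that idea (hypothetical helper name, not the actual pipeline code): follow the symlink only when the path really is one, and absolutize with cd/pwd either way. This handles only a single absolute symlink, not chains or relative targets.

```shell
# Sketch of a portable stand-in for `readlink -f`, which macOS's readlink
# lacks (hence the "illegal option -- f" error below). Assumption: at most
# one level of symlink, with an absolute target.
portable_realpath() {
    local target="$1"
    if [ -L "$target" ]; then
        target="$(readlink "$target")"   # plain readlink works on macOS for real symlinks
    fi
    # absolutize the directory part via cd/pwd, then re-append the filename
    printf '%s/%s\n' "$(cd "$(dirname "$target")" && pwd)" "$(basename "$target")"
}
```

A script could then use `portable_realpath "$0"` to locate its own directory on both Linux and macOS.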

Tim


On Mon, Apr 1, 2019 at 9:31 AM Harms, Michael <mha...@wustl.edu> wrote:
Tim will have to comment on the ‘readlink -f’ issue, since I think he 
introduced that particular syntax.

Please keep posts cc’ed to the HCP-User list, so that we can archive the 
discussion, and other users can benefit from it.

--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.  Tel: 314-747-6173
St. Louis, MO  63110   Email: mha...@wustl.edu

From: Marta Moreno <mmorenoort...@icloud.com>
Date: Monday, April 1, 2019 at 9:21 AM
To: "Harms, Michael" <mha...@wustl.edu>
Subject: Re: [HCP-Users] ICA+FIX error

Thanks for your response, Michael. The issue with single-run ICA+FIX is solved 
after running fMRISurface again.

The problem is now with MR ICA+FIX. I am copying the error below for your 
convenience. Matt said it is probably related to the fact that I am using a Mac.

bash-3.2$ hcp_fix_multi_run RS_fMRI_1/RS_fMRI_1.nii.gz@RS_fMRI_2/RS_fMRI_2.nii.gz RS_fMRI_MR 2000 TRUE
Sat Mar 30 16:45:07 EDT 2019 - hcp_fix_multi_run - HCPPIPEDIR: 
/usr/local/bin/HCP/Pipelines
Sat Mar 30 16:45:07 EDT 2019 - hcp_fix_multi_run - CARET7DIR: 
/Applications/workbench/bin_macosx64/
Sat Mar 30 16:45:07 EDT 2019 - hcp_fix_multi_run - FSLDIR: /usr/local/fsl
Sat Mar 30 16:45:07 EDT 2019 - hcp_fix_multi_run - FSL_FIXDIR: 
/usr/local/bin/HCP/Pipelines/ICAFIX/hcp_fix_multi_run
readlink: illegal option -- f
usage: readlink [-n] [file …]

Thanks!,
Leah.

On Apr 1, 2019, at 9:29 AM, Harms, Michael <mha...@wustl.edu> wrote:

Check RS_fMRI_2_hp2000.ica/.fix.log for clues.  Also, what’s in the stderr 
output from the run?


--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.  Tel: 314-747-6173
St. Louis, MO  63110   Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Marta Moreno <mmorenoort...@icloud.com>
Date: Sunday, March 31, 2019 at 6:44 PM
To: HCP Users <hcp-users@humanconnectome.org>

Re: [HCP-Users] An error about subcortical processing inHCPfMRI-surface pipeline

2019-04-05 Thread Glasser, Matthew
The files in the error message.

Matt.

From: Qunjun Liang <liangqun...@foxmail.com>
Date: Friday, April 5, 2019 at 11:10 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] An error about subcortical processing 
inHCPfMRI-surface pipeline

Hi Matt,

Yes, we don't have an SBRef image, because at the beginning of this experiment 
the sequence was set up following the UK Biobank Project. BTW, the scanner is a 
Siemens 3T Trio.

Sorry, I'm not sure which files you mean, but I searched the MNINonLinear/ 
directory and found a file named {task name}_SBRef.nii.gz, so I ran fslhd on 
it. The result showed:
#---
filename 
/media/LQJ/Social_Navigation/Social_nifti/201806138_hrw/MNINonLinear/Results/FunTask_AP/FunTask_AP_SBRef.nii.gz
size of header 348
data_type FLOAT32
dim0  3
dim1  91
dim2  108
dim3  91
dim4  1
dim5  1
dim6  1
dim7  1
vox_units mm
time_units s
datatype 16
nbyper  4
bitpix  32
pixdim0  -1.00
pixdim1  2.00
pixdim2  2.00
pixdim3  2.00
pixdim4  1.20
pixdim5  0.00
pixdim6  0.00
pixdim7  0.00
vox_offset 352
cal_max  0.00
cal_min  0.00
scl_slope 1.00
scl_inter 0.00
phase_dim 0
freq_dim 0
slice_dim 0
slice_name Unknown
slice_code 0
slice_start 0
slice_end 0
slice_duration 0.00
toffset  0.00
intent  Unknown
intent_code 0
intent_name
intent_p1 0.00
intent_p2 0.00
intent_p3 0.00
qform_name MNI_152
qform_code 4
qto_xyz:1 -2.00 0.00 -0.00 90.00
qto_xyz:2 0.00 2.00 -0.00 -126.00
qto_xyz:3 0.00 0.00 2.00 -72.00
qto_xyz:4 0.00 0.00 0.00 1.00
qform_xorient Right-to-Left
qform_yorient Posterior-to-Anterior
qform_zorient Inferior-to-Superior
sform_name MNI_152
sform_code 4
sto_xyz:1 -2.00 0.00 0.00 90.00
sto_xyz:2 0.00 2.00 0.00 -126.00
sto_xyz:3 0.00 0.00 2.00 -72.00
sto_xyz:4 0.00 0.00 0.00 1.00
sform_xorient Right-to-Left
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior
file_type NIFTI-1+
file_code 1
descrip  FSL5.0
aux_file
#---

Is this the file you mentioned, or did I get the wrong one?

Sincerely,
Qunjun

-- Original --
From: "Glasser, Matthew" <glass...@wustl.edu>
Date: Sat, Apr 6, 2019 11:32 AM
To: "Qunjun Liang" <liangqun...@foxmail.com>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject:  Re: [HCP-Users] An error about subcortical processing 
inHCPfMRI-surface pipeline

Do you not have an SBRef image?  I wonder if that feature is not working.

You could paste in fslhd from those two files.

Matt.

From: Qunjun Liang <liangqun...@foxmail.com>
Date: Friday, April 5, 2019 at 10:23 PM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] An error about subcortical processing in 
HCPfMRI-surface pipeline

Hi Matt,

Thank you for the prompt reply.

I have packed the batch script I used to call PreFreeSufer, PostFreeSufer, 
fMRIVolume and fMRISurface, as well as the log file (.o and .e) after running 
the pipelines. The zip file is placed in the attachment.

Sincerely,
Qunjun

-- Original --
From: "Glasser, Matthew" <glass...@wustl.edu>
Date: Sat, Apr 6, 2019 10:39 AM
To: "Qunjun Liang" <liangqun...@foxmail.com>; "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject:  Re: [HCP-Users] An error about subcortical processing in 
HCPfMRI-surface pipeline

Please post how you called PreFreeSurfer, PostFreeSurfer, fMRIVolume, and 
fMRISurface.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Qunjun Liang <liangqun...@foxmail.com>
Date: Friday, April 5, 2019 at 9:37 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] An error about subcortical processing in HCP fMRI-surface 
pipeline

Dear HCP experts,

An error occurred when I ran the fMRI surface generation pipeline:
#--
While running:
wb_command -cifti-create-dense-timeseries /{PATH to 
subject}/MNINonLinear/Results/{task name}/{task name}_temp_subject.dtseries.nii 
-volume /{PATH to subject}/MNINonLinear/Results/{task name}/{task name}.nii.gz
/{PATH to subject}/MNINonLinear/ROIs/ROIs.2.nii.gz

ERROR: label volume has a different volume space than data volume
#--

I used GenericfMRISurfaceProcessingPipelineBatch.sh in E

Re: [HCP-Users] An error about subcortical processing in HCPfMRI-surface pipeline

2019-04-05 Thread Glasser, Matthew
Do you not have an SBRef image?  I wonder if that feature is not working.

You could paste in fslhd from those two files.

Matt.

From: Qunjun Liang <liangqun...@foxmail.com>
Date: Friday, April 5, 2019 at 10:23 PM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] An error about subcortical processing in 
HCPfMRI-surface pipeline

Hi Matt,

Thank you for the prompt reply.

I have packed the batch script I used to call PreFreeSufer, PostFreeSufer, 
fMRIVolume and fMRISurface, as well as the log file (.o and .e) after running 
the pipelines. The zip file is placed in the attachment.

Sincerely,
Qunjun

-- Original --
From: "Glasser, Matthew" <glass...@wustl.edu>
Date: Sat, Apr 6, 2019 10:39 AM
To: "Qunjun Liang" <liangqun...@foxmail.com>; "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject:  Re: [HCP-Users] An error about subcortical processing in 
HCPfMRI-surface pipeline

Please post how you called PreFreeSurfer, PostFreeSurfer, fMRIVolume, and 
fMRISurface.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Qunjun Liang <liangqun...@foxmail.com>
Date: Friday, April 5, 2019 at 9:37 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] An error about subcortical processing in HCP fMRI-surface 
pipeline

Dear HCP experts,

An error occurred when I ran the fMRI surface generation pipeline:
#--
While running:
wb_command -cifti-create-dense-timeseries /{PATH to 
subject}/MNINonLinear/Results/{task name}/{task name}_temp_subject.dtseries.nii 
-volume /{PATH to subject}/MNINonLinear/Results/{task name}/{task name}.nii.gz
/{PATH to subject}/MNINonLinear/ROIs/ROIs.2.nii.gz

ERROR: label volume has a different volume space than data volume
#--
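One quick way to confirm this kind of error is to diff only the grid-defining fields from `fslhd` dumps of the data and label volumes. The header snippets below are hypothetical stand-ins (in practice you would generate them with `fslhd <file> > file.hd`); incidentally, FSL's 2 mm MNI152 grid is 91x109x91, so any dim that differs from that is worth checking.

```shell
# Hypothetical fslhd dumps standing in for the real files, e.g.
#   fslhd FunTask_AP.nii.gz > data.hd
#   fslhd MNINonLinear/ROIs/ROIs.2.nii.gz > rois.hd
cat > data.hd <<'EOF'
dim1 91
dim2 109
dim3 91
pixdim1 2.00
pixdim2 2.00
pixdim3 2.00
EOF
cat > rois.hd <<'EOF'
dim1 91
dim2 108
dim3 91
pixdim1 2.00
pixdim2 2.00
pixdim3 2.00
EOF

# Keep only the fields that define the voxel grid / volume space.
grid_fields() { grep -E '^(dim[1-3]|pixdim[1-3]|sto_xyz)' "$1"; }
grid_fields data.hd > data.grid
grid_fields rois.hd > rois.grid

if diff -q data.grid rois.grid >/dev/null; then
    echo "volume spaces match"
else
    echo "volume spaces differ"
fi
```

Here dim2 differs (109 vs 108), which is exactly the kind of mismatch that triggers wb_command's "different volume space" error.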

I used GenericfMRISurfaceProcessingPipelineBatch.sh in the Example/ directory to 
call the pipeline. The parameters (LowResMesh, FinalfMRIResolution and 
GrayordinatesResolution) were set in accord with those in the 
PostFreeSurferPipeline and fMRIVolume pipelines.

Given that my fMRI data were acquired with a multiband sequence, I wonder 
whether the pipeline needs some extra modifications to fit multiband images?

Environment:
1. Ubuntu 14.04 LTS
2. HCP pipeline 4.0.0
3. Workbench 1.3.2
4. FreeSurfer 6.0
5. FSL 5.0.9

Sincerely,
--
Qunjun Liang, Ph.D.
School of psychology
South China Normal University
Zhongshan Avenue West 55, Tianhe District
Guangzhou 510631
P. R. China


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] An error about subcortical processing in HCP fMRI-surface pipeline

2019-04-05 Thread Glasser, Matthew
Please post how you called PreFreeSurfer, PostFreeSurfer, fMRIVolume, and 
fMRISurface.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Qunjun Liang <liangqun...@foxmail.com>
Date: Friday, April 5, 2019 at 9:37 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] An error about subcortical processing in HCP fMRI-surface 
pipeline

Dear HCP experts,

An error occurred when I ran the fMRI surface generation pipeline:
#--
While running:
wb_command -cifti-create-dense-timeseries /{PATH to 
subject}/MNINonLinear/Results/{task name}/{task name}_temp_subject.dtseries.nii 
-volume /{PATH to subject}/MNINonLinear/Results/{task name}/{task name}.nii.gz
/{PATH to subject}/MNINonLinear/ROIs/ROIs.2.nii.gz

ERROR: label volume has a different volume space than data volume
#--

I used GenericfMRISurfaceProcessingPipelineBatch.sh in the Example/ directory to 
call the pipeline. The parameters (LowResMesh, FinalfMRIResolution and 
GrayordinatesResolution) were set in accord with those in the 
PostFreeSurferPipeline and fMRIVolume pipelines.

Given that my fMRI data were acquired with a multiband sequence, I wonder 
whether the pipeline needs some extra modifications to fit multiband images?

Environment:
1. Ubuntu 14.04 LTS
2. HCP pipeline 4.0.0
3. Workbench 1.3.2
4. FreeSurfer 6.0
5. FSL 5.0.9

Sincerely,
--
Qunjun Liang, Ph.D.
School of psychology
South China Normal University
Zhongshan Avenue West 55, Tianhe District
Guangzhou 510631
P. R. China


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

2019-04-05 Thread Glasser, Matthew
Maybe look on the FreeSurfer list for a solution to this?  MNI tools is 
something that FreeSurfer is including and perhaps it is very sensitive to the 
OS version you use.  Could see if going back to Ubuntu 16.04 gets rid of the 
error.

Matt.

From: "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Friday, April 5, 2019 at 5:20 PM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate


Ubuntu 14.04


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>


From: Glasser, Matthew
Sent: Friday, April 5, 2019 5:19:49 PM
To: Jayasekera, Dinal; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

What OS are you on?

Matt.

From: "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Friday, April 5, 2019 at 5:18 PM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate


Interestingly, I had set the path to FreeSurfer v6, but the script was calling 
FreeSurfer 5.3.0-HCP. After deleting the FreeSurfer 5.3.0-HCP folder, the 
attached log file showed that the script was using the updated version of 
FreeSurfer.


However, as you can see, the log file still reported the same error about 
nu_correct crashing with the change above in place.


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>


From: Glasser, Matthew
Sent: Thursday, April 4, 2019 6:05:48 PM
To: Jayasekera, Dinal; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

In your log file it says you are using 5.3.0-HCP.

Matt.

From: "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 4:05 PM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate


Yes I am


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>


From: Glasser, Matthew
Sent: Thursday, April 4, 2019 3:01:18 PM
To: Jayasekera, Dinal; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

Are you using FreeSurfer 6.0?

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 11:25 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] nu_correct: crashed while running nu_evaluate


Dear all,


I have been running the FreeSurferPipelineBatch script (v4 of the HCP pipeline) 
and I'm getting an error saying that nu_correct: crashed while running 
nu_evaluate. I didn't encounter this same issue with the previous version of 
the pipeline script. I've attached the recon-all.log and orig_nu.log files. Has 
anyone encountered a similar issue?


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.

Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

2019-04-05 Thread Glasser, Matthew
What OS are you on?

Matt.

From: "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Friday, April 5, 2019 at 5:18 PM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate


Interestingly, I had set the path to FreeSurfer v6, but the script was calling 
FreeSurfer 5.3.0-HCP. After deleting the FreeSurfer 5.3.0-HCP folder, the 
attached log file showed that the script was using the updated version of 
FreeSurfer.


However, as you can see, the log file still reported the same error about 
nu_correct crashing with the change above in place.


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>


From: Glasser, Matthew
Sent: Thursday, April 4, 2019 6:05:48 PM
To: Jayasekera, Dinal; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

In your log file it says you are using 5.3.0-HCP.

Matt.

From: "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 4:05 PM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate


Yes I am


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>


From: Glasser, Matthew
Sent: Thursday, April 4, 2019 3:01:18 PM
To: Jayasekera, Dinal; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

Are you using FreeSurfer 6.0?

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 11:25 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] nu_correct: crashed while running nu_evaluate


Dear all,


I have been running the FreeSurferPipelineBatch script (v4 of the HCP pipeline) 
and I'm getting an error saying that nu_correct: crashed while running 
nu_evaluate. I didn't encounter this same issue with the previous version of 
the pipeline script. I've attached the recon-all.log and orig_nu.log files. Has 
anyone encountered a similar issue?


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Volumetric subcortical group-averaged data: what is the exact MNI template you used?

2019-04-05 Thread Glasser, Matthew
I don’t know, you can ask on the FSL list.

Matt.

From: Xavier Guell Paradis <xavierguellpara...@gmail.com>
Date: Friday, April 5, 2019 at 4:49 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: Xavier Guell Paradis <xavie...@mit.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Volumetric subcortical group-averaged data: what is 
the exact MNI template you used?

This means that the template is asymmetric, not symmetric, correct?
Thanks,
Xavier.

On Fri, Apr 5, 2019 at 5:48 PM Glasser, Matthew <glass...@wustl.edu> wrote:
FSL’s MNI152.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Friday, April 5, 2019 at 4:46 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Volumetric subcortical group-averaged data: what is the 
exact MNI template you used?

Dear HCP experts,
I am interested in analyzing your group-averaged subcortical volumetric data. 
My understanding is that your volumetric data is registered to MNI space. I was 
wondering if you could let me know what specific MNI template you used. I am 
especially interested in knowing whether it is a symmetric or an asymmetric MNI 
template.
Thank you,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Volumetric subcortical group-averaged data: what is the exact MNI template you used?

2019-04-05 Thread Glasser, Matthew
FSL’s MNI152.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Friday, April 5, 2019 at 4:46 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Volumetric subcortical group-averaged data: what is the 
exact MNI template you used?

Dear HCP experts,
I am interested in analyzing your group-averaged subcortical volumetric data. 
My understanding is that your volumetric data is registered to MNI space. I was 
wondering if you could let me know what specific MNI template you used. I am 
especially interested in knowing whether it is a symmetric or an asymmetric MNI 
template.
Thank you,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Standard to Native Applywarp Error

2019-04-04 Thread Glasser, Matthew
I would recommend using probtrackx2 and tracking directly to the surface and 
into the CIFTI space, rather than trying to go back to the volume.

Matt.

From: Haroon Popal <tuk12...@temple.edu>
Date: Thursday, April 4, 2019 at 8:49 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Standard to Native Applywarp Error

Thanks for getting back to me.

This is HCP data. Specifically, the social task. I am using FSL (probtrackx) 
for tractography. I used the cifti-separate command and the metric-resample 
command to extract the surface data. But I'm interested in doing a whole-brain 
tractography with subcortical regions as well. So I figured I could resample 
the surfaces to the fsaverage brain and then combine them with the subcortical 
with vlrmerge. Do you have any other recommendations on how I can get the 
output from cifti-separate into native volume space?

Here are the commands I used:

# 1) Generate single-hemisphere gifti and volume nifti from cifti file
wb_command -cifti-separate-all ${contrast_dir}/zstat1.dtseries.nii \
    -volume ${contrast_dir}/zstat.nii.gz \
    -left ${contrast_dir}/zstat.L.func.gii \
    -right ${contrast_dir}/zstat.R.func.gii

# 2) Map scalar data to fsaverage
for hemi in L R; do
    wb_command -metric-resample \
        ${contrast_dir}/zstat.${hemi}.func.gii \
        standard_mesh_atlases/resample_fsaverage/fs_LR-deformed_to-fsaverage.${hemi}.sphere.32k_fs_LR.surf.gii \
        standard_mesh_atlases/resample_fsaverage/fsaverage_std_sphere.${hemi}.164k_fsavg_${hemi}.surf.gii \
        ADAP_BARY_AREA \
        ${contrast_dir}/${hemi}.164k_fsavg_${hemi}.func.gii \
        -area-metrics \
        standard_mesh_atlases/resample_fsaverage/fs_LR.${hemi}.midthickness_va_avg.32k_fs_LR.shape.gii \
        standard_mesh_atlases/resample_fsaverage/fsaverage.${hemi}.midthickness_va_avg.164k_fsavg_${hemi}.shape.gii
done

# Register subcortical data to MNI305
mri_vol2vol --mov ${contrast_dir}/zstat.nii.gz \
    --targ /usr/local/freesurfer/subjects/fsaverage/mri.2mm/mni305.cor.mgz \
    --xfm $SUBJECTS_DIR/${SUB}/mri/transforms/talairach.xfm \
    --o ${contrast_dir}/zstat.mni305.2mm.nii.gz

# Merge surface data with subcortical into one combined volume
vlrmerge --o ${contrast_dir}/zstat_combined.nii.gz \
    --lh ${contrast_dir}/L.164k_fsavg_L.func.gii \
    --rh ${contrast_dir}/R.164k_fsavg_R.func.gii \
    --v ${contrast_dir}/zstat.mni305.2mm.nii.gz \
    --scm /usr/local/freesurfer/subjects/fsaverage/mri.2mm/subcort.mask.mgz


On Thu, Apr 4, 2019 at 9:32 PM Glasser, Matthew <glass...@wustl.edu> wrote:
Is this HCP data?  What software are you using for tractography?  If you want 
statistical maps in native volume space, it might be easiest to project them 
from the subject’s physical space surfaces in the T1w/fsaverage_LR32k or 
alternatively track the data to those surfaces.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Haroon Popal <tuk12...@temple.edu>
Date: Wednesday, April 3, 2019 at 5:15 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Standard to Native Applywarp Error

Hello,

I am trying to use the provided commands in the link below to transform a 
contrast from the social task to native space.
https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-16.CanIgetthefMRIdatainasubject's'nativespace'?

I used the pipeline in the second link to extract the surface and subcortical 
data from the original zstat.dtseries.nii file in one of the cope directories. 
I then used vlrmerge from FreeSurfer to combine the surface and volume data.
https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-9.HowdoImapdatabetweenFreeSurferandHCP?

When I run the applywarp command, I'm getting a weird registration, where the 
brain is tilted up in the sagittal view, with the frontal lobe of the 
functional data pointing upwards.

I am doing all of this because I need a volume for the contrast map, to 
compare to some DTI. I need my functional data and DTI data in the same space.

Thanks for your help.

--
Haroon S. Popal
Ph.D. Student
Cognitive Neuroscience Laboratory
Department of Psychology | Temple University

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



Re: [HCP-Users] Standard to Native Applywarp Error

2019-04-04 Thread Glasser, Matthew
Is this HCP data?  What software are you using for tractography?  If you want 
statistical maps in native volume space, it might be easiest to project them 
from the subject’s physical space surfaces in the T1w/fsaverage_LR32k or 
alternatively track the data to those surfaces.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Haroon Popal <tuk12...@temple.edu>
Date: Wednesday, April 3, 2019 at 5:15 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Standard to Native Applywarp Error

Hello,

I am trying to use the provided commands in the link below to transform a 
contrast from the social task to native space.
https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-16.CanIgetthefMRIdatainasubject's'nativespace'?

I used the pipeline in the second link to extract the surface and subcortical 
data from the original zstat.dtseries.nii file in one of the cope directories. 
I then used vlrmerge from FreeSurfer to combine the surface and volume data.
https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-9.HowdoImapdatabetweenFreeSurferandHCP?

When I run the applywarp command, I'm getting a weird registration, where the 
brain is tilted up in the sagittal view, with the frontal lobe of the 
functional data pointing upwards.

I am trying to do all of this because I need a volume for the contrast map to 
compare to some DTI data; I need my functional and DTI data in the same space.

Thanks for your help.

--
Haroon S. Popal
Ph.D. Student
Cognitive Neuroscience Laboratory
Department of Psychology | Temple University



Re: [HCP-Users] MSM binaries versions

2019-04-04 Thread Glasser, Matthew
V2 is okay too.

Matt.

On 4/4/19, 6:18 PM, "Moataz Assem"  wrote:

>Thanks all for your replies. From what I understood from Emma's comments
>it looks like v2 from this link
>(https://www.doc.ic.ac.uk/~ecr05/MSM_HOCR_v2/) is very similar to the
>versions on github.
>
>Since I have already run MSMSulc (PostFreeSurfer and fMRISurface
>HCPPipelines v3.27.0) using the v2 binaries (from the link above not the
>fsl one), is it okay to stick with it for MSMAll from HCPpipelines
>v4.0.0? Or will I need to download the new binary and rerun MSMSulc with
>it?
>
>and just for future reference, by the msm binary, do you mean to just
>download e.g. msm_centos from GitHub? In that case it's only compiled for
>v1.0.0 and not v3.0.0, am I right?
>
>Thanks
>
>Moataz
>
>From: Harms, Michael [mha...@wustl.edu]
>Sent: 04 April 2019 21:56
>To: Glasser, Matthew; NEUROSCIENCE tim; Moataz Assem
>Cc: hcp-users@humanconnectome.org
>Subject: Re: [HCP-Users] MSM binaries versions
>
>Also, see here
>https://github.com/ecr05/MSM_HOCR/issues/5
>for some info from Emma.
>
>
>--
>Michael Harms, Ph.D.
>---
>Associate Professor of Psychiatry
>Washington University School of Medicine
>Department of Psychiatry, Box 8134
>660 South Euclid Ave.    Tel: 314-747-6173
>St. Louis, MO  63110  Email: mha...@wustl.edu
>
>From:  on behalf of "Glasser,
>Matthew" 
>Date: Thursday, April 4, 2019 at 3:03 PM
>To: NEUROSCIENCE tim , Moataz Assem
>
>Cc: "hcp-users@humanconnectome.org" 
>Subject: Re: [HCP-Users] MSM binaries versions
>
>Use version 3.0.0 in GitHub.  The one used in FSL is not supported by the
>HCP Pipelines as it does not have the HOCR options.  All that is needed
>is the MSM binary.
>
>Matt.
>
>From: <hcp-users-bounces@humanconnectome.org> on behalf of Timothy Coalson
><tsc...@mst.edu>
>Date: Thursday, April 4, 2019 at 2:15 PM
>To: Moataz Assem
><moataz.as...@mrc-cbu.cam.ac.uk>
>Cc: "hcp-users@humanconnectome.org"
><hcp-users@humanconnectome.org>
>Subject: Re: [HCP-Users] MSM binaries versions
>
>The 1.0 and 3.0 versions on github are nearly identical, that was just a
>naming issue.
>
>The version in FSL may be based on version 2, and is missing a library
>needed for HOCR, so some options in v3 aren't available.  You should be
>able to use the fsl versions of the executables other than msm (so,
>msmresample, etc) with any version of msm.
>
>I'm not sure about your other questions.
>
>Tim
>
>
>On Thu, Apr 4, 2019 at 1:29 PM Moataz Assem
>mailto:moataz.as...@mrc-cbu.cam.ac.uk>>
>wrote:
>Hi,
>
>What is the recommended version of the MSM binaries to use? The repo
>directory (https://github.com/ecr05/MSM_HOCR/releases) has v1.0.0 and
>v3.0.0 and I was previously using v2 from here:
>https://www.doc.ic.ac.uk/~ecr05/MSM_HOCR_v2/
>Also what is the difference between these binaries and the ones
>downloaded with fsl? In otherwords, can I just point the MSMBINDIR in the
>SetUpHCPPipeline.sh to  ~/fsl/bin/ since it contains all the msm related
>functions?
>
>Also, I would appreciate a clarification on the list of compiled files that
>need to exist in the directory pointed to by the MSMBINDIR variable.
>
>Thanks
>
>Moataz
>

Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

2019-04-04 Thread Glasser, Matthew
In your log file it says you are using 5.3.0-HCP.

Matt.

From: "Jayasekera, Dinal" <dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 4:05 PM
To: Matt Glasser <glass...@wustl.edu>, 
"hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate


Yes, I am.


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>


From: Glasser, Matthew
Sent: Thursday, April 4, 2019 3:01:18 PM
To: Jayasekera, Dinal; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

Are you using FreeSurfer 6.0?

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of "Jayasekera, Dinal" 
<dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 11:25 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] nu_correct: crashed while running nu_evaluate


Dear all,


I have been running the FreeSurferPipelineBatch script (v4 of the HCP pipeline) 
and I'm getting an error saying that nu_correct: crashed while running 
nu_evaluate. I didn't encounter this same issue with the previous version of 
the pipeline script. I've attached the recon-all.log and orig_nu.log files. Has 
anyone encountered a similar issue?


Kind regards,
Dinal Jayasekera<https://dinaljay.weebly.com/>

PhD Candidate | InSITE Fellow<https://www.insitefellows.org/>
Ammar Hawasli Lab<https://hawaslilab.weebly.com/>
Department of Biomedical Engineering<https://bme.wustl.edu/Pages/default.aspx> 
| Washington University in St. Louis<https://wustl.edu/>



Re: [HCP-Users] MSM binaries versions

2019-04-04 Thread Glasser, Matthew
Use version 3.0.0 in GitHub.  The one used in FSL is not supported by the HCP 
Pipelines as it does not have the HOCR options.  All that is needed is the MSM 
binary.

Matt.
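The advice above (use the v3.0.0 MSM_HOCR binary from GitHub, not the FSL copy, and point MSMBINDIR at it) could be set up roughly as follows. This is a hedged sketch: the install directory is an assumption, and the release asset must be downloaded by hand from the releases page for your platform.

```shell
#!/usr/bin/env bash
# Hypothetical setup sketch: install the MSM_HOCR v3.0.0 binary and point the
# HCP Pipelines at it. The install directory is an assumption.
MSMDIR=${MSMDIR:-$HOME/msm_hocr}
mkdir -p "$MSMDIR"

# Download the asset for your platform by hand from
#   https://github.com/ecr05/MSM_HOCR/releases
# and install it under the name "msm", e.g.:
#   cp ~/Downloads/msm_centos "$MSMDIR/msm" && chmod +x "$MSMDIR/msm"

# Then, in SetUpHCPPipeline.sh (only the msm binary itself is needed here;
# the other msm* executables shipped with FSL, such as msmresample, can
# still be used from FSL's bin):
export MSMBINDIR=$MSMDIR
echo "MSMBINDIR=$MSMBINDIR"
```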

From: <hcp-users-boun...@humanconnectome.org> on behalf of Timothy Coalson 
<tsc...@mst.edu>
Date: Thursday, April 4, 2019 at 2:15 PM
To: Moataz Assem 
<moataz.as...@mrc-cbu.cam.ac.uk>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] MSM binaries versions

The 1.0 and 3.0 versions on github are nearly identical, that was just a naming 
issue.

The version in FSL may be based on version 2, and is missing a library needed 
for HOCR, so some options in v3 aren't available.  You should be able to use 
the fsl versions of the executables other than msm (so, msmresample, etc) with 
any version of msm.

I'm not sure about your other questions.

Tim


On Thu, Apr 4, 2019 at 1:29 PM Moataz Assem 
<moataz.as...@mrc-cbu.cam.ac.uk> wrote:
Hi,

What is the recommended version of the MSM binaries to use? The repo directory 
(https://github.com/ecr05/MSM_HOCR/releases) has v1.0.0 and v3.0.0 and I was 
previously using v2 from here: https://www.doc.ic.ac.uk/~ecr05/MSM_HOCR_v2/
Also, what is the difference between these binaries and the ones downloaded with 
fsl? In other words, can I just point the MSMBINDIR in the SetUpHCPPipeline.sh 
to ~/fsl/bin/ since it contains all the msm related functions?

Also, I would appreciate a clarification on the list of compiled files that 
need to exist in the directory pointed to by the MSMBINDIR variable.

Thanks

Moataz



Re: [HCP-Users] nu_correct: crashed while running nu_evaluate

2019-04-04 Thread Glasser, Matthew
Are you using FreeSurfer 6.0?

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of "Jayasekera, Dinal" 
<dinal.jayasek...@wustl.edu>
Date: Thursday, April 4, 2019 at 11:25 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] nu_correct: crashed while running nu_evaluate


Dear all,


I have been running the FreeSurferPipelineBatch script (v4 of the HCP pipeline) 
and I'm getting an error saying that nu_correct: crashed while running 
nu_evaluate. I didn't encounter this same issue with the previous version of 
the pipeline script. I've attached the recon-all.log and orig_nu.log files. Has 
anyone encountered a similar issue?


Kind regards,
Dinal Jayasekera

PhD Candidate | InSITE Fellow
Ammar Hawasli Lab
Department of Biomedical Engineering 
| Washington University in St. Louis



Re: [HCP-Users] A question about the HCP pipeline

2019-04-04 Thread Glasser, Matthew
You could interpolate it back to native space if you wanted with relatively 
little information loss if you use splines.

Matt.
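The spline interpolation back to native space mentioned above could be sketched with FSL's applywarp and the standard2acpc_dc warp that the HCP structural pipelines write under MNINonLinear/xfms. This is a hedged sketch: the subject ID, study path, and the input stat-map file name are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: resample an MNI-space volume back to the subject's
# native (acpc) T1w space with spline interpolation. Paths and the input
# file name are assumptions.
StudyFolder=${StudyFolder:-/path/to/study}
Subject=${Subject:-100307}
MNI=${StudyFolder}/${Subject}/MNINonLinear
T1w=${StudyFolder}/${Subject}/T1w

cmd=(applywarp --interp=spline
  -i "${MNI}/Results/tfMRI_SOCIAL/zstat1.nii.gz"
  -r "${T1w}/T1w_acpc_dc_restore.nii.gz"
  -w "${MNI}/xfms/standard2acpc_dc.nii.gz"
  -o "${T1w}/zstat1_native.nii.gz")

# Dry run when FSL is not available, so the sketch is safe to execute as-is.
if command -v applywarp >/dev/null 2>&1; then
  "${cmd[@]}"
else
  echo "dry run: ${cmd[*]}"
fi
```

Spline interpolation keeps the information loss relatively small compared with trilinear resampling, which is why it is suggested here.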

From: <hcp-users-boun...@humanconnectome.org> on behalf of Timothy Coalson 
<tsc...@mst.edu>
Date: Thursday, April 4, 2019 at 1:53 PM
To: Aaron C <aaroncr...@outlook.com>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] A question about the HCP pipeline

No, since the subcortical data needed to be in MNI space, we chose to use MNI 
space surfaces for each subject so that we only needed to generate a single 
motion-corrected volume timeseries.  Because the per-subject processing uses 
the individual surfaces and the same warpfield for surface and volume, 
everything lines up just as well as in native space - the main difference is 
that the warpfield can locally change the sampling density in the volume data.

Tim


On Thu, Apr 4, 2019 at 7:57 AM Aaron C 
<aaroncr...@outlook.com> wrote:
Dear HCP experts,

For the HCP pipeline, is there a version that processes the data in native 
space? Thank you.



Re: [HCP-Users] FSL 6 compatibility

2019-04-02 Thread Glasser, Matthew
Please use FSL 6.0.1.

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of "Jayasekera, Dinal" 
<dinal.jayasek...@wustl.edu>
Date: Tuesday, April 2, 2019 at 6:10 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] FSL 6 compatibility


Dear all,


Is version 4 of the HCP Pipelines functional with fsl version 6.0?


Kind regards,
Dinal Jayasekera

PhD Candidate | InSITE Fellow
Ammar Hawasli Lab
Department of Biomedical Engineering 
| Washington University in St. Louis



Re: [HCP-Users] ICA+FIX error

2019-03-31 Thread Glasser, Matthew
Did you update to the latest version of FIX from FSL’s website?

Matt.
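Before re-running, one way to narrow this down is to check which of the expected FIX outputs were actually written before hcp_fix's rename step ran. This is a hedged diagnostic sketch, not part of the pipeline: the fMRI name is taken from the quoted error message, and the working directory is an assumption.

```shell
#!/usr/bin/env bash
# Hypothetical diagnostic sketch: check whether FIX produced its cleaned
# outputs before the rename step in hcp_fix. The fMRI name below is taken
# from the error message in this thread; adapt it to your run.
fmrihp=${fmrihp:-RS_fMRI_2_hp2000}
missing=0
for f in "${fmrihp}.ica/filtered_func_data_clean.nii.gz" \
         "${fmrihp}.ica/Atlas_clean.dtseries.nii"; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done
# If outputs are missing, inspect the FIX log files inside ${fmrihp}.ica
# for the underlying error before re-running hcp_fix.
echo "$missing output(s) missing"
```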

From: <hcp-users-boun...@humanconnectome.org> on behalf of Marta Moreno 
<mmorenoort...@icloud.com>
Date: Sunday, March 31, 2019 at 6:44 PM
To: HCP Users <hcp-users@humanconnectome.org>
Subject: [HCP-Users] ICA+FIX error

Dear Experts,

I am running hcp_fix and getting the following error with just one of my 
subjects; please see below. It seems the script stopped at the end, in the 
section "Rename some files (relative to the default names coded in 
fix_3_clean)", at the first line: "$FSLDIR/bin/immv 
${fmrihp}.ica/filtered_func_data_clean ${fmrihp}_clean", which could not find 
a supported file with prefix ".ica/filtered_func_data_clean".

(…)
Sun Mar 31 19:18:12 EDT 2019 - hcp_fix - Done running FIX
Sun Mar 31 19:18:13 EDT 2019 - hcp_fix - ABORTING: Something went wrong;  
RS_fMRI_2_hp2000.ica/Atlas_clean.dtseries.nii wasn't created

How can I solve this problem?

Thanks,

Leah.





Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing taking suspiciously long while

2019-03-31 Thread Glasser, Matthew
I’d like Tim’s input on where to upload if you don’t have your own convenient 
spot.

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:58 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Yes, it happened for each of my subjects.
These are the input files for the command: temp_subject_smooth.dtseries.nii and 
temp_template.dlabel.nii.
Where should I upload them to?


On Sun, 31 Mar 2019 at 18:56, Glasser, Matthew 
<glass...@wustl.edu> wrote:
We need the input files to the command that is taking too long.  Also, does the 
problem happen reproducibly?

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:53 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

So I've connected to our server and visually examined the rfMRI_REST_AP.nii.gz 
file, the output from the fMRIVolume pipeline, and it looks great.

Where would you like me to upload the files, and which files exactly, other 
than temp_subject_smooth.dtseries.nii, temp_template.dlabel.nii, and the 
registered volume?

thanks,
Shachar

On Sun, 31 Mar 2019 at 18:37, Glasser, Matthew 
<glass...@wustl.edu> wrote:
Perhaps you should upload the various input files to the specific wb_command 
command somewhere so we can test them if this issue is reproducible (i.e. is 
not a random thing from bad RAM or something like that).

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:32 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

It's 500 time points.
I'll have a look at the volume input tomorrow when I get back to the office. I 
ran the fMRI pipelines serially.

Cheers
Shachar

On Sun, Mar 31, 2019, 18:13 Glasser, Matthew 
<glass...@wustl.edu> wrote:
How many volumes and did you look at the data being input yet?

This code didn’t change so we don’t expect any difference.

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:10 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

hey matt,
this is the call to the pipeline:

/root/hcppilelines/HCPpipelines-master/fMRISurface/GenericfMRISurfaceProcessingPipeline.sh
 --path=/root/hcppilelines/piano_hcp/working_directory --subject=101 
--fmriname=rfMRI_REST_AP --lowresmesh=32 --fmrires=2 --smoothingFWHM=4 
--grayordinatesres=2 --regname=MSMSulc

The TR is 0.75 s, and the resolution is 2 mm isotropic.
As I wrote, this went smoothly with the previous release of the pipelines and 
resulted in data that looked great.

thanks,
Shachar




On Sun, 31 Mar 2019 at 17:53, Glasser, Matthew 
<glass...@wustl.edu> wrote:
How did you call fMRISurface and what kind of data is this in terms of spatial 
and temporal resolution?  Do the input and template data line up and look 
reasonable?

Matt

From: <hcp-users-boun...@humanconnectome.org> on behalf of Shachar Gal 
<gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 8:27 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Some further information that might be useful: the output file 
rfMRI_REST_AP_temp_atlas.dtseries.nii has already been created; it's in the 
target folder, but the command won't stop "running".

On Sun, 31 Mar 2019 at 16:13, Shachar Gal 
<gal.shac...@gmail.com> wrote:
Dear experts,

While running the fMRISurface pipeline (using the latest pipeline release), I 
encountered an issue where the subcortical-processing script takes a really 
long time (it has now been running for more than 6 hours).
This is the command on which it is stuck:

${CARET7DIR}/wb_command -cifti-resample 
${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii COLUMN 
${ResultsFolder}/${NameOffMRI}_temp_template.dlabel.nii COLUMN ADAP_BARY_AREA 
CUBIC ${ResultsFolder}/${NameOffMRI}_temp_atlas.dtseries.nii -volume-predilate 
10
rm -f ${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii

the wb_command version is 1.3.2
There is no error being raised or anything; it's still running, but it's 
taking ages. The thing is, when running the older version of the pipeline (but 
the same workbench version), there was no problem.
Any ideas?

I'll report later if the command eventually completes.

thanks,
Shachar
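Since the request above is for the two input files, a quick first check is to inspect them with wb_command -file-information, which reports dimensions, mapping types, and the volume space; a mismatch between them is one possible reason a -volume-predilate step could run far longer than expected. This is a hedged sketch: the paths are assumptions based on the fMRISurface variable names quoted above.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: inspect the two inputs to the slow -cifti-resample
# call. Paths are assumptions; adapt them to your Results folder.
ResultsFolder=${ResultsFolder:-/path/to/MNINonLinear/Results/rfMRI_REST_AP}
NameOffMRI=${NameOffMRI:-rfMRI_REST_AP}

for f in "${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii" \
         "${ResultsFolder}/${NameOffMRI}_temp_template.dlabel.nii"; do
  if command -v wb_command >/dev/null 2>&1 && [ -e "$f" ]; then
    wb_command -file-information "$f"
  else
    echo "would run: wb_command -file-information $f"
  fi
done
```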




Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing taking suspiciously long while

2019-03-31 Thread Glasser, Matthew
We need the input files to the command that is taking too long.  Also, does the 
problem happen reproducibly?

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:53 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

So I've connected to our server and visually examined the rfMRI_REST_AP.nii.gz 
file, the output from the fMRIVolume pipeline, and it looks great.

Where would you like me to upload the files, and which files exactly, other 
than temp_subject_smooth.dtseries.nii, temp_template.dlabel.nii, and the 
registered volume?

thanks,
Shachar

On Sun, 31 Mar 2019 at 18:37, Glasser, Matthew 
<glass...@wustl.edu> wrote:
Perhaps you should upload the various input files to the specific wb_command 
command somewhere so we can test them if this issue is reproducible (i.e. is 
not a random thing from bad RAM or something like that).

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:32 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

It's 500 time points.
I'll have a look at the volume input tomorrow when I get back to the office. I 
ran the fMRI pipelines serially.

Cheers
Shachar

On Sun, Mar 31, 2019, 18:13 Glasser, Matthew 
<glass...@wustl.edu> wrote:
How many volumes and did you look at the data being input yet?

This code didn’t change so we don’t expect any difference.

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:10 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

hey matt,
this is the call to the pipeline:

/root/hcppilelines/HCPpipelines-master/fMRISurface/GenericfMRISurfaceProcessingPipeline.sh
 --path=/root/hcppilelines/piano_hcp/working_directory --subject=101 
--fmriname=rfMRI_REST_AP --lowresmesh=32 --fmrires=2 --smoothingFWHM=4 
--grayordinatesres=2 --regname=MSMSulc

The TR is 0.75 s, and the resolution is 2 mm isotropic.
As I wrote, this went smoothly with the previous release of the pipelines and 
resulted in data that looked great.

thanks,
Shachar




On Sun, 31 Mar 2019 at 17:53, Glasser, Matthew 
<glass...@wustl.edu> wrote:
How did you call fMRISurface and what kind of data is this in terms of spatial 
and temporal resolution?  Do the input and template data line up and look 
reasonable?

Matt

From: <hcp-users-boun...@humanconnectome.org> on behalf of Shachar Gal 
<gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 8:27 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Some further information that might be useful: the output file 
rfMRI_REST_AP_temp_atlas.dtseries.nii has already been created; it's in the 
target folder, but the command won't stop "running".

On Sun, 31 Mar 2019 at 16:13, Shachar Gal 
<gal.shac...@gmail.com> wrote:
Dear experts,

While running the fMRISurface pipeline (using the latest pipeline release), I 
encountered an issue where the subcortical-processing script takes a really 
long time (it has now been running for more than 6 hours).
This is the command on which it is stuck:

${CARET7DIR}/wb_command -cifti-resample 
${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii COLUMN 
${ResultsFolder}/${NameOffMRI}_temp_template.dlabel.nii COLUMN ADAP_BARY_AREA 
CUBIC ${ResultsFolder}/${NameOffMRI}_temp_atlas.dtseries.nii -volume-predilate 
10
rm -f ${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii

the wb_command version is 1.3.2
There is no error being raised or anything; it's still running, but it's 
taking ages. The thing is, when running the older version of the pipeline (but 
the same workbench version), there was no problem.
Any ideas?

I'll report later if the command eventually completes.

thanks,
Shachar




Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing taking suspiciously long while

2019-03-31 Thread Glasser, Matthew
Perhaps you should upload the various input files to the specific wb_command 
command somewhere so we can test them if this issue is reproducible (i.e. is 
not a random thing from bad RAM or something like that).

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:32 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

It's 500 time points.
I'll have a look at the volume input tomorrow when I get back to the office. I 
ran the fMRI pipelines serially.

Cheers
Shachar

On Sun, Mar 31, 2019, 18:13 Glasser, Matthew 
<glass...@wustl.edu> wrote:
How many volumes and did you look at the data being input yet?

This code didn’t change so we don’t expect any difference.

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 10:10 AM
To: Matt Glasser <glass...@wustl.edu>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

hey matt,
this is the call to the pipeline:

/root/hcppilelines/HCPpipelines-master/fMRISurface/GenericfMRISurfaceProcessingPipeline.sh
 --path=/root/hcppilelines/piano_hcp/working_directory --subject=101 
--fmriname=rfMRI_REST_AP --lowresmesh=32 --fmrires=2 --smoothingFWHM=4 
--grayordinatesres=2 --regname=MSMSulc

The TR is 0.75 s, and the resolution is 2 mm isotropic.
As I wrote, this went smoothly with the previous release of the pipelines and 
resulted in data that looked great.

thanks,
Shachar




On Sun, 31 Mar 2019 at 17:53, Glasser, Matthew 
<glass...@wustl.edu> wrote:
How did you call fMRISurface and what kind of data is this in terms of spatial 
and temporal resolution?  Do the input and template data line up and look 
reasonable?

Matt

From: <hcp-users-boun...@humanconnectome.org> on behalf of Shachar Gal 
<gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 8:27 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Some further information that might be useful: the output file 
rfMRI_REST_AP_temp_atlas.dtseries.nii has already been created; it's in the 
target folder, but the command won't stop "running".

On Sun, 31 Mar 2019 at 16:13, Shachar Gal 
<gal.shac...@gmail.com> wrote:
Dear experts,

While running the fMRISurface pipeline (using the latest pipeline release), I 
encountered an issue where the subcortical-processing script takes a really 
long time (it has now been running for more than 6 hours).
This is the command on which it is stuck:

${CARET7DIR}/wb_command -cifti-resample 
${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii COLUMN 
${ResultsFolder}/${NameOffMRI}_temp_template.dlabel.nii COLUMN ADAP_BARY_AREA 
CUBIC ${ResultsFolder}/${NameOffMRI}_temp_atlas.dtseries.nii -volume-predilate 
10
rm -f ${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii

the wb_command version is 1.3.2
There is no error being raised or anything; it's still running, but it's 
taking ages. The thing is, when running the older version of the pipeline (but 
the same workbench version), there was no problem.
Any ideas?

I'll report later if the command eventually completes.

thanks,
Shachar



___
HCP-Users mailing list
HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.



On Mar 31, 2019 18:13, "Glasser, Matthew" <glass...@wustl.edu> wrote:
How many volumes and did you look at the data being input yet?

This code didn’t change so we don’t expect any difference.

Matt.

From: Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 20

Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing taking suspiciously long while

2019-03-31 Thread Glasser, Matthew
How did you call fMRISurface and what kind of data is this in terms of spatial 
and temporal resolution?  Do the input and template data line up and look 
reasonable?

Matt

From: <hcp-users-boun...@humanconnectome.org> on behalf of Shachar Gal <gal.shac...@gmail.com>
Date: Sunday, March 31, 2019 at 8:27 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] wb_command -cifti-resample in sub-cortical processing 
taking suspiciously long while

Some further information that might be useful: the output file 
rfMRI_REST_AP_temp_atlas.dtseries.nii has already been created; it's in the 
target folder, but the command won't stop "running".

On Sun, 31 Mar 2019 at 16:13, Shachar Gal <gal.shac...@gmail.com> wrote:
Dear experts,

While running the fMRISurface pipeline (using the latest pipeline release), I 
encountered an issue where the subcortical-processing script takes a really 
long while (it has now been running for more than 6 hours).
This is the command on which it is stuck:

${CARET7DIR}/wb_command -cifti-resample 
${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii COLUMN 
${ResultsFolder}/${NameOffMRI}_temp_template.dlabel.nii COLUMN ADAP_BARY_AREA 
CUBIC ${ResultsFolder}/${NameOffMRI}_temp_atlas.dtseries.nii -volume-predilate 
10
rm -f ${ResultsFolder}/${NameOffMRI}_temp_subject_smooth.dtseries.nii
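One generic way to tell whether such a command has hung or is still writing its output is to watch the output file for changes; a minimal sketch (not an HCP tool — `watch_progress` is a made-up name, and it uses GNU `stat`):

```shell
# Sketch: report whether a file changed (size or mtime) over an interval,
# to help distinguish a hung process from one still writing its output.
# Uses GNU stat; the function name watch_progress is made up here.
watch_progress() {
    f=$1
    interval=${2:-60}
    before=$(stat -c '%s %Y' "$f") || return 1
    sleep "$interval"
    after=$(stat -c '%s %Y' "$f")
    if [ "$before" = "$after" ]; then
        echo "unchanged"    # file quiet: finished writing, or the process is stuck
    else
        echo "changed"      # still being written
    fi
}
```

For example, `watch_progress rfMRI_REST_AP_temp_atlas.dtseries.nii 60` while the resample runs; pairing this with `top` to see whether wb_command is still accumulating CPU time gives a fuller picture.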

The wb_command version is 1.3.2.
No error is being raised or anything. It's still running, but it's taking ages...
The thing is, when running the older version of the pipeline (but the same 
Workbench version), there was no problem.
Any ideas?

I'll report back later if the command eventually completes.

thanks,
Shachar



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users





Re: [HCP-Users] HCP-Users] Extracting mean relative motion from XNAT

2019-03-29 Thread Glasser, Matthew
I think you can use this file:

${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/Movement_RelativeRMS.txt

Matt.
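As a sketch, the mean can then be computed directly from that file, assuming it holds one numeric relative-RMS value per volume, one per line (`mean_relative_rms` is a made-up helper name):

```shell
# Sketch: average the per-volume relative RMS motion values in a
# Movement_RelativeRMS.txt file (assumed format: one number per line).
mean_relative_rms() {
    awk '{ sum += $1; n++ } END { if (n) printf "%.6f\n", sum / n }' "$1"
}

# Example call, using the path from the message above:
# mean_relative_rms "${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/Movement_RelativeRMS.txt"
```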

From: <hcp-users-boun...@humanconnectome.org> on behalf of Martina Jonette Lund <m.j.l...@medisin.uio.no>
Date: Friday, March 29, 2019 at 9:39 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] HCP-Users] Extracting mean relative motion from XNAT


Dear HCP experts,

I am working on the processed rsfMRI data that is part of the 
HCP1200_Parcellation_Timeseries_Netmats package, and I have a question 
regarding extraction of mean relative motion for these subjects.


I want to include mean relative motion as a covariate in my analysis but I am 
having some difficulties downloading this measure from the XNAT site (I can't 
seem to find the relevant path for all of the subjects). How do you recommend I 
proceed to download this for each subject's four rsfMRI sessions?


Thanks in advance for your help.


Kind regards

Martina Lund

CoE NORMENT



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users





Re: [HCP-Users] Question on fsl 6.0.1

2019-03-29 Thread Glasser, Matthew
We expect these to work together, however. These announcements were made
on the various lists over the past couple of weeks.

Matt.
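One quick way to confirm which FSL a given environment will use is to read the version file that FSL installations conventionally place under $FSLDIR; a sketch (the helper name `fsl_version` is made up):

```shell
# Sketch: print the FSL version recorded by the installer.
# Assumes the conventional $FSLDIR/etc/fslversion layout of FSL installs.
fsl_version() {
    dir=${1:-$FSLDIR}
    if [ -n "$dir" ] && [ -r "$dir/etc/fslversion" ]; then
        cat "$dir/etc/fslversion"
    else
        echo "unknown"
    fi
}
```

Running `fsl_version` before launching the pipelines makes it easy to verify that the intended FSL 6.0.1 install is the one on the path.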

On 3/29/19, 1:10 PM, "hcp-users-boun...@humanconnectome.org on behalf of
Harms, Michael" wrote:

>
>The MR-FIX releases as part of HCPpipelines v4.0.0 should work with FSL
>6.0.1.
>
>As far as MSMAll and FSL 6.0.1, to my knowledge, we haven't explicitly
>tested that combination yet.
>
>Cheers,
>-MH
>
>--
>Michael Harms, Ph.D.
>
>---
>
>Associate Professor of Psychiatry
>Washington University School of Medicine
>Department of Psychiatry, Box 8134
>660 South Euclid Ave.Tel: 314-747-6173
>St. Louis, MO  63110  Email: mha...@wustl.edu
>
>On 3/29/19, 12:30 PM, "hcp-users-boun...@humanconnectome.org on behalf of
>Marta Moreno" <mmorenoort...@icloud.com> wrote:
>
>Dear Experts,
>
>Is FSL 6.0.1 ready for MR ICA+FIX and MSMAll? I have not seen any post
>about it here yet.
>
>Thanks,
>
>Leah.
>
>Sent from my iPhone
>
>___
>HCP-Users mailing list
>HCP-Users@humanconnectome.org
>http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>
>
>___
>HCP-Users mailing list
>HCP-Users@humanconnectome.org
>http://lists.humanconnectome.org/mailman/listinfo/hcp-users




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

