[HCP-Users] MSMAll for MultiRunICAFIX

2018-10-09 Thread Sang-Young Kim
Dear Experts:

I have a question about MSMAll alignment for MultiRunICAFIX-cleaned data. 
Once I run the MultiRunICAFIX pipeline, I get output such as 
"${fMRIConcatName}_Atlas_hp2000_clean.dtseries.nii". 

But I want to align that data (e.g., 
${fMRIConcatName}_Atlas_hp2000_clean.dtseries.nii) using the MSMAll pipeline. 
Running the MSMAllPipeline script on the above data seems to finish without any 
error. However, when I run the DeDriftAndResamplePipeline script, 
it requires ${fMRIConcatName}.${Hemisphere}.native.func.gii, which is absent 
from the ${fMRIConcatName} folder.

I can generate ${fMRIConcatName}.${Hemisphere}.native.func.gii using 
fMRISurfacePipeline, but I'm wondering whether I should run fMRISurfacePipeline 
on MultiRunICAFIX-cleaned data. 

This might be a silly question, as I don't clearly understand MSMAllPipeline and 
DeDriftAndResamplePipeline. 
Is there a way to obtain MSMAll-aligned, MultiRunICAFIX-cleaned data?

Below are the scripts that I ran:

## MultiRunICAFIX 
StudyFolder="/Volumes/easystore/projects/HCP"
Subject="300"
ConcatName1="rfMRI_REST_Concat_Day1"
ConcatName2="rfMRI_REST_Concat_Day2"
ResultFolder="${StudyFolder}/${Subject}/MNINonlinear/Results"
ICAFIXscriptFolder="/Users/sang-young/projects/Pipelines_master/ICAFIX"
 
${ICAFIXscriptFolder}/hcp_fix_multi_run \
  ${ResultFolder}/rfMRI_REST1_AP/rfMRI_REST1_AP.nii.gz@${ResultFolder}/rfMRI_REST1_PA/rfMRI_REST1_PA.nii.gz@${ResultFolder}/rfMRI_REST2_AP/rfMRI_REST2_AP.nii.gz@${ResultFolder}/rfMRI_REST2_PA/rfMRI_REST2_PA.nii.gz \
  2000 \
  ${ResultFolder}/${ConcatName1}/${ConcatName1}.nii.gz \
  ${FSL_FIXDIR}/training_files/HCP_hp2000.RData

${ICAFIXscriptFolder}/hcp_fix_multi_run \
  ${ResultFolder}/rfMRI_REST3_AP/rfMRI_REST3_AP.nii.gz@${ResultFolder}/rfMRI_REST3_PA/rfMRI_REST3_PA.nii.gz@${ResultFolder}/rfMRI_REST4_AP/rfMRI_REST4_AP.nii.gz@${ResultFolder}/rfMRI_REST4_PA/rfMRI_REST4_PA.nii.gz \
  2000 \
  ${ResultFolder}/${ConcatName2}/${ConcatName2}.nii.gz \
  ${FSL_FIXDIR}/training_files/HCP_hp2000.RData
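For reference, the long @-separated input lists above can be built in a loop rather than typed by hand; a minimal sketch using the post's own paths (the loop itself is plain POSIX shell):

```shell
#!/bin/sh
# Sketch: build the @-separated input list for hcp_fix_multi_run in a loop.
# Run names and paths follow the post above.
ResultFolder="/Volumes/easystore/projects/HCP/300/MNINonlinear/Results"
InputList=""
for run in rfMRI_REST1_AP rfMRI_REST1_PA rfMRI_REST2_AP rfMRI_REST2_PA; do
    # Append with a leading @ only when the list is already non-empty
    InputList="${InputList:+${InputList}@}${ResultFolder}/${run}/${run}.nii.gz"
done
echo "$InputList"
# The Day-1 call would then be (not executed here):
#   ${ICAFIXscriptFolder}/hcp_fix_multi_run "$InputList" 2000 \
#       ${ResultFolder}/${ConcatName1}/${ConcatName1}.nii.gz \
#       ${FSL_FIXDIR}/training_files/HCP_hp2000.RData
```

This avoids copy-paste mistakes in the run names when the list of runs changes.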

##MSMAllPipeline
fMRINames="rfMRI_REST_Concat_Day1 rfMRI_REST_Concat_Day2"
OutfMRIName="rfMRI_REST"
HighPass="2000"
fMRIProcSTRING="_Atlas_hp2000_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1"

${HCPPIPEDIR}/MSMAll/MSMAllPipeline.sh \
  --path=${StudyFolder} \
  --subject=${Subject} \
  --fmri-names-list="${fMRINames}" \
  --output-fmri-name=${OutfMRIName} \
  --high-pass=${HighPass} \
  --fmri-proc-string=${fMRIProcSTRING} \
  --msm-all-templates=${MSMAllTemplates} \
  --output-registration-name=${RegName} \
  --high-res-mesh=${HighResMesh} \
  --low-res-mesh=${LowResMesh} \
  --input-registration-name=${InRegName} \
  --matlab-run-mode=${MatlabMode}

Thanks. 

Sang-Young



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Question about MSMAll pipeline output

2018-10-05 Thread Sang-Young Kim
Dear Experts:

We’re acquiring HCP-style data and running the HCP pipelines, following the 
pipeline order that Matt suggested. 
But I have a concern about the final output of the MSMAll pipeline for the 
rfMRI data set. Once we run MSMAllPipeline and DeDriftAndResamplePipeline, the 
final output is "${fMRIName}_Atlas_MSMAll.dtseries.nii". 

But I expected the output to be 
${fMRIName}_Atlas_MSMAll_hp2000_clean.dtseries.nii, as in the HCP_S1200 release 
data. I can't find what went wrong. Below are the variables that I set to run 
MSMAllPipeline as well as DeDriftAndResamplePipeline. 

***
1. MSMAllPipeline
StudyFolder="/Volumes/easystore/projects/HCP"
Subject="264"
fMRINames="rfMRI_REST1_AP@rfMRI_REST1_PA@rfMRI_REST2_AP@rfMRI_REST2_PA@rfMRI_REST3_AP@rfMRI_REST3_PA@rfMRI_REST4_AP@rfMRI_REST4_PA"
OutfMRIName="rfMRI_REST"
HighPass="2000"
fMRIProcSTRING="_Atlas_hp2000_clean"
MSMAllTemplates="${HCPPIPEDIR}/global/templates/MSMAll"
RegName="MSMAll_InitalReg"
HighResMesh="164"
LowResMesh="32"
InRegName="MSMSulc"
MatlabMode="1"

${HCPPIPEDIR}/MSMAll/MSMAllPipeline.sh \
  --path=${StudyFolder} \
  --subject=${Subject} \
  --fmri-names-list=${fMRINames} \
  --output-fmri-name=${OutfMRIName} \
  --high-pass=${HighPass} \
  --fmri-proc-string=${fMRIProcSTRING} \
  --msm-all-templates=${MSMAllTemplates} \
  --output-registration-name=${RegName} \
  --high-res-mesh=${HighResMesh} \
  --low-res-mesh=${LowResMesh} \
  --input-registration-name=${InRegName} \
  --matlab-run-mode=${MatlabMode}

2. DeDriftAndResamplePipeline
StudyFolder="/Volumes/easystore/projects/HCP"
Subject="264"
HighResMesh="164"
LowResMesh="32"
RegName="MSMAll_InitalReg_2_d40_WRN"
DeDriftRegFiles="${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.L.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii@${HCPPIPEDIR}/global/templates/MSMAll/DeDriftingGroup.R.sphere.DeDriftMSMAll.164k_fs_LR.surf.gii"
ConcatRegName="MSMAll"
Maps="sulc curvature corrThickness thickness"
MyelinMaps="MyelinMap SmoothedMyelinMap"
rfMRINames="rfMRI_REST1_AP rfMRI_REST1_PA rfMRI_REST2_AP rfMRI_REST2_PA rfMRI_REST3_AP rfMRI_REST3_PA rfMRI_REST4_AP rfMRI_REST4_PA"
tfMRINames="tfMRI_WM_AP tfMRI_WM_PA tfMRI_MOTOR_AP tfMRI_MOTOR_PA"
SmoothingFWHM="2"
HighPass="2000"
MatlabMode="1"

${HCPPIPEDIR}/DeDriftAndResample/DeDriftAndResamplePipeline.sh \
  --path=${StudyFolder} \
  --subject=${Subject} \
  --high-res-mesh=${HighResMesh} \
  --low-res-meshes=${LowResMesh} \
  --registration-name=${RegName} \
  --dedrift-reg-files=${DeDriftRegFiles} \
  --concat-reg-name=${ConcatRegName} \
  --maps="${Maps}" \
  --myelin-maps="${MyelinMaps}" \
  --rfmri-names="${rfMRINames}" \
  --tfmri-names="${tfMRINames}" \
  --smoothing-fwhm=${SmoothingFWHM} \
  --highpass=${HighPass} \
  --matlab-run-mode=${MatlabMode}
*

In the HCP_S1200 release data, I can see two files, named 
${fMRIName}_Atlas_MSMAll.dtseries.nii and 
${fMRIName}_Atlas_MSMAll_hp2000_clean.dtseries.nii. That's why I'm confused 
by our output. 

Is the output "${fMRIName}_Atlas_MSMAll.dtseries.nii" generated by the above 
pipelines also an MSMAll-aligned and cleaned CIFTI file?

Could anyone comment on what is wrong with my processing pipeline?
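One quick way to see what a given dtseries actually contains is to dump its header. A hedged sketch, assuming Connectome Workbench is on PATH (the file path is illustrative):

```shell
#!/bin/sh
# Sketch: inspect a dtseries header with Connectome Workbench.
# wb_command -file-information is a real Workbench subcommand; the file path
# below is illustrative, so adjust it to your own output.
f="/Volumes/easystore/projects/HCP/264/MNINonLinear/Results/rfMRI_REST1_AP/rfMRI_REST1_AP_Atlas_MSMAll.dtseries.nii"
if command -v wb_command >/dev/null 2>&1 && [ -f "$f" ]; then
    wb_command -file-information "$f"
else
    echo "wb_command or input file not found; skipping check"
fi
```

The printed map names and provenance metadata usually indicate which pipeline steps produced the file, which can help distinguish a cleaned from an uncleaned dtseries.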

Thanks. 

Sang-Young





Re: [HCP-Users] Question about Multi-Modal Surface Mapping

2018-10-01 Thread Sang-Young Kim
Thanks, Matt!

I have downloaded resting-state data from the HCP_S1200 release, which has been 
preprocessed with the fMRIVolume and fMRISurface pipelines but not ICA+FIX cleaned.
There is a file "rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii" in the folder 
($StudyFolder/$Subject/MNINonLinear/Results/rfMRI_REST1_LR). 

1. I just want to confirm that this data has not been processed with ICA+FIX. 
I'm confused, because the MSMAll pipeline is the last step of the resting-state 
fMRI pipeline. 

2. In addition, I know that resting-state MSM alignment is initialized using 
MSM-sulc and followed by MSM-myelin. 
If the resting-state data has not been cleaned using ICA+FIX, how can the 
individual ICA spatial maps of noise-contaminated data be accurately registered 
to the group ICA template? I mean, in case the quality of an individual's ICA 
spatial maps is bad, the alignment should have a large error, right? Are there 
any methods to handle this kind of data?

Any comments would be greatly appreciated. 

Best, 

Sang-Young



> On Sep 29, 2018, at 2:48 PM, Glasser, Matthew  wrote:
> 
> You want to run MSMAll on sICA+FIX cleaned data for sure.  As far as your 
> question about the MSMAll pipeline inputs, I would need to see what you have.
> 
> Matt.
> 

Re: [HCP-Users] Question about Multi-Modal Surface Mapping

2018-09-28 Thread Sang-Young Kim
Thanks for the prompt response, Tim!

Yes, the volume files are individual results, and this data is not part of the 
HCP S1200 release but our own data. 
That's why the "SubjID=300" line seemed unclear to you. 

I have one more quick question. If I want MSMAll-registered data for 
"$StudyFolder/$Subject/MNINonLinear/Results/rfMRI_REST1_AP/rfMRI_REST1_AP_Atlas.dtseries.nii"
 (which is not FIX-cleaned data), do I have to run the MSMAll pipeline with a 
changed input variable name, or is there a simple command to get it via 
volume-to-surface mapping?

Thanks again. 

Sang-Young  

> On Sep 28, 2018, at 7:42 PM, Timothy Coalson  wrote:
> 
> If the volume files are per-individual results, then mapping them to that 
> individual's own MSMAll surfaces will result in them being accurately 
> registered through multimodal surface matching, yes.  I am not clear on the 
> "SubjID=300" line, though.
> 
> To be clear, group average "subjects" do not count for this process, and any 
> group MNI space volume results for all of cortex can't be accurately mapped 
> to surfaces (the accuracy has already been lost due to volume-based group 
> averaging).
> 
> However, the typical HCP process is to put the unprocessed (well, 
> preprocessed) individual cortical data onto surfaces via the individual's own 
> surfaces, and then compute results in cifti space, not MNI.  We typically use 
> the native mesh surfaces to achieve the most fidelity to the original 
> segmentation, and then resample afterward, but the 32k surfaces are probably 
> good enough.
> 
> Tim




[HCP-Users] Question about Multi-Modal Surface Mapping

2018-09-28 Thread Sang-Young Kim
Dear Experts:

I have certain kinds of volume maps in MNI space that I would like to map to 
the surface. 
I know how to use wb_command -volume-to-surface-mapping. 
My question: if I use the following command, is the result also a multi-modal 
surface-matched map?

StudyFolder="/Volumes/easystore/projects/HCP"
SubjID="300"
AtlasFolder="$StudyFolder/$SubjID/MNINonLinear/fsaverage_LR32k"
ResultFolder="$StudyFolder/$SubjID/MNINonLinear/Results/rfMRI_REST1_AP"
MapName="rfMRI_REST1_AP_map_tstat.nii.gz"

# Left hemi
/Applications/workbench/bin_macosx64/wb_command -volume-to-surface-mapping \
$ResultFolder/$MapName \
$AtlasFolder/${SubjID}.L.midthickness_MSMAll.32k_fs_LR.surf.gii \
$ResultFolder/L.card_coupling_tstat.32k_fs_LR.func.gii \
-ribbon-constrained $AtlasFolder/${SubjID}.L.white_MSMAll.32k_fs_LR.surf.gii \
$AtlasFolder/${SubjID}.L.pial_MSMAll.32k_fs_LR.surf.gii

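As a possible follow-up, once both hemispheres have been mapped, the two metric files can be combined into a single CIFTI dscalar for viewing. A hedged sketch assuming Connectome Workbench is on PATH; the right-hemisphere file name is hypothetical (produced by the analogous R-hemisphere command):

```shell
#!/bin/sh
# Sketch: merge left/right func.gii outputs into one dscalar with
# wb_command -cifti-create-dense-scalar. File names follow the post; the
# R-hemisphere metric is hypothetical and must be produced first.
ResultFolder="/Volumes/easystore/projects/HCP/300/MNINonLinear/Results/rfMRI_REST1_AP"
L="$ResultFolder/L.card_coupling_tstat.32k_fs_LR.func.gii"
R="$ResultFolder/R.card_coupling_tstat.32k_fs_LR.func.gii"
out="$ResultFolder/card_coupling_tstat.32k_fs_LR.dscalar.nii"
if command -v wb_command >/dev/null 2>&1 && [ -f "$L" ] && [ -f "$R" ]; then
    wb_command -cifti-create-dense-scalar "$out" \
        -left-metric "$L" -right-metric "$R"
else
    echo "wb_command or metric files not found; skipping"
fi
```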

If not, could anyone please let me know how to do that?

Thanks in advance. 

Best, 

Sang-Young 


Re: [HCP-Users] Where to find warp field from standard to original resting state fMRI data

2018-07-24 Thread Sang-Young Kim
Hi, Matt:

Thanks for your prompt response. 

Yes, I can use 
${StudyFolder}/${Subject}/MNINonLinear/xfms/standard2acpc_dc.nii.gz to warp the 
data to structural space. 
But the problem is that I want to keep the same number of slices as the original 
fMRI data (e.g., 72 slices).

Once I warp the rfMRI data to structural space using a 2 mm reference volume, 
the number of slices in the output is 91. 

Is there a way to solve this issue? Also, the warp field 
standard2rfMRI_REST1_LR.nii.gz is not released, right? The reason being, as you 
mentioned, that there is normally no need to go to the original fMRI space?
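The warp step under discussion can be sketched with FSL's applywarp. This is illustrative only (paths and the subject ID are assumptions); note that the output grid, including slice count, follows the --ref image, not the original EPI:

```shell
#!/bin/sh
# Hedged sketch: resample an MNI-space result into the subject's structural
# (acpc_dc) space with FSL applywarp. All paths and the subject ID are
# illustrative assumptions, not HCP-mandated names.
Study="/Volumes/easystore/projects/HCP"
Subj="100307"   # hypothetical subject ID
T1="$Study/$Subj/T1w/T1w_acpc_dc_restore.nii.gz"
warp="$Study/$Subj/MNINonLinear/xfms/standard2acpc_dc.nii.gz"
in="$Study/$Subj/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_MNI.nii.gz"  # hypothetical MNI-space input
if command -v applywarp >/dev/null 2>&1 && [ -f "$in" ]; then
    # Make a 2 mm reference in acpc space once (output name is hypothetical)
    flirt -in "$T1" -ref "$T1" -applyisoxfm 2 \
        -out "$Study/$Subj/T1w/T1w_acpc_2mm.nii.gz"
    applywarp --interp=spline \
        --in="$in" \
        --ref="$Study/$Subj/T1w/T1w_acpc_2mm.nii.gz" \
        --warp="$warp" \
        --out="$Study/$Subj/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_acpc.nii.gz"
else
    echo "FSL or input volume not found; skipping"
fi
```

Because applywarp copies the grid of --ref, keeping 72 slices would require a reference image that itself has 72 slices in the target space.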

Any comments would be appreciated. 

Best, 

Sang-Young
 


> On Jul 24, 2018, at 3:03 PM, Glasser, Matthew  wrote:
> 
> There is no need to go to original fMRI space in most cases.  If you want
> data in the native physical space of the subject you can use
> ${StudyFolder}/${Subject}/MNINonLinear/xfms/standard2acpc_dc.nii.gz to
> warp the data to structural space (and can use a 2mm reference volume in
> that space if you wish to keep it at the same resolution).
> 
> Peace,
> 
> Matt.
> 




[HCP-Users] Where to find warp field from standard to original resting state fMRI data

2018-07-24 Thread Sang-Young Kim
Dear HCP experts:

I have downloaded HCP data from the "S1200 Extensively Processed fMRI Data" 
release. The downloaded data were preprocessed for resting-state fMRI using the 
fMRIVolume and fMRISurface pipelines. 

But I cannot find the warp field to transform MNI-space data to the original 
fMRI space, which is normally located in ${Subject}/MNINonLinear/xfms after the 
fMRIVolume pipeline. 

Could you please let me know where to find it? 

Thanks. 

Sang-Young Kim, Ph.D.

Postdoctoral Fellow
Department of Radiology
University of Pittsburgh


[HCP-Users] tfMRI volume-based analysis

2017-12-12 Thread Sang-Young Kim
Dear experts:

I’m trying to do a tfMRI volume-based analysis and have tried to run the script 
"TaskfMRIAnalysisBatch.sh". 
When I set the variable VolumeBasedProcessing="NO" in the script, I was able to 
run the tfMRI analysis in grayordinate space without any error. But when I 
changed the variable to "YES", it failed with the errors below:

***
Warning: Number of voxels = 0. Spatial smoothing of autocorrelation estimates 
is not carried out
Warning: Number of voxels = 0. Spatial smoothing of autocorrelation estimates 
is not carried out

While running:
/Applications/workbench/bin_macosx64/../macosx64_apps/wb_command.app/Contents/MacOS/wb_command
 -cifti-convert-to-scalar 
/Volumes/easystore/projects/HCP/209/MNINonLinear/Results/tfMRI_MOTOR/tfMRI_MOTOR_hp200_s2_level2.feat/StandardVolumeStats/cope1.feat/zstat1.dtseries.nii
 ROW 
/Volumes/easystore/projects/HCP/209/MNINonLinear/Results/tfMRI_MOTOR/tfMRI_MOTOR_hp200_s2_level2.feat/209_tfMRI_MOTOR_level2_CUE_hp200_s2.dscalar.nii
 -name-file 
/Volumes/easystore/projects/HCP/209/MNINonLinear/Results/tfMRI_MOTOR/tfMRI_MOTOR_hp200_s2_level2.feat/Contrasttemp.txt

ERROR: failed to open file 
'/Volumes/easystore/projects/HCP/209/MNINonLinear/Results/tfMRI_MOTOR/tfMRI_MOTOR_hp200_s2_level2.feat/StandardVolumeStats/cope1.feat/zstat1.dtseries.nii',
 file does not exist, or folder permissions prevent seeing it


I have tried to figure out why this error occurs, but I couldn't find the cause. 
Any insights would be greatly appreciated. 

Thanks. 

Sang-Young


[HCP-Users] tfMRI: Converting EPrime text file to EVs

2017-11-14 Thread Sang-Young Kim
Dear Experts:

I’m new to analyzing task-based fMRI data. We have acquired tfMRI data with a 
stimulus paradigm similar to the HCP protocol,
and we have an E-Prime text file with the stimulus information. But I’m 
struggling to make EV files from the E-Prime text file. 

Is there a script available for that purpose? Or could someone kindly provide 
an example script for converting an E-Prime text file to EV files (e.g., 
${Subject}/unprocessed/${tfMRIName}/LINKED_DATA/EPRIME/EVs)?
If not, please give me some advice on how to do that.
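No official converter is mentioned in this thread. Purely as an illustration, here is a minimal awk sketch under strong assumptions: a tab-delimited export with a hypothetical column layout ($1 = condition name, $2 = onset in ms, $3 = duration in ms). Real E-Prime logs differ (often UTF-16, with onsets relative to a sync pulse), so inspect your own export and adjust first:

```shell
#!/bin/sh
# Illustration only: write FSL 3-column EV files (onset, duration, weight)
# from a tab-delimited table. Column layout and the file name are
# HYPOTHETICAL assumptions, not the actual E-Prime format.
eprime_txt="eprime_export.txt"   # hypothetical pre-converted export
evdir="EVs"
if [ -f "$eprime_txt" ]; then
    mkdir -p "$evdir"
    # Skip the header row; convert ms to s; one EV file per condition name
    awk -F'\t' -v dir="$evdir" 'NR > 1 && $1 != "" {
        printf "%.3f\t%.3f\t1\n", $2 / 1000.0, $3 / 1000.0 >> (dir "/" $1 ".txt")
    }' "$eprime_txt"
else
    echo "no E-Prime export found; skipping"
fi
```

Each resulting EVs/<condition>.txt then follows FSL's three-column format (onset in seconds, duration in seconds, weight).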

Thanks in advance. 

Sang-Young Kim, Ph.D

Postdoctoral Fellow
University of Pittsburgh




Re: [HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-11 Thread Sang-Young Kim
Dear Matt:

I turned off the array size limit in MATLAB, then ran hcp_fix_multi_run again. 
I cannot see any error in the .fix.log file, but the cleaned data was not 
generated. One thing I noticed in the terminal is that a process gets killed 
automatically. 
Please see below:

FIX Applying cleanup using cleanup file: 
rfMRI_REST_All_hp2000.ica/fix4melview_HCP_hp2000_thr10.txt and motion cleanup 
set to 1
sh: line 1:  7063 Killed: 9   
/Applications/MATLAB_R2015b.app/bin/matlab -nojvm -nodisplay -nodesktop 
-nosplash -r "restoredefaultpath; addpath('/usr/local/fsl/etc/matlab'); 
addpath('/Users/sang-young/projects/Pipelines/global/matlab'); 
addpath('/usr/local/fix'); fix_3_clean('.fix',0,1,-1)" >> .fix.log 2>&1
Start=1 Stop=420

Do you have any idea why this happens and how to fix it?

Many thanks. 

Sang-Young

 


> On Oct 9, 2017, at 6:19 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> I think you will need to go in the matlab preferences and fix that.  I don’t 
> recall exactly where it is, but it should be easy to find on google.  
> Basically you need to turn off the array size limit.  Alternatively you need 
> to run the job on a machine with more RAM.
> 
> Peace,
> 
> Matt.
> 

Re: [HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-09 Thread Sang-Young Kim
Dear Matt:

I ran multi-run ICA+FIX according to your guideline, but at the last stage of 
cleaning the data (e.g., fix_3_clean) I got an error message in the .fix.log 
file. Please see below:

Could you please help me handle this issue?

Thanks. 

Sang-Young

TR =

0.8000

Elapsed time is 24.545680 seconds.
Elapsed time is 2.739398 seconds.
Error using '
Requested 902629x3360 (22.6GB) array exceeds maximum array size preference.
Creation of arrays greater than this limit may take a long time and cause
MATLAB to become unresponsive. See array size limit
or preference panel for more information.

Error in fix_3_clean (line 67)
  cts=reshape(cts,ctsX*ctsY*ctsZ,ctsT)';





> On Oct 6, 2017, at 4:40 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> No multi-run ICA+FIX handles the concatenation for you so you specify the 
> separate runs.  That is the whole point.  Have a look at this bioRvix paper 
> in the methods about multi-run ICA+FIX so you understand why it is 
> implemented the way it is:
> 
> https://www.biorxiv.org/content/early/2017/09/27/193862 
> <https://www.biorxiv.org/content/early/2017/09/27/193862> 
> 
> Here is an example call to multi-run ICA+FIX:
> 
> ${FSL_FIXDIR}/hcp_fix_multi_run \
> ${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/${fMRIName}.nii.gz@${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/${fMRIName}.nii.gz@${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/${fMRIName}.nii.gz@${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/${fMRIName}.nii.gz \
> 2000 \
> ${StudyFolder}/${Subject}/MNINonLinear/Results/${ConcatName}/${ConcatName}.nii.gz
> 
> Peace,
> 
> Matt.
> 

Re: [HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-06 Thread Sang-Young Kim
Thanks, Matt!

I have one more follow-up question. In order to run the script 
"hcp_fix_multi_run", we have to concatenate all the data temporally, right?
I combined the data using the following command: fslmerge -t  
 …. 

Then, I ran the hcp_fix_multi_run script as follows: hcp_fix_multi_run 
 2000 rfMRI_REST_Concat 
${FSL_FIXDIR}/training_files/HCP_hp2000.RData
I also tried to input the data as a list file, but the script failed with an 
error. It seems to accept only one 4D rfMRI data set as input. 
That's why I concatenated each run of the rfMRI data set. 

Could you please confirm whether the above procedures are correct or not?

Thanks again. 

Sang-Young
 

> On Oct 6, 2017, at 3:57 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> I would not do #2 as you need to do some preprocessing prior to running 
> ICA+FIX when concatenating across runs and this is all that the multi-run 
> ICA+FIX pipeline does differently from regular ICA+FIX.
> 
> I’ll let Steve answer that other question.
> 
> Matt.
> 
> From: Sang-Young Kim <sykim...@gmail.com>
> Date: Friday, October 6, 2017 at 11:06 AM
> To: Matt Glasser <glass...@wustl.edu>, Stephen Smith <st...@fmrib.ox.ac.uk>
> Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
> Subject: Re: [HCP-Users] Combining rfMRI data for different phase encoding 
> directions
> 
> Hi, Matt and Stephen:
> 
> Thanks for your responses. So I will try the three options below and see which 
> one works better. 
> 
> 1. ICA+FIX on each 5 min run separately 
> 2. Concatenate each pair of scans from each session and then ICA+FIX on each 
> session
> 3. Use multi-run ICA+FIX to combine across runs
> 
> I have another simple question. In the paper published in NeuroImage (Smith 
> et al., 2013, rfMRI in the HCP), the Figure 9 shows functional connectivity 
> in the default mode network; one is from 15 min run from single subject, 
> another one is from 4*15 min runs from single subject, and the other one is 
> from all subjects concatenated. 
> 
> How did you process the data to present those figures (second and third one)?
> The third one should be processed with group ICA, right? 
> What about the second one? This is processed with method 3 on above list?
> 
> Thanks. 
> 
> Sang-Young
>   
> 
>> On Oct 6, 2017, at 5:40 AM, Glasser, Matthew <glass...@wustl.edu 
>> <mailto:glass...@wustl.edu>> wrote:
>> 
>> There is a beta version of a multi-run ICA+FIX pipeline available in the HCP 
>> Pipeline’s repository.  For 5 minute runs, I would expect combining across 
>> runs to be best.  We haven’t tested combining across sessions yet, so you 
>> would have to check that that was working okay if you wanted to try that.
>> 
>> Peace,
>> 
>> Matt.
>> 
>> From: <hcp-users-boun...@humanconnectome.org 
>> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of Stephen Smith 
>> <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>>
>> Date: Friday, October 6, 2017 at 1:49 AM
>> To: Sang-Young Kim <sykim...@gmail.com <mailto:sykim...@gmail.com>>
>> Cc: "hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>" 
>> <hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>>
>> Subject: Re: [HCP-Users] Combining rfMRI data for different phase encoding 
>> directions
>> 
>> Hi - I think your main two choices are whether to run FIX on each 5min run 
>> separately, or to preprocess and concatenate each pair of scans from each 
>> session and run FIX for each of the 4 paired datasets.  You could try FIX 
>> both ways on a few subjects and decide which is working better.
>> 
>> Cheers.
>> 
>> 
>> 
>>> On 5 Oct 2017, at 22:53, Sang-Young Kim <sykim...@gmail.com 
>>> <mailto:sykim...@gmail.com>> wrote:
>>> 
>>> Dear Experts:
>>> 
>>> We have acquired rfMRI dataset with A-P and P-A phase encoding direction 
>>> and the data was acquired in eight 5-minute runs split across four imaging 
>>> sessions. We have processed the data using HCP pipelines (e.g., 
>>> PreFreeSurfer, FreeSurfer, PostFreeSurfer, fMRIVolume, fMRISurface and 
>>> ICA+FIX). So we have results for each run of rfMRI data. 
>>> 
>>> I’m just curious about what is recommended way to combine each run of data 
>>> 

Re: [HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-06 Thread Sang-Young Kim
Hi, Matt and Stephen:

Thanks for your responses. So I will try the three options below to see which 
one works better. 

1. ICA+FIX on each 5 min run separately 
2. Concatenate each pair of scans from each session and then ICA+FIX on each 
session
3. Use multi-run ICA+FIX to combine across runs

I have another simple question. In the paper published in NeuroImage (Smith et 
al., 2013, rfMRI in the HCP), Figure 9 shows functional connectivity in the 
default mode network: one map is from a single 15-minute run from one subject, 
another is from 4x15-minute runs from one subject, and the third is from all 
subjects concatenated. 

How did you process the data to produce those figures (the second and third ones)?
The third one should be processed with group ICA, right? 
And the second one: was it processed with method 3 in the list above?

Thanks. 

Sang-Young
  

> On Oct 6, 2017, at 5:40 AM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> There is a beta version of a multi-run ICA+FIX pipeline available in the HCP 
> Pipeline’s repository.  For 5 minute runs, I would expect combining across 
> runs to be best.  We haven’t tested combining across sessions yet, so you 
> would have to check that that  was working okay if you wanted to try that.
> 
> Peace,
> 
> Matt.
> 
> From: <hcp-users-boun...@humanconnectome.org 
> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of Stephen Smith 
> <st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>>
> Date: Friday, October 6, 2017 at 1:49 AM
> To: Sang-Young Kim <sykim...@gmail.com <mailto:sykim...@gmail.com>>
> Cc: "hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>" 
> <hcp-users@humanconnectome.org <mailto:hcp-users@humanconnectome.org>>
> Subject: Re: [HCP-Users] Combining rfMRI data for different phase encoding 
> directions
> 
> Hi - I think your main two choices are whether to run FIX on each 5min run 
> separately, or to preprocess and concatenate each pair of scans from each 
> session and run FIX for each of the 4 paired datasets.  You could try FIX 
> both ways on a few subjects and decide which is working better.
> 
> Cheers.
> 
> 
> 
>> On 5 Oct 2017, at 22:53, Sang-Young Kim <sykim...@gmail.com 
>> <mailto:sykim...@gmail.com>> wrote:
>> 
>> Dear Experts:
>> 
>> We have acquired rfMRI dataset with A-P and P-A phase encoding direction and 
>> the data was acquired in eight 5-minute runs split across four imaging 
>> sessions. We have processed the data using HCP pipelines (e.g., 
>> PreFreeSurfer, FreeSurfer, PostFreeSurfer, fMRIVolume, fMRISurface and 
>> ICA+FIX). So we have results for each run of rfMRI data. 
>> 
>> I’m just curious about what is recommended way to combine each run of data 
>> (e.g., rfMRI_REST1_AP, rfMRI_REST1_PA, …, rfMRI_REST4_AP, rfMRI_REST4_PA). 
>> 
>> Can we just temporally concatenate each run of data before running ICA+FIX?
>> Or can we do group ICA using each data processed with ICA+FIX?
>> What is the optimal way to do combining analysis across each run? 
>> 
>> Any insights would be greatly appreciated. 
>> 
>> Thanks. 
>> 
>> Sang-Young Kim
>> ***
>> Postdoctoral Research Fellow
>> Department of Radiology at University of Pittsburgh
>> email: sykim...@gmail.com <mailto:sykim...@gmail.com>
>> ***  
>> 
>> 
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>
> 
> 
> ---
> Stephen M. Smith, Professor of Biomedical Engineering
> Head of Analysis,  Oxford University FMRIB Centre
> 
> FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
> +44 (0) 1865 222726  (fax 222717)
> st...@fmrib.ox.ac.uk <mailto:st...@fmrib.ox.ac.uk>
> http://www.fmrib.ox.ac.uk/~steve <http://www.fmrib.ox.ac.uk/~steve>
> ---
> 
> Stop the cultural destruction of Tibet <http://smithinks.net/>
> 
> 
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Combining rfMRI data for different phase encoding directions

2017-10-05 Thread Sang-Young Kim
Dear Experts:

We have acquired an rfMRI dataset with A-P and P-A phase encoding directions; 
the data were acquired in eight 5-minute runs split across four imaging 
sessions. We have processed the data using the HCP pipelines (e.g., PreFreeSurfer, 
FreeSurfer, PostFreeSurfer, fMRIVolume, fMRISurface, and ICA+FIX), so we have 
results for each run of rfMRI data. 

I’m just curious what the recommended way is to combine the individual runs 
(e.g., rfMRI_REST1_AP, rfMRI_REST1_PA, …, rfMRI_REST4_AP, rfMRI_REST4_PA). 

Can we just temporally concatenate the runs before running ICA+FIX?
Or should we do a group ICA on the runs after each has been processed with ICA+FIX?
What is the optimal way to combine the analysis across runs? 

Any insights would be greatly appreciated. 

Thanks. 

Sang-Young Kim
***
Postdoctoral Research Fellow
Department of Radiology at University of Pittsburgh
email: sykim...@gmail.com
***  




Re: [HCP-Users] CIFTI files do not contain volume data

2017-09-19 Thread Sang-Young Kim
Dear Matt:

Thanks for your reply. Now I can run PALM for testing myelin map difference 
between groups. 

Best regards,

Sang-Young
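For anyone who finds this thread later: the working separation is simply the original command with the -volume-all output dropped, since the myelin dscalar maps only to the two cortical surfaces. A sketch that assembles and prints the command string (file names follow the original post; wb_command itself is not invoked here):

```shell
# Surface-only separation of a dscalar that contains no volume models
cmd="wb_command -cifti-separate data.dscalar.nii COLUMN"
cmd="$cmd -metric CORTEX_LEFT data_L.func.gii"
cmd="$cmd -metric CORTEX_RIGHT data_R.func.gii"
echo "$cmd"
```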


> On Sep 18, 2017, at 4:22 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> Myelin maps are only for the cortex so you can omit the -volume-all 
> data_sub.nii portion of the command.
> 
> Peace,
> 
> Matt.
> 
> From: <hcp-users-boun...@humanconnectome.org 
> <mailto:hcp-users-boun...@humanconnectome.org>> on behalf of Sang-Young Kim 
> <sykim...@gmail.com <mailto:sykim...@gmail.com>>
> Date: Monday, September 18, 2017 at 3:13 PM
> To: "HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>" 
> <HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>>
> Subject: [HCP-Users] CIFTI files do not contain volume data
> 
> Dear HCP experts:
> 
> I’m trying to compare myelin map between two groups using FSL Palm. I 
> generated each subject’s myelin map using the HCP pipelines scripts (e.g., 
> PreFreeSurfer, FreeSurfer, and PostFreeSurfer pipeline). Before running FSL 
> Palm, I have merged each subject myelin map using following command: 
> wb_command -cifti-merge. 
> 
> Then, to separate a CIFTI file into GIFTI and NIFTI, I ran the following 
> command:
> wb_command -cifti-separate data.dscalar.nii COLUMN -volume-all data_sub.nii 
> -metric CORTEX_LEFT data_L.func.gii -metric CORTEX_RIGHT data_R.func.gii
> 
> When I run above command, I get error message as below:
> ERROR: specified file and direction does not contain any volume data
> 
> So I checked the information of both merged myelin map data and each subject 
> data using wb_command -file-information. 
> I found that the CIFTI file really does not contain any volume data (Please 
> see below).
> 
> Name: data.dscalar.nii
> Type: Connectivity - Dense Scalar
> Structure:CortexLeft CortexRight 
> Data Size:4.28 Megabytes
> Maps to Surface:  true
> Maps to Volume:   false
> Maps with LabelTable: false
> Maps with Palette:true
> All Map Palettes Equal:   true
> Map Interval Units:   NIFTI_UNITS_UNKNOWN
> Number of Maps:   18
> Number of Rows:   59412
> Number of Columns:18
> Volume Dim[0]:0
> Volume Dim[1]:0
> Volume Dim[2]:0
> Palette Type: Map (Unique for each map)
> CIFTI Dim[0]: 18
> CIFTI Dim[1]: 59412
> ALONG_ROW map type:   SCALARS
> ALONG_COLUMN map type:BRAIN_MODELS
> Has Volume Data:  false
> CortexLeft:   29696 out of 32492 vertices
> CortexRight:  29716 out of 32492 vertices
> 
> Could anyone please explain what’s wrong with data processing?
> Since I didn't see any error messages when I ran the HCP pipeline, I have no 
> idea why CIFTI file does not contain volume data. 
> 
> Thanks in advance. 
> 
> Sang-Young Kim
> 
> Postdoctoral Research Fellow
> Department of Radiology, University of Pittsburgh 
>   
> 
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>



[HCP-Users] CIFTI files do not contain volume data

2017-09-18 Thread Sang-Young Kim
Dear HCP experts:

I’m trying to compare myelin maps between two groups using FSL PALM. I generated 
each subject’s myelin map using the HCP pipeline scripts (e.g., the PreFreeSurfer, 
FreeSurfer, and PostFreeSurfer pipelines). Before running PALM, I merged the 
individual subjects' myelin maps using the following command: wb_command 
-cifti-merge. 

Then, to separate a CIFTI file into GIFTI and NIFTI, I ran the following 
command:
wb_command -cifti-separate data.dscalar.nii COLUMN -volume-all data_sub.nii 
-metric CORTEX_LEFT data_L.func.gii -metric CORTEX_RIGHT data_R.func.gii

When I run the above command, I get the error message below:
ERROR: specified file and direction does not contain any volume data

So I checked the file information of both the merged myelin map and each 
subject's data using wb_command -file-information, and found that the CIFTI 
file really does not contain any volume data (please see below).

Name:                     data.dscalar.nii
Type:                     Connectivity - Dense Scalar
Structure:                CortexLeft CortexRight
Data Size:                4.28 Megabytes
Maps to Surface:          true
Maps to Volume:           false
Maps with LabelTable:     false
Maps with Palette:        true
All Map Palettes Equal:   true
Map Interval Units:       NIFTI_UNITS_UNKNOWN
Number of Maps:           18
Number of Rows:           59412
Number of Columns:        18
Volume Dim[0]:            0
Volume Dim[1]:            0
Volume Dim[2]:            0
Palette Type:             Map (Unique for each map)
CIFTI Dim[0]:             18
CIFTI Dim[1]:             59412
ALONG_ROW map type:       SCALARS
ALONG_COLUMN map type:    BRAIN_MODELS
Has Volume Data:          false
CortexLeft:               29696 out of 32492 vertices
CortexRight:              29716 out of 32492 vertices

Could anyone please explain what’s wrong with the data processing?
Since I didn't see any error messages when I ran the HCP pipelines, I have no 
idea why the CIFTI file does not contain volume data. 

Thanks in advance. 

Sang-Young Kim

Postdoctoral Research Fellow
Department of Radiology, University of Pittsburgh 
  







Re: [HCP-Users] FIX error in fix_3_clean.m

2017-09-12 Thread Sang-Young Kim
Hello, Keith:

After making the change in the call_matlab.sh script in the FIX installation 
folder, FIX now works without the error. 
I hope anyone who runs into the same issue sees this post. 

I really appreciate your help. 

Best, 

Sang-Young
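The change itself is a one-word prefix on the ML_PATHS line (around line 307 of call_matlab.sh, per Keith's reply), presumably so that MATLAB's default path is restored before FIX adds its own and a conflicting GIFTI installation cannot shadow it. Sketched here as a bash string substitution that only prints the edited line:

```shell
# Original ML_PATHS line (dollar signs escaped so they print literally)
line="ML_PATHS=\"addpath('\${FSL_MATLAB_PATH}'); addpath('\${FSL_FIX_CIFTIRW}');\""

# Prefix restoredefaultpath before the first addpath (bash ${var/pat/repl})
fixed="${line/addpath/restoredefaultpath; addpath}"
echo "$fixed"
```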


> On Sep 11, 2017, at 6:51 PM, Keith Jamison <kjami...@umn.edu> wrote:
> 
> In your FSL FIX installation, you can try making the same change in line #307 
> of call_matlab.sh
> 
> -Keith
> 
> On Mon, Sep 11, 2017 at 6:34 PM, Sang-Young Kim <sykim...@gmail.com 
> <mailto:sykim...@gmail.com>> wrote:
> I am running the script ${HOME}/projects/Pipelines/ICAFIX/hcp_fix. 
> 
> Is there a point to modify the script in order for working FIX?
> 
> Thanks. 
> 
> Sang-Young
> 
> 
>> On Sep 11, 2017, at 6:28 PM, Keith Jamison <kjami...@umn.edu 
>> <mailto:kjami...@umn.edu>> wrote:
>> 
>> Try changing line #381 in ReApplyFix/ReApplyFixPipeline.sh from:
>> 
>> ML_PATHS="addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');"
>> 
>> to 
>> 
>> ML_PATHS="restoredefaultpath; addpath('${FSL_MATLAB_PATH}'); 
>> addpath('${FSL_FIX_CIFTIRW}');"
>> 
>> -Keith
>> 
>> On Mon, Sep 11, 2017 at 5:54 PM, Sang-Young Kim <sykim...@gmail.com 
>> <mailto:sykim...@gmail.com>> wrote:
>> Yes, I’m using CIFTI data. This is the interpreted version. 
>> 
>> Sang-Young
>> 
>> 
>>> On Sep 11, 2017, at 5:59 PM, Glasser, Matthew <glass...@wustl.edu 
>>> <mailto:glass...@wustl.edu>> wrote:
>>> 
>>> Are you using CIFTI data?  Is this the compiled version of matlab or the
>>> interpreted version?
>>> 
>>> Peace,
>>> 
>>> Matt.
>>> 
>>> On 9/11/17, 4:19 PM, "hcp-users-boun...@humanconnectome.org 
>>> <mailto:hcp-users-boun...@humanconnectome.org> on behalf of
>>> Sang-Young Kim" <hcp-users-boun...@humanconnectome.org 
>>> <mailto:hcp-users-boun...@humanconnectome.org> on behalf of
>>> sykim...@gmail.com <mailto:sykim...@gmail.com>> wrote:
>>> 
>>>> Dear HCP experts:
>>>> 
>>>> I¹m struggling with FIX for many days. The problem is that FIX does not
>>>> generate cleaned data.
>>>> So I found error message in .fix.log file as shown below.
>>>> 
>>>> TR =
>>>> 
>>>>   1
>>>> 
>>>> Elapsed time is 2.053048 seconds.
>>>> {?Error using file_array/permute (line 10)
>>>> file_array objects can not be permuted.
>>>> 
>>>> Error in read_gifti_file>gifti_Data (line 191)
>>>>  d = permute(reshape(d,fliplr(s.Dim)),length(s.Dim):-1:1);
>>>> 
>>>> Error in read_gifti_file>gifti_DataArray (line 122)
>>>>  s(1).data = gifti_Data(t,c(i),s(1).attributes);
>>>> 
>>>> Error in read_gifti_file (line 45)
>>>>  this.data{end+1} = gifti_DataArray(t,uid(i),filename);
>>>> 
>>>> Error in gifti (line 74)
>>>>          this = read_gifti_file(varargin{1},giftistruct);
>>>> 
>>>> Error in ciftiopen (line 16)
>>>> cifti = gifti([tmpname '.gii']);
>>>> 
>>>> Error in fix_3_clean (line 48)
>>>> BO=ciftiopen('Atlas.dtseries.nii',WBC);
>>>> }? 
>>>> **
>>>> 
>>>> Actually I saw many posts with the same error. But no clear solution is
>>>> provided. I believe I set the path correctly and use last version of
>>>> GIFTI toolbox. And I have tested "ciftiopen" function directly on the
>>>> matlab. It worked in matlab.
>>>> But I have no idea why above error came up while running FIX.
>>>> 
>>>> Any insights would be greatly appreciated.
>>>> 
>>>> Thanks. 
>>>> 
>>>> Sang-Young Kim, Ph.D.
>>>> 
>>>> Postdoctoral Fellow
>>>> Department of Radiology, University of Pittsburgh
>>>> ___
>>>> HCP-Users mailing list
>>>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>>>> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>
> 
> 




Re: [HCP-Users] FIX error in fix_3_clean.m

2017-09-11 Thread Sang-Young Kim
I am running the script ${HOME}/projects/Pipelines/ICAFIX/hcp_fix. 

Is there a place in that script I should modify to get FIX working?

Thanks. 

Sang-Young


> On Sep 11, 2017, at 6:28 PM, Keith Jamison <kjami...@umn.edu> wrote:
> 
> Try changing line #381 in ReApplyFix/ReApplyFixPipeline.sh from:
> 
> ML_PATHS="addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');"
> 
> to 
> 
> ML_PATHS="restoredefaultpath; addpath('${FSL_MATLAB_PATH}'); 
> addpath('${FSL_FIX_CIFTIRW}');"
> 
> -Keith
> 
> On Mon, Sep 11, 2017 at 5:54 PM, Sang-Young Kim <sykim...@gmail.com 
> <mailto:sykim...@gmail.com>> wrote:
> Yes, I’m using CIFTI data. This is the interpreted version. 
> 
> Sang-Young
> 
> 
>> On Sep 11, 2017, at 5:59 PM, Glasser, Matthew <glass...@wustl.edu 
>> <mailto:glass...@wustl.edu>> wrote:
>> 
>> Are you using CIFTI data?  Is this the compiled version of matlab or the
>> interpreted version?
>> 
>> Peace,
>> 
>> Matt.
>> 
>> On 9/11/17, 4:19 PM, "hcp-users-boun...@humanconnectome.org 
>> <mailto:hcp-users-boun...@humanconnectome.org> on behalf of
>> Sang-Young Kim" <hcp-users-boun...@humanconnectome.org 
>> <mailto:hcp-users-boun...@humanconnectome.org> on behalf of
>> sykim...@gmail.com <mailto:sykim...@gmail.com>> wrote:
>> 
>>> Dear HCP experts:
>>> 
>>> I¹m struggling with FIX for many days. The problem is that FIX does not
>>> generate cleaned data.
>>> So I found error message in .fix.log file as shown below.
>>> 
>>> TR =
>>> 
>>>   1
>>> 
>>> Elapsed time is 2.053048 seconds.
>>> {?Error using file_array/permute (line 10)
>>> file_array objects can not be permuted.
>>> 
>>> Error in read_gifti_file>gifti_Data (line 191)
>>>  d = permute(reshape(d,fliplr(s.Dim)),length(s.Dim):-1:1);
>>> 
>>> Error in read_gifti_file>gifti_DataArray (line 122)
>>>  s(1).data = gifti_Data(t,c(i),s(1).attributes);
>>> 
>>> Error in read_gifti_file (line 45)
>>>  this.data{end+1} = gifti_DataArray(t,uid(i),filename);
>>> 
>>> Error in gifti (line 74)
>>>  this = read_gifti_file(varargin{1},giftistruct);
>>> 
>>> Error in ciftiopen (line 16)
>>> cifti = gifti([tmpname '.gii']);
>>> 
>>> Error in fix_3_clean (line 48)
>>> BO=ciftiopen('Atlas.dtseries.nii',WBC);
>>> }? 
>>> **
>>> 
>>> Actually I saw many posts with the same error. But no clear solution is
>>> provided. I believe I set the path correctly and use last version of
>>> GIFTI toolbox. And I have tested "ciftiopen" function directly on the
>>> matlab. It worked in matlab.
>>> But I have no idea why above error came up while running FIX.
>>> 
>>> Any insights would be greatly appreciated.
>>> 
>>> Thanks. 
>>> 
>>> Sang-Young Kim, Ph.D.
>>> 
>>> Postdoctoral Fellow
>>> Department of Radiology, University of Pittsburgh
>>> ___
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>>> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>




Re: [HCP-Users] FIX error in fix_3_clean.m

2017-09-11 Thread Sang-Young Kim
Yes, I’m using CIFTI data. This is the interpreted version. 

Sang-Young


> On Sep 11, 2017, at 5:59 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
> 
> Are you using CIFTI data?  Is this the compiled version of matlab or the
> interpreted version?
> 
> Peace,
> 
> Matt.
> 
> On 9/11/17, 4:19 PM, "hcp-users-boun...@humanconnectome.org 
> <mailto:hcp-users-boun...@humanconnectome.org> on behalf of
> Sang-Young Kim" <hcp-users-boun...@humanconnectome.org 
> <mailto:hcp-users-boun...@humanconnectome.org> on behalf of
> sykim...@gmail.com <mailto:sykim...@gmail.com>> wrote:
> 
>> Dear HCP experts:
>> 
>> I¹m struggling with FIX for many days. The problem is that FIX does not
>> generate cleaned data.
>> So I found error message in .fix.log file as shown below.
>> 
>> TR =
>> 
>>   1
>> 
>> Elapsed time is 2.053048 seconds.
>> {?Error using file_array/permute (line 10)
>> file_array objects can not be permuted.
>> 
>> Error in read_gifti_file>gifti_Data (line 191)
>>  d = permute(reshape(d,fliplr(s.Dim)),length(s.Dim):-1:1);
>> 
>> Error in read_gifti_file>gifti_DataArray (line 122)
>>  s(1).data = gifti_Data(t,c(i),s(1).attributes);
>> 
>> Error in read_gifti_file (line 45)
>>  this.data{end+1} = gifti_DataArray(t,uid(i),filename);
>> 
>> Error in gifti (line 74)
>>  this = read_gifti_file(varargin{1},giftistruct);
>> 
>> Error in ciftiopen (line 16)
>> cifti = gifti([tmpname '.gii']);
>> 
>> Error in fix_3_clean (line 48)
>> BO=ciftiopen('Atlas.dtseries.nii',WBC);
>> }? 
>> **
>> 
>> Actually I saw many posts with the same error. But no clear solution is
>> provided. I believe I set the path correctly and use last version of
>> GIFTI toolbox. And I have tested "ciftiopen" function directly on the
>> matlab. It worked in matlab.
>> But I have no idea why above error came up while running FIX.
>> 
>> Any insights would be greatly appreciated.
>> 
>> Thanks. 
>> 
>> Sang-Young Kim, Ph.D.
>> 
>> Postdoctoral Fellow
>> Department of Radiology, University of Pittsburgh
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org <mailto:HCP-Users@humanconnectome.org>
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users 
>> <http://lists.humanconnectome.org/mailman/listinfo/hcp-users>



[HCP-Users] FIX error in fix_3_clean.m

2017-09-11 Thread Sang-Young Kim
Dear HCP experts:

I’ve been struggling with FIX for many days. The problem is that FIX does not 
generate the cleaned data. 
I found the error message below in the .fix.log file: 

TR =

1

Elapsed time is 2.053048 seconds.
{Error using file_array/permute (line 10)
file_array objects can not be permuted.

Error in read_gifti_file>gifti_Data (line 191)
   d = permute(reshape(d,fliplr(s.Dim)),length(s.Dim):-1:1);

Error in read_gifti_file>gifti_DataArray (line 122)
   s(1).data = gifti_Data(t,c(i),s(1).attributes);

Error in read_gifti_file (line 45)
   this.data{end+1} = gifti_DataArray(t,uid(i),filename);

Error in gifti (line 74)
   this = read_gifti_file(varargin{1},giftistruct);

Error in ciftiopen (line 16)
cifti = gifti([tmpname '.gii']);

Error in fix_3_clean (line 48)
 BO=ciftiopen('Atlas.dtseries.nii',WBC);
} 
**

Actually, I have seen many posts with the same error, but no clear solution was 
provided. I believe I have set the paths correctly and am using the latest 
version of the GIFTI toolbox. I have also tested the "ciftiopen" function 
directly in MATLAB, where it worked. 
But I have no idea why the above error comes up while running FIX. 

Any insights would be greatly appreciated. 

Thanks. 

Sang-Young Kim, Ph.D.

Postdoctoral Fellow
Department of Radiology, University of Pittsburgh


Re: [HCP-Users] dMRI: Error while running "eddy_postproc.sh"

2017-08-25 Thread Sang-Young Kim
Oh, I see. Yes, the --combine-data-flag in DiffPreprocPipeline.sh was 1 since I 
did not touch it. 

I will try to use "2". 

Thank you so much. 

Sang-Young
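A sketch of the re-run with the flag changed; only --combine-data-flag itself comes from the reply above, while the other option names and file names are illustrative assumptions (check the usage text of DiffPreprocPipeline.sh for the real interface). The command string is printed, not executed:

```shell
cmd="DiffPreprocPipeline.sh"
cmd="$cmd --posData=Diffusion_AP.nii.gz"      # assumed option name and file
cmd="$cmd --negData=Diffusion_PA_b0.nii.gz"   # assumed option name and file
cmd="$cmd --combine-data-flag=2"              # single-polarity vector table
echo "$cmd"
```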


> On Aug 25, 2017, at 12:30 PM, Harms, Michael <mha...@wustl.edu> wrote:
> 
> 
> See the --combine-data-flag in DiffPreprocPipeline.sh
> The default value of 1 is intended for acquisitions in which you have 
> acquired the full vector table with both polarities, which isn’t how you 
> acquired your data.
> In your case, you’ll need to use a value of ‘2’ for that flag.
> 
> Cheers,
> -MH
> 
> --
> Michael Harms, Ph.D.
> 
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.Tel: 314-747-6173
> St. Louis, MO  63110Email: mha...@wustl.edu
> 
> On 8/25/17, 11:23 AM, "hcp-users-boun...@humanconnectome.org on behalf of 
> Sang-Young Kim" <hcp-users-boun...@humanconnectome.org on behalf of 
> sykim...@gmail.com> wrote:
> 
> Dear HCP users:
> 
> I have another problem for running Diffusion Preprocessing pipeline. After 
> eddy current correction, the pipeline proceeded to run eddy_postproc.sh. But 
> I got error message as follow:
> 
> **
> Traceback (most recent call last):
>  File "/Users/sang-young/projects/Pipelines/global/scripts/average_bvecs.py", 
> line 296, in 
>overlap2)
>  File "/Users/sang-young/projects/Pipelines/global/scripts/average_bvecs.py", 
> line 36, in main
>bvals1 = loadFile(bvals1file, 1, [-1])
>  File "/Users/sang-young/projects/Pipelines/global/scripts/average_bvecs.py", 
> line 127, in loadFile
>raise ValueError('Wrong number of dimensions: {0}'.format(filename))
> ValueError: Wrong number of dimensions: 
> /Volumes/LaCieMac3TB/HIV_HCP_pipeline/Data/30067/Diffusion/eddy/Pos.bval
> **
> 
> Above message came up while running the command (see below) in 
> "eddy_postproc.sh".
> *
> ${globalscriptsdir}/average_bvecs.py ${eddydir}/Pos.bval 
> ${eddydir}/Pos_rotated.bvec ${eddydir}/Neg.bval ${eddydir}/Neg_rotated.bvec 
> ${datadir}/avg_data ${eddydir}/Pos_SeriesVolNum.txt 
> ${eddydir}/Neg_SeriesVolNum.txt
> *
> 
> Our dMRI data has been acquired with diffusion direction of 113 (AP 
> direction) and b0 image were acquired with opposite direction (e.g., PA) for 
> topup. So I have 1x113 bval and 3x113 bvec for DWI AP data, and 1x1 bval 
> (e.g., 0 value) and 3x1 bvec (also 0 values) for DWI PA data.
> 
> Something is wrong in bval and/or bvec file?
> 
> It will be greatly appreciated if you can help me.
> 
> Thanks in advance.
> 
> Sang-Young Kim, Ph.D.
> 
> Postdoctoral Research Fellow
> Department of Radiology, University of Pittsburgh
> 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 
> 
> 
> 




[HCP-Users] dMRI: Error while running "eddy_postproc.sh"

2017-08-25 Thread Sang-Young Kim
Dear HCP users:

I have another problem when running the Diffusion Preprocessing pipeline. After 
eddy current correction, the pipeline proceeded to eddy_postproc.sh, but I got 
the error message below:

**
Traceback (most recent call last):
  File "/Users/sang-young/projects/Pipelines/global/scripts/average_bvecs.py", 
line 296, in 
overlap2)
  File "/Users/sang-young/projects/Pipelines/global/scripts/average_bvecs.py", 
line 36, in main
bvals1 = loadFile(bvals1file, 1, [-1])
  File "/Users/sang-young/projects/Pipelines/global/scripts/average_bvecs.py", 
line 127, in loadFile
raise ValueError('Wrong number of dimensions: {0}'.format(filename))
ValueError: Wrong number of dimensions: 
/Volumes/LaCieMac3TB/HIV_HCP_pipeline/Data/30067/Diffusion/eddy/Pos.bval
**

The above message came up while running the following command in 
"eddy_postproc.sh": 
*
${globalscriptsdir}/average_bvecs.py ${eddydir}/Pos.bval 
${eddydir}/Pos_rotated.bvec ${eddydir}/Neg.bval ${eddydir}/Neg_rotated.bvec 
${datadir}/avg_data ${eddydir}/Pos_SeriesVolNum.txt 
${eddydir}/Neg_SeriesVolNum.txt
*

Our dMRI data were acquired with 113 diffusion directions (AP phase encoding), 
and a b0 image was acquired with the opposite direction (PA) for topup. So I 
have a 1x113 bval and a 3x113 bvec file for the AP DWI data, and a 1x1 bval 
(a single 0) and a 3x1 bvec (also zeros) for the PA DWI data.  

Is something wrong with the bval and/or bvec files? 
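A quick manual shape check is worth doing before re-running (the script's exact validation is not reproduced here): a bval file should be 1 row by N columns and the matching bvec 3 rows by N columns, with the same N. Demo files are generated below; point the same awk at the real Pos/Neg files in the eddy directory:

```shell
# Demo pair shaped like a 4-direction acquisition (illustrative values)
printf '0 1000 1000 2000\n' > demo.bval
printf '0 1 0 0\n0 0 1 0\n0 0 0 1\n' > demo.bvec

# Print "rows columns" for each file
awk 'END { print NR, NF }' demo.bval   # expect: 1 4
awk 'END { print NR, NF }' demo.bvec   # expect: 3 4
```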

It will be greatly appreciated if you can help me. 

Thanks in advance. 

Sang-Young Kim, Ph.D.

Postdoctoral Research Fellow
Department of Radiology, University of Pittsburgh



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] FIX: setting up environment variables

2017-05-24 Thread Sang-Young Kim
Dear HCP experts:

I’m trying to install FSL FIX on my Mac in order to apply ICA+FIX to rfMRI data. I have installed the R packages and the MATLAB Runtime Component as indicated in the README text file of the FSL FIX folder, and I have edited the "settings.sh" script to reflect my system setup as follows:

FSL_FIX_MATLAB_ROOT=/Applications/MATLAB_R2015b.app
FSL_FIX_MATLAB=${FSL_FIX_MATLAB_ROOT}/bin/matlab
FSL_FIX_MCC=${FSL_FIX_MATLAB_ROOT}/bin/mcc
FSL_FIX_MCRROOT=/Applications/MATLAB/MATLAB_Runtime/v90
FSL_FIX_CIFTIRW='/Users/sang-young/tools/CIFTI-matlab';
FSL_FIX_WBC='/Applications/workbench/bin_macosx64/wb_command';

Lastly, in the compiled folder of FIX, I ran the following command to set up the environment variables:

./run_fix_3_clean.sh /Applications/MATLAB/MATLAB_Runtime/v90

Then the following error came up:

DYLD_LIBRARY_PATH is .:/Applications/MATLAB/MATLAB_Runtime/v90/runtime/maci64:/Applications/MATLAB/MATLAB_Runtime/v90/bin/maci64:/Applications/MATLAB/MATLAB_Runtime/v90/sys/os/maci64
Not enough input arguments.
Error in fix_3_clean (line 19)
**

I have no idea how to fix this problem. Could anybody please help me with that?

Without fixing this error, I went ahead and tested the ICA+FIX script (e.g., IcaFixProcessingBatch.sh) to see what would happen. As expected, I got the following error message: Unable to find MATLAB Compiler Runtime (please also see the attached files).

Thanks in advance.

Sang-Young
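"Unable to find MATLAB Compiler Runtime" usually points at the MCR path, so a quick sanity check of the settings.sh values is worth doing before debugging further. A sketch using the illustrative macOS paths from the message above; on a machine without them, each line reports MISSING:

```shell
# Values copied from the settings.sh excerpt above (adjust to your system)
FSL_FIX_MATLAB_ROOT="/Applications/MATLAB_R2015b.app"
FSL_FIX_MCRROOT="/Applications/MATLAB/MATLAB_Runtime/v90"
FSL_FIX_WBC="/Applications/workbench/bin_macosx64/wb_command"

# Report whether each configured path actually exists
for p in "$FSL_FIX_MATLAB_ROOT" "$FSL_FIX_MCRROOT" "$FSL_FIX_WBC"; do
  if [ -e "$p" ]; then echo "ok: $p"; else echo "MISSING: $p"; fi
done
```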


hcp_fix.e18310
Description: Binary data


hcp_fix.o18310
Description: Binary data
  

Re: [HCP-Users] [Mac OSX] ERROR from cp --preserve=timestamps when running FreeSurferHiresWhite

2017-05-17 Thread Sang-Young Kim
Dear Tim:

Thank you so much for your help. I will use cp -p instead of cp --preserve=timestamps. 
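As a quick sanity check, cp -p is specified by POSIX and preserves the modification time on both Linux and macOS. An illustrative one-file test (the stat invocation differs between GNU and BSD/macOS, hence the fallback):

```shell
#!/bin/sh
# Verify that cp -p preserves the source file's modification time.
src=$(mktemp)
dst=${src}.copy
touch -t 202301011200 "$src"   # give the source a known mtime
cp -p "$src" "$dst"
# GNU stat uses -c %Y; BSD/macOS stat uses -f %m.
mt_src=$(stat -c %Y "$src" 2>/dev/null || stat -f %m "$src")
mt_dst=$(stat -c %Y "$dst" 2>/dev/null || stat -f %m "$dst")
[ "$mt_src" = "$mt_dst" ] && echo "timestamps preserved"
rm -f "$src" "$dst"
```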

Sang-Young


> On May 17, 2017, at 1:13 PM, Timothy B. Brown <tbbr...@wustl.edu> wrote:
> 
> Dear Sang-Young,
> 
> I would suggest that you replace the cp --preserve=timestamps commands in the 
> scripts that are causing the problems with cp -p commands.
> 
> Until March 2016, those scripts used cp -p with the specific intent of 
> preserving timestamps on the files that were copied. In the environment we 
> currently use to run pipelines here at Washington University in St. Louis, 
> the cp -p command was failing: cp -p is equivalent to cp 
> --preserve=mode,ownership,timestamps, and in our environment it was not 
> possible to preserve file ownership when making those copies.
> Since the actual intent of the command was only to preserve the timestamps 
> and preserving the other items (mode and ownership) was not particularly 
> important, the commands were changed to cp --preserve=timestamps (asking only 
> to preserve what is necessary). 
> We have since learned that for at least some versions of Mac OSX, the 
> --preserve= option is not supported for the cp command. However, the -p 
> option seems to still be supported in Mac OSX. 
> The situation now is that changing the command back to cp -p will not work 
> for us, and leaving it as cp --preserve=timestamps will not work for people 
> using some versions of Mac OSX.
> 
> It is my understanding that installing GNU Coreutils on Mac OSX makes the 
> --preserve= option to the cp command available there. But I have no 
> experience with that, so I'm not in a position to recommend it.
> 
> I'm sorry for your frustration in dealing with this issue. Thank you for 
> asking this on the list. Hopefully, this reply will prevent some others from 
> experiencing the same frustration.
>  
>   Tim
> 
> On 05/17/2017 09:31 AM, Sang-Young Kim wrote:
>> Dear HCP users:
>> 
>> In the FreeSurferPipeline scripts (e.g., FreeSurferHiresWhite), it looks 
>> like the cp --preserve=timestamps option does not work on Mac OSX. 
>> I have spent a lot of time trying to fix this problem, and it has caused 
>> a lot of frustration. 
>> 
>> My question is whether I can use rsync -t to copy lh.white, lh.curv, etc. 
>> as an alternative. 
>> I’m not sure whether rsync -t also preserves the timestamps on the copied 
>> files. 
>> Is there any other way to fix this problem?
>> 
>> I would greatly appreciate any solution. 
>> 
>> Thanks in advance. 
>> 
>> Sang-Young
>> 
>> 
>> 
> 
> -- 
> Timothy B. Brown
> Business & Technology Application Analyst III
> Pipeline Developer (Human Connectome Project)
> tbbrown(at)wustl.edu
> The material in this message is private and may contain Protected Healthcare 
> Information (PHI). If you are not the intended recipient, be advised that any 
> unauthorized use, disclosure, copying or the taking of any action in reliance 
> on the contents of this information is strictly prohibited. If you have 
> received this email in error, please immediately notify the sender via 
> telephone or return mail.




[HCP-Users] [Mac OSX] ERROR from cp --preserve=timestamps when running FreeSurferHiresWhite

2017-05-17 Thread Sang-Young Kim
Dear HCP users:

In the FreeSurferPipeline scripts (e.g., FreeSurferHiresWhite), it looks like 
the cp --preserve=timestamps option does not work on Mac OSX. 
I have spent a lot of time trying to fix this problem, and it has caused a lot 
of frustration. 

My question is whether I can use rsync -t to copy lh.white, lh.curv, etc. as 
an alternative. 
I’m not sure whether rsync -t also preserves the timestamps on the copied 
files. 
Is there any other way to fix this problem?

I would greatly appreciate any solution. 
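[Editor's note: rsync -t (i.e. --times) does preserve modification times, so it would work as an alternative. Another option is to probe which timestamp-preserving cp flag the local platform supports and patch the script accordingly; a hedged sketch, relying only on --preserve= being a GNU coreutils extension and -p being the POSIX fallback that macOS supports:]

```shell
#!/bin/sh
# Pick whichever timestamp-preserving cp flag works on this platform.
src=$(mktemp)
dst=${src}.copy
if cp --preserve=timestamps "$src" "$dst" 2>/dev/null; then
    CP_OPT='--preserve=timestamps'   # GNU coreutils cp (Linux)
else
    CP_OPT='-p'                      # POSIX cp, works on macOS
fi
echo "use: cp $CP_OPT"
rm -f "$src" "$dst"
```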

Thanks in advance. 

Sang-Young





[HCP-Users] Error for gradient nonlinearity correction on Siemens Prisma data

2017-05-12 Thread Sang-Young Kim
Dear all:

I’m running the HCP Pipelines scripts on our Siemens Prisma data. I would like 
to see whether gradient nonlinearity on the Siemens Prisma scanner causes 
visible errors in the segmentation results. I got the gradient coefficient 
file from the scanner. 

If I set the "GradientDistortionCoeffs" variable to our gradient coefficient 
file in PreFreeSurferPipeline, I get the following error: 

gradunwarp-INFO: Parsing 
/Users/sang-young/projects/Pipelines/global/config/coeff_AS82_Prisma.grad for 
harmonics coeffs

Traceback (most recent call last):
  File "/sw/bin/gradient_unwarp.py", line 123, in 
    grad_unwarp.run()
  File "/sw/bin/gradient_unwarp.py", line 91, in run
    self.args.gradfile)
  File "/sw/lib/python2.7/site-packages/gradunwarp/core/coeffs.py", line 30, in get_coefficients
    return get_siemens_grad(cfile)
  File "/sw/lib/python2.7/site-packages/gradunwarp/core/coeffs.py", line 194, in get_siemens_grad
    R0_m, max_ind = grad_file_parse(gfile, txt_var_map)
  File "/sw/lib/python2.7/site-packages/gradunwarp/core/coeffs.py", line 159, in grad_file_parse
    txt_var_map['Alpha_z'][x,y] = float(line.split()[-2])
IndexError: index 15 is out of bounds for axis 0 with size 14

***
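[Editor's note: the IndexError indicates the parser hit a coefficient of order 15 while its array was preallocated for orders up to 14, i.e. the Prisma .grad file contains higher-order harmonics than this gradunwarp build expects. To check, you can scan the file for the largest order n among the A(n,m)/B(n,m) entries. The sketch below assumes that label layout (the format gradunwarp's grad_file_parse reads) and uses a fabricated two-line sample file, not real coefficients:]

```shell
#!/bin/sh
# Print the largest harmonic order n among A(n,m)/B(n,m) entries in a
# Siemens .grad file.  The label format is an assumption based on what
# gradunwarp's parser reads; coeff_sample.grad is fabricated data.
grad_max_order() {
    awk 'match($0, /[AB]\( *[0-9]+, *[0-9]+\)/) {
             s = substr($0, RSTART + 2)   # text after "A(" / "B("
             n = s + 0                    # leading integer is the order n
             if (n > max) max = n
         }
         END { print max + 0 }' "$1"
}

cat > coeff_sample.grad <<'EOF'
  1 A( 3, 0)  0.0123 z
  2 B(15, 2) -0.0042 y
EOF
max_n=$(grad_max_order coeff_sample.grad)
echo "max order: $max_n"   # 15 here; >14 would overflow the parser's array
rm -f coeff_sample.grad
```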

Could someone please help me fix this error?

Thanks in advance. 

Sang-Young  


