Re: [HCP-Users] A minor question regarding HCP 7T movie data

2019-06-03 Thread Keith Jamison
Hi Reza,

Your interpretation of the timing is correct. The validation segment in
each movie scan was shifted by 40-200ms for the "v2" version. This was done
in order to make that final 83 second clip begin precisely at the start of
the next TR for all 4 movie sessions.

Given HRF variability, ignoring this change will probably not adversely
impact most analyses.
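For reference, the frame adjustments work out to the following offsets (a minimal sketch, assuming ~25 fps as in the question quoted below; the dictionary keys are shorthand for the movie file names):

```python
# Convert the v2 frame adjustments to millisecond offsets, assuming ~25 fps.
FPS = 25
frame_changes = {            # frames of rest removed (-) or added (+)
    "MOVIE1_CC1": 0,
    "MOVIE2_HO1": -1,
    "MOVIE3_CC2": -5,
    "MOVIE4_HO2": +4,
}
offsets_ms = {name: abs(n) * 1000 / FPS for name, n in frame_changes.items()}
# e.g. MOVIE3: 5 frames * 40 ms/frame = 200 ms, well under a 1 s TR
```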

-Keith


On Fri, May 31, 2019 at 9:05 PM Reza Rajimehr  wrote:

> Hi,
>
> The HCP S1200 Reference Manual says that there are two versions of 4 movie
> files. Some subjects have been scanned with one version, and the remaining
> subjects have been scanned with another version. Version 2 includes these
> changes:
>
> 7T_MOVIE1_CC1_v2 remains unchanged.
>
> 7T_MOVIE2_HO1_v2 removed 1 frame of rest before the validation clip.
>
> 7T_MOVIE3_CC2_v2 removed 5 frames of rest before the validation clip.
>
> 7T_MOVIE4_HO2_v2 added 4 frames of rest before the validation clip.
>
> Here *frame* is a frame of the movie, right?
>
> Assuming that the movies are ~25 frames per second, the deviations are
> between 0 and ~200 ms in each movie file. Considering the TR of one second
> and the slow hemodynamic BOLD responses, these deviations are negligible,
> and we can possibly ignore them when concatenating time-series data across
> all subjects. Would you agree?
>
> Thanks,
> Reza
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] some scans have _MSMAll_hp2000_clean.dtseries.nii but not MSMSulc cifti or _hp2000_clean.nii.gz ?

2018-10-23 Thread Keith Jamison
We recently noticed a couple of scans on s3://hcp-openaccess that are
missing the cleaned MNI volumetric nifti and clean MSMSulc cifti, even
though they have the uncleaned MNI nifti AND the cleaned MSMAll cifti.  Two
examples:

s3://hcp-openaccess/HCP_1200/204218/MNINonLinear/Results/rfMRI_REST1_LR
s3://hcp-openaccess/HCP_1200/705341/MNINonLinear/Results/rfMRI_REST1_LR

There may be others, but these are the two we ran into. The missing files
appear to be present in connectomedb, but somehow didn't make it to s3?

-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] can't find age, sex, etc in megatrawl?

2018-10-18 Thread Keith Jamison
Was there a reason for that omission?  They seem like obvious dimensions of
interest (age less so, given the narrow range).

Keith

On Oct 18, 2018 10:49 AM, "Stephen Smith"  wrote:

Hi
No I don’t think we explicitly calculated those things.
Cheers.


Stephen M. Smith,  Professor of Biomedical Engineering
Head of Analysis,   Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington,
Oxford, OX3 9DU, UK
+44 (0) 1865 610470
st...@fmrib.ox.ac.uk
http://www.fmrib.ox.ac.uk/~steve
--

On 18 Oct 2018, at 15:04, Keith Jamison  wrote:

We just noticed that certain subject measures such as age and sex are not
included in the megatrawl results. I see in the release documentation that
these are among the covariates removed before regressing the other
quantities. Are there also results/stats somewhere from regressing netmats
against these quantities themselves?

Thanks!
-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] can't find age, sex, etc in megatrawl?

2018-10-18 Thread Keith Jamison
We just noticed that certain subject measures such as age and sex are not
included in the megatrawl results. I see in the release documentation that
these are among the covariates removed before regressing the other
quantities. Are there also results/stats somewhere from regressing netmats
against these quantities themselves?

Thanks!
-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] error when running -cifti-parcellate in RHEL but not macOS?

2018-10-01 Thread Keith Jamison
Oops, it turns out I've been trying to run this on a tightly controlled
login node that kills any user process once it eats up too much time or
resources. It works when I run it on a compute node (same OS, same
binaries, same input files). Nothing wrong with this command!

Sorry for the confusion!

-Keith

On Mon, Oct 1, 2018 at 11:27 PM Keith Jamison  wrote:

> See attached output file.
>
> -Keith
>
> On Mon, Oct 1, 2018 at 8:18 PM Timothy Coalson  wrote:
>
>> I'd probably need to see the dlabel file being used.  -cifti-parcellate
>> doesn't seem to have changed much recently.  Can you do -file-information
>> on the dlabel file and paste the result?
>>
>> Tim
>>
>>
>> On Mon, Oct 1, 2018 at 6:59 PM, Keith Jamison  wrote:
>>
>>> Whenever I try to run wb_command -cifti-parcellate on linux, it hangs
>>> for a few seconds and is then "killed":
>>>
>>> > wb_command -cifti-parcellate
>>> rfMRI_REST1_3.0mm_PA_Atlas_hp2000_clean.dtseries.nii
>>> MBtest.aparc+aseg.32k_fs_LR.dlabel.nii COLUMN
>>> rfMRI_REST1_3.0mm_PA_Atlas_hp2000_clean_aparc+aseg.ptseries.nii
>>> /home/kjamison/workbench_v1.3.2/bin_rh_linux64/wb_command: line 20:
>>> 22313 Killed  "$directory"/../exe_rh_linux64/wb_command "$@"
>>>
>>> workbench 1.3.2
>>> RHEL 6.10
>>> AMD64
>>>
>>> This same command works in macOS (10.13.3) using the same input files.
>>>
>>> Any idea what's going on?
>>>
>>> -Keith
>>>
>>> ___
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>
>>
>>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] error when running -cifti-parcellate in RHEL but not macOS?

2018-10-01 Thread Keith Jamison
See attached output file.

-Keith

On Mon, Oct 1, 2018 at 8:18 PM Timothy Coalson  wrote:

> I'd probably need to see the dlabel file being used.  -cifti-parcellate
> doesn't seem to have changed much recently.  Can you do -file-information
> on the dlabel file and paste the result?
>
> Tim
>
>
> On Mon, Oct 1, 2018 at 6:59 PM, Keith Jamison  wrote:
>
>> Whenever I try to run wb_command -cifti-parcellate on linux, it hangs for
>> a few seconds and is then "killed":
>>
>> > wb_command -cifti-parcellate
>> rfMRI_REST1_3.0mm_PA_Atlas_hp2000_clean.dtseries.nii
>> MBtest.aparc+aseg.32k_fs_LR.dlabel.nii COLUMN
>> rfMRI_REST1_3.0mm_PA_Atlas_hp2000_clean_aparc+aseg.ptseries.nii
>> /home/kjamison/workbench_v1.3.2/bin_rh_linux64/wb_command: line 20: 22313
>> Killed  "$directory"/../exe_rh_linux64/wb_command "$@"
>>
>> workbench 1.3.2
>> RHEL 6.10
>> AMD64
>>
>> This same command works in macOS (10.13.3) using the same input files.
>>
>> Any idea what's going on?
>>
>> -Keith
>>
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> wb_command -file-information MBtest4.aparc.32k_fs_LR.dlabel.nii
Name:MBtest4.aparc.32k_fs_LR.dlabel.nii
Type:CIFTI - Dense Label
Structure:   CortexLeft CortexRight 
Data Size:   237.65 Kilobytes
Maps to Surface: true
Maps to Volume:  false
Maps with LabelTable:true
Maps with Palette:   false
Number of Maps:  1
Number of Rows:  59412
Number of Columns:   1
Volume Dim[0]:   0
Volume Dim[1]:   0
Volume Dim[2]:   0
Palette Type:None
CIFTI Dim[0]:1
CIFTI Dim[1]:59412
ALONG_ROW map type:  LABELS
ALONG_COLUMN map type:   BRAIN_MODELS
Has Volume Data: false
CortexLeft:  29696 out of 32492 vertices
CortexRight: 29716 out of 32492 vertices

Map   Map Name  
  1   MBtest4_aparc   

Label table for ALL maps
   KEY   NAME   RED   GREEN   BLUE   ALPHA
 0   ???  0.000   0.000   0.000   0.000   
 1   L_bankssts   0.098   0.392   0.157   1.000   
 2   L_caudalanteriorcingulate0.490   0.392   0.627   1.000   
 3   L_caudalmiddlefrontal0.392   0.098   0.000   1.000   
 4   L_corpuscallosum 0.471   0.275   0.196   1.000   
 5   L_cuneus 0.863   0.078   0.392   1.000   
 6   L_entorhinal 0.863   0.078   0.039   1.000   
 7   L_fusiform   0.706   0.863   0.549   1.000   
 8   L_inferiorparietal   0.863   0.235   0.863   1.000   
 9   L_inferiortemporal   0.706   0.157   0.471   1.000   
10   L_isthmuscingulate   0.549   0.078   0.549   1.000   
11   L_lateraloccipital   0.078   0.118   0.549   1.000   
12   L_lateralorbitofrontal   0.137   0.294   0.196   1.000   
13   L_lingual0.882   0.549   0.549   1.000   
14   L_medialorbitofrontal0.784   0.137   0.294   1.000   
15   L_middletemporal 0.627   0.392   0.196   1.000   
16   L_parahippocampal0.078   0.863   0.235   1.000   
17   L_paracentral0.235   0.863   0.235   1.000   
18   L_parsopercularis0.863   0.706   0.549   1.000   
19   L_parsorbitalis  0.078   0.392   0.196   1.000   
20   L_parstriangularis   0.863   0.235   0.078   1.000   
21   L_pericalcarine  0.471   0.392   0.235   1.000   
22   L_postcentral0.863   0.078   0.078   1.000   
23   L_posteriorcingulate 0.863   0.706   0.863   1.000   
24   L_precentral 0.235   0.078   0.863   1.000   
25   L_precuneus  0.627   0.549   0.706   1.000   
26   L_rostralanteriorcingulate   0.314   0.078   0.549   1.000   
27   L_rostralmiddlefrontal   0.294   0.196   0.490   1.000   
28   L_superiorfrontal0.078   0.863   0.627   1.000   
29   L_superiorparietal   0.078   0.706   0.549   1.000   
30   L_superiortemporal   0.549   0.863   0.863   1.000   
31   L_supramarginal  0.314   0.627   0.078   1.000   
32   L_frontalpole 

[HCP-Users] error when running -cifti-parcellate in RHEL but not macOS?

2018-10-01 Thread Keith Jamison
Whenever I try to run wb_command -cifti-parcellate on linux, it hangs for a
few seconds and is then "killed":

> wb_command -cifti-parcellate
rfMRI_REST1_3.0mm_PA_Atlas_hp2000_clean.dtseries.nii
MBtest.aparc+aseg.32k_fs_LR.dlabel.nii COLUMN
rfMRI_REST1_3.0mm_PA_Atlas_hp2000_clean_aparc+aseg.ptseries.nii
/home/kjamison/workbench_v1.3.2/bin_rh_linux64/wb_command: line 20: 22313
Killed  "$directory"/../exe_rh_linux64/wb_command "$@"

workbench 1.3.2
RHEL 6.10
AMD64

This same command works in macOS (10.13.3) using the same input files.

Any idea what's going on?

-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] TOPUP Settings

2018-08-30 Thread Keith Jamison
Note: The --UnwarpDir in PreFreeSurfer is the readout direction of the T1w
and T2w images themselves, which in this case is completely unrelated to
the plane in which the SE fieldmaps read out (which is set in
--SEUnwarpDir). fMRI and DWI are 2D, so their unwarp directions are more
obvious. For the 3D T1w and T2w sequences, the readout direction in which
most distortion occurs is trickier to figure out, which is why Matt
suggests trial and error. For HCP acquisitions it's always been "z".

-Keith

On Wed, Aug 29, 2018 at 7:02 PM, Glasser, Matthew 
wrote:

> Hi Mike,
>
> As far as I know the only sure way is with trial and error, but perhaps
> the latest dcm2niix has eliminated that.
>
> Matt.
>
> From:  on behalf of "Stevens,
> Michael" 
> Date: Wednesday, August 29, 2018 at 5:54 PM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] TOPUP Settings
>
> Also, I’d like to confirm some settings for the HCP scripts, as I’m
> helping some colleagues set up a new protocol that we plan to use HCP
> pipeline processing on.
>
>
>
> For spin echo-based unwarping, if HCP’s standard SE field map sequences
> are used and are acquired A>>P first, P>>A second, should the proper
> setting for UnwarpDir in the PreFreeSurfer script unwarping of the T1w
> image be ‘y’ or ‘y-‘?  And further… Shouldn’t the same UnwarpDir setting be
> used for fMRI unwarping?
>
>
>
> I set my own HCP protocols up a couple of years back and I’m straining my
> memory to recall exactly how the HCP scripts parse all this and create the
> commands that call topup.  I’m hoping to avoid some painstaking
> trial-and-error by asking.
>
>
>
> Thanks,
>
> Mike
>
>
>
> *This e-mail message, including any attachments, is for the sole use of
> the intended recipient(s) and may contain confidential and privileged
> information. Any unauthorized review, use, disclosure, or distribution is
> prohibited. If you are not the intended recipient, or an employee or agent
> responsible for delivering the message to the intended recipient, please
> contact the sender by reply e-mail and destroy all copies of the original
> message, including any attachments. *
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
> --
>
> The materials in this message are private and may contain Protected
> Healthcare Information or other information of a sensitive nature. If you
> are not the intended recipient, be advised that any unauthorized use,
> disclosure, copying or the taking of any action in reliance on the contents
> of this information is strictly prohibited. If you have received this email
> in error, please immediately notify the sender via telephone or return mail.
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Movie stimulus question

2018-08-15 Thread Keith Jamison
The movie lengths are:

MOVIE1: 921 seconds
MOVIE2: 918 seconds
MOVIE3: 915 seconds
MOVIE4: 901 seconds

These are the lengths of the entire movie scan stimulus, including "REST".
Each subject's nifti/cifti data, as well as the other resources associated
with it (WordNet, Motion Energy), match these lengths as well.
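The arithmetic in the question can be checked with a quick sketch (assuming TR = 1 s for these scans; 901 is the cifti series length reported below):

```python
# Volumes x TR should equal the stimulus length; 901 volumes matches
# MOVIE4 (901 s), not MOVIE1 (921 s).
movie_lengths_s = {"MOVIE1": 921, "MOVIE2": 918, "MOVIE3": 915, "MOVIE4": 901}
TR = 1.0
n_volumes = 901  # cifti series length reported in the question
assert n_volumes * TR == movie_lengths_s["MOVIE4"]
assert n_volumes * TR != movie_lengths_s["MOVIE1"]
```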

Perhaps you are looking at MOVIE1 instead?

-Keith


On Wed, Aug 15, 2018 at 11:11 AM, Lloyd, Dan  wrote:

> Hello,
>
> Movie 4 is 921 seconds long (with the last clip ending at 901 s, followed
> by 20 sec of "rest").  The 7T cifti image series has 901 images.  Should we
> assume that the scan begins at the same time as the movie, and thus the
> movie runs 20 seconds past the end of the scan?  (For all the movies, we're
> aiming for the exact match of screen events and functional images, and
> appreciate any pointers.  In general, does the movie begin as the scan
> begins, or is there a fixed offset, and if so is it the same for all four
> movies? )
>
> Thank you!
>
> Dan Lloyd
>
>
>
> Trinity College, Connecticut, USA  06106
> 860 297 2528
>
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] HCP tractography pipeline?

2018-07-20 Thread Keith Jamison
Is there any movement on creating an "official" HCP tractography pipeline?
The diffusion-tractography branch on github hasn't been touched in 4 years.
Is there any updated thinking on what the final pipeline and options will
look like?

Thanks!
-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] existing datasets for fmri+diffusion in children through young adults?

2018-04-27 Thread Keith Jamison
Hi HCP-ers,

Does anybody know of any existing publicly available datasets with fMRI and
diffusion that include ages, say, 8-25?  There are the 27 subjects from the
HCP Lifespan Pilot (12 of which are in this range), and I know there are a
number of large studies currently in the acquisition stage still, but are
there any other studies that have already made data publicly available?

Thanks!
-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] gradient nonlinearity correction question

2018-04-09 Thread Keith Jamison
First make sure you're using the right coefficient file, copied directly
from the scanner.  The Prisma should have a file called coeff_AS82.grad, so
the one you used in your *second* command above should be correct.

Second is to be absolutely sure your input is the uncorrected volume (*_ND).

If you include some matched screenshots of the uncorrected,
offline-corrected, and scanner-corrected volumes, we can maybe help
evaluate the difference.


On Mon, Apr 9, 2018 at 4:09 AM, Kristian Loewe <k...@kristianloewe.com> wrote:

> Thanks Keith,
>
> Cropping is turned off by default in the version of dcm2niix that I'm
> using but I re-ran the conversion with "-x n" anyway. I also used
> fslreorient2std as per your suggestion. Unfortunately, the result is still
> the same. Do you have any other ideas?
>
>
> Cheers,
>
> Kristian
>
>
> Quoting Keith Jamison <kjami...@umn.edu>:
>
> Some problems can arise if the NIFTI files are unexpectedly manipulated
>> prior to gradient_unwarp. Two things to check:
>>
>> 1. dcm2nii and dcm2niix have options to perform additional processing like
>> reorienting or cropping, some of which may be enabled by default. Make sure
>> those are all DISABLED. (For dcm2niix add "-x n", and for dcm2nii you can
>> add "-x N -r N".)
>> 2. We also usually run "fslreorient2std <input> <input>_new" and then
>> gradient_unwarp.py on <input>_new.
>>
>> -Keith
>>
>>
>> On Fri, Apr 6, 2018 at 12:05 PM, Kristian Loewe <k...@kristianloewe.com>
>> wrote:
>>
>> Hi,
>>>
>>> I would like to use gradunwarp for offline gradient nonlinearity
>>> correction of some data acquired on our local Siemens scanners. I used
>>> dcm2niix to convert the dicom data to nifti format. After applying
>>> gradunwarp to a T1 image in nifti format (the one that originally has
>>> the _ND suffix), I proceeded to compare the result with the
>>> Siemens-corrected T1 image. I expected that they would look very
>>> similar but in fact they look quite different. I am wondering if this
>>> is to be expected to some degree because of differences in the
>>> correction algorithms or what else might be the reason for this. Could
>>> it be the case, for example, that the wrong center is being used for
>>> some reason?
>>>
>>> I have tried this for a T1 image acquired on a Verio and another one
>>> from a Prisma using
>>>
>>>gradient_unwarp.py T1.nii.gz T1_gdc.nii.gz siemens -g coeff_AS097.grad
>>> -n
>>>
>>> and
>>>
>>>gradient_unwarp.py T1.nii.gz T1_gdc.nii.gz siemens -g coeff_AS82.grad
>>> -n
>>>
>>> respectively.
>>>
>>> I would really appreciate any help or advice you can provide.
>>>
>>> Cheers,
>>>
>>> Kristian
>>>
>>>
>>> Quoting Keith Jamison <kjami...@umn.edu>:
>>>
>>> > FYI, when available, you can enable it on the scanner in the
>>> > "Resolution->Filter" tab with the "Distortion Correction" checkbox.
>>> It's
>>> > often used for structural scans like MPRAGE, where you will see two
>>> DICOM
>>> > folders in the output: <series> and <series>_ND. ND means "No
>>> > Distortion [Correction]"... a very confusing choice of acronym.  You
>>> can
>>> > then compare the online corrected (not _ND) and offline using
>>> gradunwarp.
>>> >
>>> > -Keith
>>> >
>>> >
>>> > On Wed, Oct 19, 2016 at 4:44 PM, Glasser, Matthew <glass...@wustl.edu>
>>> > wrote:
>>> >
>>> >> Some do for some sequences, but because it is not uniformly applied
>>> and
>>> >> because they are likely not to use optimal interpolation algorithms,
>>> we
>>> >> prefer to do offline correction.
>>> >>
>>> >> Peace,
>>> >>
>>> >> Matt.
>>> >>
>>> >> From: <hcp-users-boun...@humanconnectome.org> on behalf of Antonin
>>> Skoch <
>>> >> a...@ikem.cz>
>>> >> Date: Wednesday, October 19, 2016 at 4:27 PM
>>> >> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
>>> >> Subject: [HCP-Users] gradient nonlinearity correction question
>>> >>
>>> >> Dear experts,
>>> >>
>>> >> during the set-u

Re: [HCP-Users] gradient nonlinearity correction question

2018-04-06 Thread Keith Jamison
Some problems can arise if the NIFTI files are unexpectedly manipulated
prior to gradient_unwarp. Two things to check:

1. dcm2nii and dcm2niix have options to perform additional processing like
reorienting or cropping, some of which may be enabled by default. Make sure
those are all DISABLED. (For dcm2niix add "-x n", and for dcm2nii you can
add "-x N -r N".)
2. We also usually run "fslreorient2std <input> <input>_new" and then
gradient_unwarp.py on <input>_new.

-Keith


On Fri, Apr 6, 2018 at 12:05 PM, Kristian Loewe <k...@kristianloewe.com>
wrote:

> Hi,
>
> I would like to use gradunwarp for offline gradient nonlinearity
> correction of some data acquired on our local Siemens scanners. I used
> dcm2niix to convert the dicom data to nifti format. After applying
> gradunwarp to a T1 image in nifti format (the one that originally has
> the _ND suffix), I proceeded to compare the result with the
> Siemens-corrected T1 image. I expected that they would look very
> similar but in fact they look quite different. I am wondering if this
> is to be expected to some degree because of differences in the
> correction algorithms or what else might be the reason for this. Could
> it be the case, for example, that the wrong center is being used for
> some reason?
>
> I have tried this for a T1 image acquired on a Verio and another one
> from a Prisma using
>
>gradient_unwarp.py T1.nii.gz T1_gdc.nii.gz siemens -g coeff_AS097.grad
> -n
>
> and
>
>gradient_unwarp.py T1.nii.gz T1_gdc.nii.gz siemens -g coeff_AS82.grad -n
>
> respectively.
>
> I would really appreciate any help or advice you can provide.
>
> Cheers,
>
> Kristian
>
>
> Quoting Keith Jamison <kjami...@umn.edu>:
>
> > FYI, when available, you can enable it on the scanner in the
> > "Resolution->Filter" tab with the "Distortion Correction" checkbox.  It's
> > often used for structural scans like MPRAGE, where you will see two DICOM
> > folders in the output: <series> and <series>_ND. ND means "No
> > Distortion [Correction]"... a very confusing choice of acronym.  You can
> > then compare the online corrected (not _ND) and offline using gradunwarp.
> >
> > -Keith
> >
> >
> > On Wed, Oct 19, 2016 at 4:44 PM, Glasser, Matthew <glass...@wustl.edu>
> > wrote:
> >
> >> Some do for some sequences, but because it is not uniformly applied and
> >> because they are likely not to use optimal interpolation algorithms, we
> >> prefer to do offline correction.
> >>
> >> Peace,
> >>
> >> Matt.
> >>
> >> From: <hcp-users-boun...@humanconnectome.org> on behalf of Antonin
> Skoch <
> >> a...@ikem.cz>
> >> Date: Wednesday, October 19, 2016 at 4:27 PM
> >> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
> >> Subject: [HCP-Users] gradient nonlinearity correction question
> >>
> >> Dear experts,
> >>
> >> during the set-up of gradunwarp scripts, it came to my mind, why scanner
> >> vendors standardly do not perform gradient nonlinearity correction
> directly
> >> on the scanner as part of on-line image reconstruction system (i.e. ICE
> in
> >> Siemens)?
> >>
> >> Regards,
> >>
> >> Antonin Skoch
> >>
> >>
> >> ___
> >> HCP-Users mailing list
> >> HCP-Users@humanconnectome.org
> >> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> >>
> >>
> >> --
> >>
> >> The materials in this message are private and may contain Protected
> >> Healthcare Information or other information of a sensitive nature. If
> you
> >> are not the intended recipient, be advised that any unauthorized use,
> >> disclosure, copying or the taking of any action in reliance on the
> contents
> >> of this information is strictly prohibited. If you have received this
> email
> >> in error, please immediately notify the sender via telephone or return
> mail.
> >>
> >> ___
> >> HCP-Users mailing list
> >> HCP-Users@humanconnectome.org
> >> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> >>
> >
> > ___
> > HCP-Users mailing list
> > HCP-Users@humanconnectome.org
> > http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Intensity Normalization 3_22

2018-03-29 Thread Keith Jamison
The command you ran locally is using the Jacobian as the bias field, which
is incorrect, and the "-div Jacobian -mul Jacobian" is just cancelling out
any effect (output has same bias as input fMRI). It should instead be "-div
BiasField.2 -mul Jacobian_MNI.2".
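A tiny numeric sketch of why "-div Jacobian -mul Jacobian" is a no-op (toy values standing in for voxel data and the Jacobian field):

```python
# Dividing and multiplying by the same field returns the input
# (up to floating point), so the input's bias field survives untouched.
fmri = [0.3, 1.7, 0.9]          # toy voxel values
jacobian = [0.8, 1.2, 1.5]      # toy field values
out = [v / j * j for v, j in zip(fmri, jacobian)]
assert all(abs(a - b) < 1e-12 for a, b in zip(out, fmri))  # bias unchanged
```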

That said, your original output looks like the bias field was incorrectly
estimated. Did you use --biascorrection=SEBASED in your call to
GenericfMRIVolumeProcessingPipeline? If you used --biascorrection=LEGACY
(might be the default?) you may also want to check if your
MNINonLinear/T1w_restore and MNINonLinear/T2w_restore look properly bias
corrected.

-Keith

On Thu, Mar 29, 2018 at 11:55 AM, Glasser, Matthew 
wrote:

> I think that might be an old version of the pipelines.  If you run on the
> latest version is it better?
>
> Peace,
>
> Matt.
>
> From:  on behalf of "Sanchez, Juan
> (NYSPI)" 
> Date: Thursday, March 29, 2018 at 10:23 AM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] Intensity Normalization 3_22
>
> Dear all
>
> We are using the 3_22 Pipelines to process our data.
>
> We noticed that the processed fMRI results had an unusual intensity
> inhomogeneity for ALL of our runs. (first attachment)
>
> We found that the error occurred during intensity normalization
>
>  Here:
>
> ${FSLDIR}/bin/fslmaths ${InputfMRI} $biascom $jacobiancom -mas
> ${BrainMask} -mas ${InputfMRI}_mask -thr 0 -ing 1 ${OutputfMRI} -odt
> float
>
>
> I copied the relevant files and ran the fslmaths command localy
>
> (InputfMRI = Task_fMRI_emomo_1)
>
> fslmaths Task_fMRI_emomo_1_nonlin.nii.gz -div Jacobian_MNI.2.nii.gz  -mul
> Jacobian_MNI.2.nii.gz  -mas Task_fMRI_emomo_1_nonlin_mask.nii.gz -thr 0
> -ing 1000 output -odt float
> The output (second attachment) looked correct.
>
>
> I have tried to replicate the error and have not been able to.
> Can anyone suggest a possible explanation?
> Thanks
> J
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Cleaning up intermediate files from the minimal pre-processing pipelines

2018-02-22 Thread Keith Jamison
This is not an official HCP answer, but I always delete the following after
functional preprocessing:

$subj/rfMRI_REST1_LR/MotionMatrices/MAT*.nii.gz
$subj/rfMRI_REST1_LR/OneStepResampling/prevols/
$subj/rfMRI_REST1_LR/OneStepResampling/postvols/

MotionMatrices/*.nii.gz alone accounts for nearly 20 GB of the ~30 GB for
each 15 minute scan.
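The deletions above can be sketched in Python (an unofficial sketch; the function name and the usage path are my own, and you should verify your final outputs are complete before deleting anything):

```python
import glob
import os
import shutil

def clean_fmri_intermediates(subj_dir, scan):
    """Delete the large fMRI-preprocessing intermediates listed above.
    Returns the list of removed paths."""
    removed = []
    # Per-volume motion matrices: the bulk of the space (~20 GB per scan)
    for f in glob.glob(os.path.join(subj_dir, scan, "MotionMatrices", "MAT*.nii.gz")):
        os.remove(f)
        removed.append(f)
    # One-step-resampling scratch volumes
    for d in ("prevols", "postvols"):
        path = os.path.join(subj_dir, scan, "OneStepResampling", d)
        if os.path.isdir(path):
            shutil.rmtree(path)
            removed.append(path)
    return removed

# Hypothetical usage:
# clean_fmri_intermediates("/data/100307", "rfMRI_REST1_LR")
```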

-Keith

On Wed, Feb 21, 2018 at 9:48 PM, Glasser, Matthew 
wrote:

> Yes that is particularly true when using the latest version of the
> pipelines.  There are also files in T2w and T1w that could be deleted, but
> will not save as much space as Mike’s suggestion.
>
> Peace,
>
> Matt.
>
> From: "Harms, Michael" 
> Date: Wednesday, February 21, 2018 at 12:18 PM
> To: "Cook, Philip" , "
> hcp-users@humanconnectome.org" 
> Cc: Matt Glasser 
> Subject: Re: [HCP-Users] Cleaning up intermediate files from the minimal
> pre-processing pipelines
>
>
>
> Hi,
>
> While the documentation is overall very good, I don’t know if I’d rely on
> that pdf for a detailed list of all the files that we recommend “keeping”.
> For that, you could download and unpack the packages for a subject with
> complete data (e.g., 100307), and see what you all get.
>
>
>
> As a relatively simpler clean-up, I **think** that if you keep the entire
> contents of anything in $subj/T1w and $subj/MNINonLinear that you’ll have
> most of what you need for any further downstream processing, while
> achieving substantial space savings.  i.e., Most of the intermediates in
> the fMRI processing end up in the $subj/$task directories, and I think that
> any that have been deemed important (e.g., .native.func.gii) have been
> copied to $subj/MNINonLinear/Results/$task.  @Matt: Can you confirm that?
>
>
>
> e.g., For a subject from the HCP-Young Adult study, the output from the
> MPP of a single REST run (e.g., $subj/MNINonLinear/Results/rfMRI_REST1_LR)
> is about 3.7 GB, whereas the contents of $subj/rfMRI_REST1_LR are 28 GB.
>
>
>
> Cheers,
>
> -MH
>
>
>
> --
>
> Michael Harms, Ph.D.
>
> ---
>
> Conte Center for the Neuroscience of Mental Disorders
>
> Washington University School of Medicine
>
> Department of Psychiatry, Box 8134
>
> 660 South Euclid Ave.
> St. Louis, MO  63110
> Tel: 314-747-6173
> Email: mha...@wustl.edu
>
>
>
> *From: * on behalf of "Cook,
> Philip" 
> *Date: *Wednesday, February 21, 2018 at 11:49 AM
> *To: *"hcp-users@humanconnectome.org" 
> *Subject: *[HCP-Users] Cleaning up intermediate files from the minimal
> pre-processing pipelines
>
>
>
> Hi,
>
>
>
> I am trying to reduce disk usage after running the HCP minimal
> pre-processing pipelines. I would like to clean up intermediate files but
> retain things needed for ongoing analysis. As a reference I have found a
> list of file names in
>
>
>
> WU-Minn HCP 900 Subjects Data Release: Reference Manual
> Appendix III - File Names and Directory Structure for 900 Subjects Data
> https://www.humanconnectome.org/storage/app/media/documentation/s900/HCP_S900_Release_Appendix_III.pdf
>
>
>
> I would like to retain these and clean up the remainder of the output. Are
> there any scripts available to help with this?
>
>
>
>
>
> Thanks
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Myelin Maps

2017-10-27 Thread Keith Jamison
wb_command -file-information <file> -only-map-names

So something like
[~,result] = system(['wb_command -file-information ' ...
    'S1200.All.MyelinMap_BC_MSMAll.32k_fs_LR.dscalar.nii -only-map-names']);
result = textscan(result, '%s');
subjects = regexprep(result{1}, '_MyelinMap', '');
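For anyone working outside MATLAB, the same suffix-stripping step can be sketched in Python (the input is whatever the wb_command call above prints; the function name is my own):

```python
import re

def map_names_to_subjects(map_names_text):
    # Strip the trailing "_MyelinMap" from each whitespace-separated map
    # name, mirroring the regexprep step in the MATLAB snippet above.
    return [re.sub(r"_MyelinMap$", "", tok) for tok in map_names_text.split()]

# map_names_to_subjects("100307_MyelinMap 100408_MyelinMap")
# -> ["100307", "100408"]
```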

-Keith



On Fri, Oct 27, 2017 at 8:20 AM, Claude Bajada 
wrote:

> Thanks very much Jenn,
>
>
> Found them and downloaded them. I just have one more question. I am now
> trying to do some analysis on a subset of the individual myelin maps
> (S1200.All.MyelinMap...). I have loaded the data into matlab but this just
> gives a [vertex x participant] matrix but none of the keys
>
>
> Is there a way to separate the 4D cifti into individual files? Or
> alternatively extract the order of participant numbers so that I can index
> the individual maps in matlab?
>
>
> Cheers,
>
> Claude
>
> On 25.10.2017 21:38, Elam, Jennifer wrote:
>
> Hi Claude,
>
> The S1200 group average myelin maps are part of the 1200 Subjects Group
> Average Data download here: https://db.humanconnectome.org/data/projects/HCP_1200.
> Individual subject myelin maps are also available in
> that dataset in the S1200.All.MyelinMap_BC_MSMAll.32k_fs_LR.dscalar.nii
> file (individual maps can be toggled through as Maps within the layer in
> Workbench).  In fact, there's a Workbench scene in the included *.scene
> file that is already set up to look at/compare both the group average and
> individual myelin maps and a PDF Workbench tutorial for the whole dataset.
>
>
> Best,
>
> Jenn
>
>
> Jennifer Elam, Ph.D.
> Scientific Outreach, Human Connectome Project
> Washington University School of Medicine
> Department of Neuroscience, Box 8108
> 660 South Euclid Avenue
> St. Louis, MO 63110
> 314-362-9387
> e...@wustl.edu
> www.humanconnectome.org
>
> --
> *From:* hcp-users-boun...@humanconnectome.org  humanconnectome.org>  on behalf of
> Bajada, Claude Julien  
> *Sent:* Wednesday, October 25, 2017 2:12:52 PM
> *To:* hcp-users@humanconnectome.org
> *Subject:* [HCP-Users] Myelin Maps
>
> Dear all,
>
> Do group average myelin maps in the 32k MSMall space exist? And if so
> where can I download them?
>
> Claude
>
>
> 
> 
> 
> 
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> 
> 
> 
> 
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] DWI in CCF Prisma protocol

2017-09-29 Thread Keith Jamison
That's right.

-Keith

On Fri, Sep 29, 2017 at 4:33 PM, Jeffrey Spielberg <
jspielb...@psych.udel.edu> wrote:

> One further question: it looks like there are no separate AP/PA fieldmaps for
> dMRI in the CCF protocol, whereas there are SE fieldmaps for resting - is
> that correct?  If so, is this because the b0 images from the dMRI can be
> used for this purpose (e.g., in topup)?
>
> Best,
> Jeff
>
> --
> Jeffrey M. Spielberg, Ph.D.
> Assistant Professor, Clinical Science
> Department of Psychological and Brain Sciences
> University of Delaware
> Newark, DE 19716
>
> Office: 307 McKinly Laboratory
> Lab:Suite 405 Wolf Hall
> Phone:302.831.7078
> Email:  j...@udel.edu<mailto:j...@udel.edu>
> Website:  http://sites.udel.edu/jmsp/
>
> On Sep 27, 2017, at 3:37 PM, Keith Jamison <kjami...@umn.edu<mailto:kjami
> s...@umn.edu>> wrote:
>
> To clarify, hopefully:
>
> "Vector set 1"
> dMRI_dir98_AP = 46 b=1500, 46 b=3000, 6 b=0
> dMRI_dir98_PA = exact same as dir98_AP, but phase encode=P>>A
>
> "Vector set 2"
> dMRI_dir99_AP = 47 b=1500, 46 b=3000, 6 b=0
> dMRI_dir99_PA = exact same as dir99_AP, but phase encode=P>>A
>
> So in total, we have 93 unique directions for the b=1500 shell, and 92 unique
> directions for the b=3000 shell, plus 12 interspersed b=0.  Each direction
> is acquired with phase-encode A>>P and then again with P>>A.
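As a quick arithmetic check on those counts (per-scan breakdown taken from this message):

```python
# Volume counts per vector set, as listed above.
dir98 = {"b1500": 46, "b3000": 46, "b0": 6}  # dMRI_dir98_AP (same for _PA)
dir99 = {"b1500": 47, "b3000": 46, "b0": 6}  # dMRI_dir99_AP (same for _PA)

assert sum(dir98.values()) == 98              # 98 volumes per dir98 scan
assert sum(dir99.values()) == 99              # 99 volumes per dir99 scan
assert dir98["b1500"] + dir99["b1500"] == 93  # unique b=1500 directions
assert dir98["b3000"] + dir99["b3000"] == 92  # unique b=3000 directions
assert dir98["b0"] + dir99["b0"] == 12        # interspersed b=0 volumes
```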
>
> For some added info, see attached screenshot, which is page 22 of the dMRI
> screenshot PDF distributed with the CCF protocol.
>
> -Keith
>
>
> On Wed, Sep 27, 2017 at 3:06 PM, Keith Jamison <kjami...@umn.edu> wrote:
> We essentially split the ~197 directions in half, and the two halves can't
> have the exact same number of directions due to how they are stored on the
> scanner, so "part 1" is 98 directions and "part 2" is 99. Each is then
> collected both AP and PA.  FYI, each scan is actually 92 diffusion volumes
> plus 6 or 7 non-diffusion "b=0" volumes.
>
> This is probably available elsewhere under the CCF documentation, but the
> DWI scans are adapted from here:
> https://www.humanconnectome.org/study-hcp-lifespan-pilot/phase1b-pilot-parameters
>
> The way the sequence runs on the scanner, we set a single "maximum"
> b-value (b=3000), by which each of the diffusion vectors in the table is
> scaled.  The entries that norm to 1 are b=3000, and the vectors that norm
> to 0.707 are b=1500.  Note: 0.707 = sqrt(0.5).  For whatever reason, this
> is how the scanner handles vector magnitudes.
>
> -Keith
>
>
> On Wed, Sep 27, 2017 at 2:29 PM, Glasser, Matthew <glass...@wustl.edu
> <mailto:glass...@wustl.edu>> wrote:
> In general one wants to get as many gradient directions as possible.
> Perhaps Mike knows the answer to your other question.
>
> Matt.
>
> On 9/28/17, 2:59 AM, "hcp-users-boun...@humanconnectome.org<mailto:hcp
> -users-boun...@humanconnectome.org> on behalf of
> Jeffrey Spielberg" <hcp-users-boun...@humanconnectome.org<mailto:hcp
> -users-boun...@humanconnectome.org> on behalf of
> jspielb...@psych.udel.edu<mailto:jspielb...@psych.udel.edu>> wrote:
>
> >Hi all,
> >
> >I'm interested in setting up a diffusion protocol similar to the CCF
> >protocol and have two questions.  First, what's the difference between
> >the dir98 and dir99 acquisitions (beyond having different vectors)?  It
> >looks like both sample on 2 shells, use the same b-values, and have about
> >50:50 vectors on each shell, so I'm not clear on why both are needed.
> >
> >Second, I was playing around with the Caruyer q-space sampling tool and
> >noticed that the output differs from the vectors provided as part of the
> >CCF protocol.  Specifically, for dir98, they match for 1 shell, but the
> >vector magnitudes in the other shell have been shortened to .7071.  My
> >guess is that this is necessary to tell the scanner to use the second
> >b-value (e.g., 1500).  Is that correct?
> >
> >Best,
> >Jeff
> >
> >
> >
> >--
> >Jeffrey M. Spielberg, Ph.D.
> >Assistant Professor, Clinical Science
> >Department of Psychological and Brain Sciences
> >University of Delaware
> >Newark, DE 19716
> >
> >Office: 307 McKinly Laboratory
> >Lab:Suite 405 Wolf Hall
> >Phone:302.831.7078<tel:(302)%20831-7078>
> >Email:  j...@udel.edu<mailto:j...@udel.edu><mailto:j...@udel.edu
> <mailto:j...@udel.edu>>
> >Website:  http://sites.udel.edu/jmsp/
> >
> >
> >___
> >HCP-Users mailing list
> >HCP-Users@humanconnectome.org<mailto:HCP-Users@humanconnectome.org>
> >http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>
>
> 
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] DWI in CCF Prisma protocol

2017-09-27 Thread Keith Jamison
We essentially split the ~197 directions in half, and the two halves can't
have the exact same number of directions due to how they are stored on the
scanner, so "part 1" is 98 directions and "part 2" is 99. Each is then
collected both AP and PA.  FYI, each scan is actually 92 diffusion volumes
plus 6 or 7 non-diffusion "b=0" volumes.

This is probably available elsewhere under the CCF documentation, but the
DWI scans are adapted from here:
https://www.humanconnectome.org/study-hcp-lifespan-pilot/phase1b-pilot-parameters

The way the sequence runs on the scanner, we set a single "maximum" b-value
(b=3000), by which each of the diffusion vectors in the table is scaled.
The entries that norm to 1 are b=3000, and the vectors that norm to 0.707
are b=1500.  Note: 0.707 = sqrt(0.5).  For whatever reason, this is how the
scanner handles vector magnitudes.
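In other words, the effective b-value scales with the squared norm of each direction vector (b_eff = b_max * |g|^2). A minimal sketch of that relationship:

```python
# Effective b-value implied by a scaled diffusion vector: the scanner applies
# b_max to unit vectors, so shorter vectors get quadratically smaller b-values.
def effective_bvalue(b_max, vec_norm):
    return round(b_max * vec_norm ** 2)

print(effective_bvalue(3000, 1.0))     # 3000
print(effective_bvalue(3000, 0.7071))  # 1500
```

This is why the 0.7071-norm entries in the dir98/dir99 tables correspond to the b=1500 shell.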

-Keith


On Wed, Sep 27, 2017 at 2:29 PM, Glasser, Matthew 
wrote:

> In general one wants to get as many gradient directions as possible.
> Perhaps Mike knows the answer to your other question.
>
> Matt.
>
> On 9/28/17, 2:59 AM, "hcp-users-boun...@humanconnectome.org on behalf of
> Jeffrey Spielberg"  jspielb...@psych.udel.edu> wrote:
>
> >Hi all,
> >
> >I'm interested in setting up a diffusion protocol similar to the CCF
> >protocol and have two questions.  First, what's the difference between
> >the dir98 and dir99 acquisitions (beyond having different vectors)?  It
> >looks like both sample on 2 shells, use the same b-values, and have about
> >50:50 vectors on each shell, so I'm not clear on why both are needed.
> >
> >Second, I was playing around with the Caruyer q-space sampling tool and
> >noticed that the output differs from the vectors provided as part of the
> >CCF protocol.  Specifically, for dir98, they match for 1 shell, but the
> >vector magnitudes in the other shell have been shortened to .7071.  My
> >guess is that this is necessary to tell the scanner to use the second
> >b-value (e.g., 1500).  Is that correct?
> >
> >Best,
> >Jeff
> >
> >
> >
> >--
> >Jeffrey M. Spielberg, Ph.D.
> >Assistant Professor, Clinical Science
> >Department of Psychological and Brain Sciences
> >University of Delaware
> >Newark, DE 19716
> >
> >Office: 307 McKinly Laboratory
> >Lab:Suite 405 Wolf Hall
> >Phone:302.831.7078
> >Email:  j...@udel.edu
> >Website:  http://sites.udel.edu/jmsp/
> >
> >
> >___
> >HCP-Users mailing list
> >HCP-Users@humanconnectome.org
> >http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] FIX error in fix_3_clean.m

2017-09-11 Thread Keith Jamison
In your FSL FIX installation, you can try making the same change in line
#307 of call_matlab.sh

-Keith

On Mon, Sep 11, 2017 at 6:34 PM, Sang-Young Kim <sykim...@gmail.com> wrote:

> I am running the script ${HOME}/projects/Pipelines/ICAFIX/hcp_fix.
>
> Is there a point to modify the script in order for working FIX?
>
> Thanks.
>
> Sang-Young
>
>
> On Sep 11, 2017, at 6:28 PM, Keith Jamison <kjami...@umn.edu> wrote:
>
> Try changing line #381 in ReApplyFix/ReApplyFixPipeline.sh from:
>
> ML_PATHS="addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');"
>
> to
>
> ML_PATHS="restoredefaultpath; addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');"
>
> -Keith
>
> On Mon, Sep 11, 2017 at 5:54 PM, Sang-Young Kim <sykim...@gmail.com>
> wrote:
>
>> Yes, I’m using CIFTI data. This is the interpreted version.
>>
>> Sang-Young
>>
>>
>> On Sep 11, 2017, at 5:59 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
>>
>> Are you using CIFTI data?  Is this the compiled version of matlab or the
>> interpreted version?
>>
>> Peace,
>>
>> Matt.
>>
>> On 9/11/17, 4:19 PM, "hcp-users-boun...@humanconnectome.org on behalf of
>> Sang-Young Kim" <hcp-users-boun...@humanconnectome.org on behalf of
>> sykim...@gmail.com> wrote:
>>
>> Dear HCP experts:
>>
> I'm struggling with FIX for many days. The problem is that FIX does not
>> generate cleaned data.
>> So I found error message in .fix.log file as shown below.
>> 
>> TR =
>>
>>   1
>>
>> Elapsed time is 2.053048 seconds.
>> Error using file_array/permute (line 10)
>> file_array objects can not be permuted.
>>
>> Error in read_gifti_file>gifti_Data (line 191)
>>  d = permute(reshape(d,fliplr(s.Dim)),length(s.Dim):-1:1);
>>
>> Error in read_gifti_file>gifti_DataArray (line 122)
>>  s(1).data = gifti_Data(t,c(i),s(1).attributes);
>>
>> Error in read_gifti_file (line 45)
>>  this.data{end+1} = gifti_DataArray(t,uid(i),filename);
>>
>> Error in gifti (line 74)
>>  this = read_gifti_file(varargin{1},giftistruct);
>>
>> Error in ciftiopen (line 16)
>> cifti = gifti([tmpname '.gii']);
>>
>> Error in fix_3_clean (line 48)
>> BO=ciftiopen('Atlas.dtseries.nii',WBC);
>>
>> **
>>
>> Actually I saw many posts with the same error. But no clear solution is
>> provided. I believe I set the path correctly and use last version of
>> GIFTI toolbox. And I have tested "ciftiopen" function directly on the
>> matlab. It worked in matlab.
>> But I have no idea why above error came up while running FIX.
>>
>> Any insights would be greatly appreciated.
>>
>> Thanks.
>>
>> Sang-Young Kim, Ph.D.
>>
>> Postdoctoral Fellow
>> Department of Radiology, University of Pittsburgh
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>>
>>
>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] FIX error in fix_3_clean.m

2017-09-11 Thread Keith Jamison
Try changing line #381 in ReApplyFix/ReApplyFixPipeline.sh from:

ML_PATHS="addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');"

to

ML_PATHS="restoredefaultpath; addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');"
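One way to script that one-line edit (an illustrative sketch only; keep a backup of the original pipeline script before patching it in place):

```python
# Prepend "restoredefaultpath; " inside the ML_PATHS assignment.  This is a
# plain text substitution mirroring the manual edit described above.
def patch_ml_paths(script_text):
    return script_text.replace('ML_PATHS="addpath',
                               'ML_PATHS="restoredefaultpath; addpath')

line = "ML_PATHS=\"addpath('${FSL_MATLAB_PATH}'); addpath('${FSL_FIX_CIFTIRW}');\""
print(patch_ml_paths(line))
```

Running the substitution over the whole file is safe here because only the ML_PATHS assignment starts with that exact text.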

-Keith

On Mon, Sep 11, 2017 at 5:54 PM, Sang-Young Kim  wrote:

> Yes, I’m using CIFTI data. This is the interpreted version.
>
> Sang-Young
>
>
> On Sep 11, 2017, at 5:59 PM, Glasser, Matthew  wrote:
>
> Are you using CIFTI data?  Is this the compiled version of matlab or the
> interpreted version?
>
> Peace,
>
> Matt.
>
> On 9/11/17, 4:19 PM, "hcp-users-boun...@humanconnectome.org on behalf of
> Sang-Young Kim"  sykim...@gmail.com> wrote:
>
> Dear HCP experts:
>
> I'm struggling with FIX for many days. The problem is that FIX does not
> generate cleaned data.
> So I found error message in .fix.log file as shown below.
> 
> TR =
>
>   1
>
> Elapsed time is 2.053048 seconds.
> Error using file_array/permute (line 10)
> file_array objects can not be permuted.
>
> Error in read_gifti_file>gifti_Data (line 191)
>  d = permute(reshape(d,fliplr(s.Dim)),length(s.Dim):-1:1);
>
> Error in read_gifti_file>gifti_DataArray (line 122)
>  s(1).data = gifti_Data(t,c(i),s(1).attributes);
>
> Error in read_gifti_file (line 45)
>  this.data{end+1} = gifti_DataArray(t,uid(i),filename);
>
> Error in gifti (line 74)
>  this = read_gifti_file(varargin{1},giftistruct);
>
> Error in ciftiopen (line 16)
> cifti = gifti([tmpname '.gii']);
>
> Error in fix_3_clean (line 48)
> BO=ciftiopen('Atlas.dtseries.nii',WBC);
>
> **
>
> Actually I saw many posts with the same error. But no clear solution is
> provided. I believe I set the path correctly and use last version of
> GIFTI toolbox. And I have tested "ciftiopen" function directly on the
> matlab. It worked in matlab.
> But I have no idea why above error came up while running FIX.
>
> Any insights would be greatly appreciated.
>
> Thanks.
>
> Sang-Young Kim, Ph.D.
>
> Postdoctoral Fellow
> Department of Radiology, University of Pittsburgh
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Behavioral performance data

2017-06-29 Thread Keith Jamison
I wasn't involved in these specific experiments, but I THINK the info
you're looking for is in the TAB files:

${StudyFolder}/${SubjectID}/MNINonLinear/Results/tfMRI_{TASK}_{RUN}/{TASK}_run*_TAB.txt

There's a row for each event (along with rows for other things you'll have
to ignore), and columns that seem to contain the reaction times

For example:
${StudyFolder}/${SubjectID}/MNINonLinear/Results/tfMRI_WM_LR/WM_run2_TAB.txt
has a column Stim.RT

${StudyFolder}/${SubjectID}/MNINonLinear/Results/tfMRI_GAMBLING_LR/GAMBLING_run2_TAB.txt
has a column QuestionMark.RT

I believe the GLM timing files in /EVs/ were all distilled from information
in the _TAB.txt file. You can cross-reference these event rows with some of
the .txt files in /EVs/ to help clarify event type, accuracy, etc.
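A sketch of pulling per-trial values out of one of these tab-delimited files in Python (the column name `Stim.RT` and the row layout are assumptions based on this thread; rows without a numeric value in the column, e.g. non-event rows, are skipped):

```python
import csv
import io

def column_values(tab_text, column="Stim.RT"):
    """Collect numeric values from one column of a tab-delimited TAB file."""
    values = []
    for row in csv.DictReader(io.StringIO(tab_text), delimiter="\t"):
        try:
            values.append(float(row[column]))
        except (KeyError, TypeError, ValueError):
            pass  # missing column, short row, or empty/non-numeric cell
    return values

# Made-up miniature TAB content for illustration.
demo = "Procedure\tStim.RT\nTrial\t523\nPulse\t\nTrial\t611\n"
print(column_values(demo))  # [523.0, 611.0]
```

In practice you would read the real file with `open(path)` instead of the demo string, and cross-reference the surviving rows against the /EVs/ text files as described above.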

Jenn/Greg, does this sound right, and are these task-specific TAB files
documented anywhere at the moment?

-Keith


On Thu, Jun 29, 2017 at 4:06 AM, Kristina Meyer <
kristina.me...@uni-greifswald.de> wrote:

> Hello Jenn, dear everyone,
>
> thanks for your reply and your suggestions!
> Actually, averaged reaction times and accuracy rates are not exactly what
> I am looking for, however. What I meant is the performance in each single
> trial.
> Is it possible to acquire these data? Information about the response
> latency and accuracy for each trial per person?
>
> Any help would be greatly appreciated. We are planning to model individual
> differences, so for our purposes, this trial-wise behavioral information
> would be immensely helpful.
>
> Thanks,
> Kristina
>
>
> Am Mittwoch, den 28-06-2017 um 23:23 schrieb Elam, Jennifer:
>
> Hi Kristina,
>
> Are you talking about the in-scanner performance data for the tfMRI tasks?
>
> Per run (RL or LR) in-scanner reaction time (RT) and accuracy (ACC)
> measures are available in each subject's tfMRI data in a {TASK}_Stats.csv
> file in the run specific task directory :
> ${StudyFolder}/${SubjectID}/MNINonLinear/Results/tfMRI_{TASK}_{RUN}/EVs/{TASK}_Stats.csv
>
> Per subject average RT and ACC measures for task variables across runs are
> available in the open access behavioral CSV that you can download in
> ConnectomeDB from the HCP S1200 Project page : https://db.humanconnectome.
> org/data/projects/HCP_1200  or from the Subject dashboard accessible
> through the "Explore Subjects" option in the HCP 1200 Subjects section of
> the multiprojects page you see upon login.
>
> Descriptions of key variables used in these data are available in:
> http://humanconnectome.org/storage/app/media/documentation/s1200/HCP_S1200_Release_Appendix_VI.pdf
>
> Let us know if this isn't the info you are looking for. If you haven't
> already looked, lots more information in the 1200 Subjects Reference
> Manual: http://humanconnectome.org/storage/app/media/documentation/s1200/HCP_S1200_Release_Reference_Manual.pdf
>
>
> Best,
>
> Jenn
>
>
> Jennifer Elam, Ph.D.
> Scientific Outreach, Human Connectome Project
> Washington University School of Medicine
> Department of Neuroscience, Box 8108
> 660 South Euclid Avenue
> St. Louis, MO 63110
> 314-362-9387
> e...@wustl.edu
> www.humanconnectome.org
>
> --
> *From:* hcp-users-boun...@humanconnectome.org  humanconnectome.org> on behalf of Kristina Meyer  greifswald.de>
> *Sent:* Wednesday, June 28, 2017 6:38:18 AM
> *To:* hcp-users@humanconnectome.org
> *Subject:* [HCP-Users] Behavioral performance data
>
> Dear everyone,
>
> is there behavioral data at single-trial level available, so that I can
> use the reaction times or accuracies in each trial of the psychometric (and
> localizer) tasks for structural equation modelling of individual
> differences in performance?
> So far, I was somehow unable to find these data or clear information - maybe
> I missed it? Thanks a lot for any guidance on this.
>
> Best wishes,
> Kristina
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] removing cross-hairs, slice labels when using wb_command -show-scene?

2017-05-02 Thread Keith Jamison
Thanks!

It might be useful to have some kind of preference override ability in
scene files for situations like this.

-Keith


On Tue, May 2, 2017 at 5:27 AM, Harwell, John <jharw...@wustl.edu> wrote:

> Hi Keith,
>
> The settings for volume cross-hairs and slice labels are only in the
> preferences file; they are not in the scene file.
>
> Both wb_view and wb_command use the same preferences so as long as you are
> on the same computer and do not change the status of crosshairs/labels,
> their appearance should be consistent.
>
> We use QSettings (look for Platform-Specific Notes in
> http://doc.qt.io/qt-5/qsettings.html) for the wb_view/wb_command
> preferences and the location and format of the preferences are system
> dependent.  The location and name of the preferences file will vary,
> depending upon the system.  On Mac, the name is 
> ~/Library/Preferences/edu.wustl.brainvis.Caret7.plist.
> On our Ubuntu Linux, I found the preferences in ~/.config/brainvis.wustl.edu/Caret7.conf.
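If a script is going to modify the preferences file before calling wb_command -show-scene, it may be worth backing it up first. A small sketch using the locations quoted above (exact paths may vary by platform and Workbench version):

```python
import os
import shutil

# Candidate preferences locations mentioned in this thread.
PREF_CANDIDATES = [
    "~/Library/Preferences/edu.wustl.brainvis.Caret7.plist",  # macOS
    "~/.config/brainvis.wustl.edu/Caret7.conf",               # Linux
]

def backup_prefs(candidates=PREF_CANDIDATES):
    """Copy each existing preferences file to <name>.bak; return the copies."""
    saved = []
    for path in (os.path.expanduser(p) for p in candidates):
        if os.path.isfile(path):
            shutil.copy2(path, path + ".bak")
            saved.append(path + ".bak")
    return saved
```

Restoring the .bak copy afterwards returns wb_view to its previous display settings.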
>
> John Harwell
>
> On May 2, 2017, at 8:58 AM, Keith Jamison <kjami...@umn.edu> wrote:
>
> I'm trying to save some images using wb_command -show-scene, and I can't
> figure out how to change settings for things like cross-hairs or slice
> labels.  I know I can change these in the wb_view Preferences window in the
> Volume tab, but I'm hoping to script the whole process and get consistent
> results.  So...
>
> 1. Is there a parameter in the .scene file to control these kind of
> display settings?
> 2. Where are the wb_view preferences stored?  Though not ideal, I guess my
> script could modify the preferences file before running wb_command
> -show-scene
>
> Thanks!
> Keith
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] removing cross-hairs, slice labels when using wb_command -show-scene?

2017-05-02 Thread Keith Jamison
I'm trying to save some images using wb_command -show-scene, and I can't
figure out how to change settings for things like cross-hairs or slice
labels.  I know I can change these in the wb_view Preferences window in the
Volume tab, but I'm hoping to script the whole process and get consistent
results.  So...

1. Is there a parameter in the .scene file to control these kind of display
settings?
2. Where are the wb_view preferences stored?  Though not ideal, I guess my
script could modify the preferences file before running wb_command
-show-scene

Thanks!
Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] unpacking 7T data

2017-02-16 Thread Keith Jamison
There are indeed identical redundant files in the different packages, for
the reason you suggest.  You can safely overwrite common files from one
package with another, provided you are in the same root directory when you
unzip.

-Keith

On Thu, Feb 16, 2017 at 3:32 PM, Ely, Benjamin 
wrote:

> Hi HCP team,
>
> I recently downloaded the complete 7T data (files listed below) and
> noticed that, when unpacking the zip archives, many outputs in the
> T1w/Results and MNINonLinear/Results directories are redundant and will
> prompt whether to overwrite. For example, if I unpack
> ${sub}_7T_REST_Volume_preproc.zip and then ${sub}_7T_REST_1.6mm_preproc.zip,
> the file 
> ${sub}/T1w/Results/rfMRI_REST1_7T_PA/rfMRI_REST1_7T_PA_sebased_reference.nii.gz
> (among many others) will be unpacked by both. Both versions of these
> redundant files appear to be identical and presumably were bundled into
> both archives in case people only downloaded one. Can you please confirm?
> If so, I'll use the "unzip -n" option to automatically skip
> unpacking/overwriting the redundant files.
>
> Thanks!
> -Ely
>
> ${sub}_7T_rfMRI_REST1_unproc.zip
> ${sub}_7T_rfMRI_REST2_unproc.zip
> ${sub}_7T_rfMRI_REST3_unproc.zip
> ${sub}_7T_rfMRI_REST4_unproc.zip
> ${sub}_7T_REST_1.6mm_fix.zip
> ${sub}_7T_REST_1.6mm_preproc.zip
> ${sub}_7T_REST_2mm_fix.zip
> ${sub}_7T_REST_2mm_preproc.zip
> ${sub}_7T_REST_Volume_fix.zip
> ${sub}_7T_REST_Volume_preproc.zip
> ${sub}_7T_REST_preproc_extended.zip
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Reading gifti into matlab

2017-02-10 Thread Keith Jamison
Have you tried "which gifti" in matlab to see if some other gifti package
is overriding yours?  I have a few neuroimaging toolboxes (SPM, for
example) that include their own gifti wrappers that don't seem compatible
with HCP gifti.

-Keith


On Fri, Feb 10, 2017 at 1:31 PM, Lauren N  wrote:

> Hi Matt,
>
> Thanks very much for your reply. Unfortunately, I'm now getting the same
> error message trying to read in the cifti files (after downloading the
> cifti scripts provided at the link you sent). Do you have any idea why this
> might be?
>
> Thanks,
> Lauren
>
> On Tue, Feb 7, 2017 at 4:50 PM, Glasser, Matthew 
> wrote:
>
>> If what you want to do is read the CIFTI into matlab, you can use the
>> ciftiopen tool for this (2B):
>>
>> https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ
>>
>> Peace,
>>
>> Matt.
>>
>> From:  on behalf of Lauren N <
>> lmnel...@gmail.com>
>> Date: Tuesday, February 7, 2017 at 3:30 PM
>> To: "hcp-users@humanconnectome.org" 
>> Subject: [HCP-Users] Reading gifti into matlab
>>
>> Hi all,
>>
>> I'm working with HCP data. Starting with some of the cifti dscalar.nii
>> files of individual subject myelin and task-based contrasts, I pulled out
>> specific columns from the task-based contrasts using -cifti-merge. Then, I
>> combined these individual one-column dscalar files into one cifti file with
>> 500 columns again using cifti-merge. Finally, I converted this to a metric
>> gifti using cifti-separate. The gifti looks great in wb_view. However, I'm
>> getting an error reading it into matlab with Guillaume Flandin's gifti
>> library:
>>
>> Error using read_gifti_file_standalone (line 20)
>> [GIFTI] Loading of XML file
>> AllWM14CLeft.func.gii failed.
>> Error in gifti (line 71)
>> this = read_gifti_file_standalone(varargin{1},giftistruct);
>>
>> Any help would be greatly appreciated! I'm using the 1.6 version of the
>> gifti library so I think it's up to date.
>> Thanks,
>> Lauren
>>
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>>
>> --
>>
>> The materials in this message are private and may contain Protected
>> Healthcare Information or other information of a sensitive nature. If you
>> are not the intended recipient, be advised that any unauthorized use,
>> disclosure, copying or the taking of any action in reliance on the contents
>> of this information is strictly prohibited. If you have received this email
>> in error, please immediately notify the sender via telephone or return mail.
>>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] 7T retinotopy MRI scanning sequence

2016-12-23 Thread Keith Jamison
The files that Jenn pointed to are the stimulus MASKS, which of course is
the most important aspect of the retinotopy stimulus, but it is worth
noting that the stimulus itself is not a standard checkerboard, but a
rapidly changing collage of real-world objects, faces, etc.  You can find
more information about the stimulus here: http://kendrickkay.net/analyzePRF/

-Keith

On Fri, Dec 23, 2016 at 12:50 PM, Elam, Jennifer  wrote:

> Hi Bo Yong,
>
> Attached is the 7T Retinotopy protocol off the scanner and here is a
> summary of scan parameters:
>
>
>
> The stimulus size/diameter was 16.0 deg for HCP 7T, which is especially
> important for the retinotopy data.  This is calculated from these
> measurements:
>   101.5cm eye to screen, 28.5 for stimulus diameter
> which means
>
> >> atan(28.5/2/101.5)/pi*180*2
> ans =
>   15.9835095188501 = 16 degrees
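The same arithmetic, reproduced in Python for convenience (values quoted above):

```python
import math

# Visual angle subtended by the retinotopy stimulus.
eye_to_screen_cm = 101.5
stim_diameter_cm = 28.5

deg = math.atan(stim_diameter_cm / 2 / eye_to_screen_cm) / math.pi * 180 * 2
print(round(deg, 2))  # 15.98, i.e. ~16 degrees
```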
>
> The retinotopy stimulus files are available here:
> https://db.humanconnectome.org/app/action/ChooseDownloadResources?
> project=HCP_Resources=7T_Movies=retinotopy_stimulus.zip
>
>
> Hopefully this gets you most of the information that you need. Please let
> me know what other info you need and I or Keith will dig it up.
>
>
> Best,
>
> Jenn
>
>
> Jennifer Elam, Ph.D.
> Scientific Outreach, Human Connectome Project
> Washington University School of Medicine
> Department of Neuroscience, Box 8108
> 660 South Euclid Avenue
> St. Louis, MO 63110
> 314-362-9387
> e...@wustl.edu
> www.humanconnectome.org
>
>
>
> --
> *From:* hcp-users-boun...@humanconnectome.org  humanconnectome.org> on behalf of Bo Yong Park 
> *Sent:* Thursday, December 22, 2016 11:22 PM
> *To:* hcp-users@humanconnectome.org
> *Subject:* [HCP-Users] 7T retinotopy MRI scanning sequence
>
>
> To HCP users.
>
>
>
> Hello, every HCP users.
>
>
>
> I am wondering where can I find detailed scanning parameters such as
> visual angle of eccentricity or polar angle, checkerboard segments, number
> of volumes and scanning duration etc.
>
>
>
> The data were released in the Connectome DB, but I cannot find any
> information about scanning sequences.
>
>
>
> Anyone knows about it?
>
>
>
> Sincerely,
>
> Bo-yong Park.
>
> by9...@gmail.com
>
>
>
> Sent from Mail for Windows 10
>
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] 7T structural data

2016-11-22 Thread Keith Jamison
There are 1.6mm downsampled versions of the T1 and T2, which I believe are
distributed with the 7T data.  These are just for correspondence with the
7T functional data, which was 1.6mm isotropic.

-Keith

On Tue, Nov 22, 2016 at 4:40 PM, Timothy Coalson  wrote:

> When we say structural, we mean T1w and T2w images.  I think you are
> referring to diffusion images.
>
> Please don't put hcp-users-request in the email recipients, it is for
> things like unsubscribing from hcp-users or changing subscription
> preferences.  hcp-users is the only list address you should send questions
> to.
>
> Tim
>
>
> On Tue, Nov 22, 2016 at 2:52 PM, Dev vasu  gmail.com> wrote:
>
>> Dear all,
>>
>> The structural data is 1.6mm isotropic, not 0.7 mm.
>>
>>
>> Thanks
>>
>> Vasudev
>>
>> On 22 November 2016 at 21:45, Glasser, Matthew 
>> wrote:
>>
>>> The structural data is 0.7mm isotropic and indeed there was not going to
>>> be enough of a benefit at 7T to collect another set.
>>>
>>> Peace,
>>>
>>> Matt.
>>>
>>> From:  on behalf of "Elam,
>>> Jennifer" 
>>> Date: Tuesday, November 22, 2016 at 10:31 AM
>>> To: Dev vasu , "<
>>> hcp-users@humanconnectome.org>" , "
>>> hcp-users-requ...@humanconnectome.org" >> tome.org>
>>> Subject: Re: [HCP-Users] 7T structural data
>>>
>>> Hi Vasudev,
>>>
>>> We did not collect 7T structural scans on any subjects because there
>>> were limited, if any, benefits over the 1.6mm 3T structural scans collected
>>> on all subjects. We decided to spend our limited 7T scan time on other
>>> scans that would add more valuable, unique data to the project.
>>>
>>>
>>> Best,
>>>
>>> Jenn
>>>
>>>
>>> Jennifer Elam, Ph.D.
>>> Scientific Outreach, Human Connectome Project
>>> Washington University School of Medicine
>>> Department of Neuroscience, Box 8108
>>> 660 South Euclid Avenue
>>> St. Louis, MO 63110
>>> 314-362-9387
>>> e...@wustl.edu
>>> www.humanconnectome.org
>>>
>>> --
>>> *From:* hcp-users-boun...@humanconnectome.org <
>>> hcp-users-boun...@humanconnectome.org> on behalf of Dev vasu <
>>> vasudevamurthy.devulapa...@gmail.com>
>>> *Sent:* Tuesday, November 22, 2016 10:27:32 AM
>>> *To:* ; hcp-users-request@humanconnect
>>> ome.org
>>> *Subject:* [HCP-Users] 7T structural data
>>>
>>> Dear Sir/madam,
>>>
>>> I would like to know if HCP has updated 7T structural scans; so far I
>>> only notice 3T 1.6mm structural data when i download any 7T subject.
>>>
>>> Kindly please update me when you have uploaded 7T structural scans.
>>>
>>> Or if possible please kindly send me some 7T structural data
>>>
>>>
>>> Thanks
>>> Vasudev
>>>
>>> ___
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>
>>> ___
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>
>>>
>>> --
>>>
>>> The materials in this message are private and may contain Protected
>>> Healthcare Information or other information of a sensitive nature. If you
>>> are not the intended recipient, be advised that any unauthorized use,
>>> disclosure, copying or the taking of any action in reliance on the contents
>>> of this information is strictly prohibited. If you have received this email
>>> in error, please immediately notify the sender via telephone or return mail.
>>>
>>
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] gradient nonlinearity correction question

2016-10-21 Thread Keith Jamison
FYI, when available, you can enable it on the scanner in the
"Resolution->Filter" tab with the "Distortion Correction" checkbox.  It's
often used for structural scans like MPRAGE, where you will see two DICOM
folders in the output:  and _ND.  ND means "No
Distortion [Correction]", a very confusing choice of acronym.  You can
then compare the online-corrected series (not _ND) against the offline result from gradunwarp.
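A minimal numpy sketch of such a comparison, differencing two co-registered volumes inside a crude foreground mask. The nibabel calls and filenames in the comment are hypothetical placeholders, not HCP pipeline conventions:

```python
import numpy as np

def foreground_mean_abs_diff(online, offline, thresh_frac=0.05):
    """Mean absolute voxel difference between two co-registered volumes,
    restricted to a crude intensity-based foreground mask."""
    online = np.asarray(online, dtype=float)
    offline = np.asarray(offline, dtype=float)
    mask = online > thresh_frac * online.max()  # keep only bright voxels
    return np.abs(online[mask] - offline[mask]).mean()

# With nibabel the comparison might look like (filenames hypothetical):
#   import nibabel as nib
#   a = nib.load("MPRAGE.nii.gz").get_fdata()               # online-corrected
#   b = nib.load("MPRAGE_ND_unwarped.nii.gz").get_fdata()   # gradunwarp output
#   print(foreground_mean_abs_diff(a, b))
```

Residual differences would mostly reflect the two interpolation schemes rather than the correction itself.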

-Keith


On Wed, Oct 19, 2016 at 4:44 PM, Glasser, Matthew 
wrote:

> Some do for some sequences, but because it is not uniformly applied and
> because they are likely not to use optimal interpolation algorithms, we
> prefer to do offline correction.
>
> Peace,
>
> Matt.
>
> From:  on behalf of Antonin Skoch <
> a...@ikem.cz>
> Date: Wednesday, October 19, 2016 at 4:27 PM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] gradient nonlinearity correction question
>
> Dear experts,
>
> during the set-up of gradunwarp scripts, it came to my mind, why scanner
> vendors standardly do not perform gradient nonlinearity correction directly
> on the scanner as part of on-line image reconstruction system (i.e. ICE in
> Siemens)?
>
> Regards,
>
> Antonin Skoch
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
> --
>
> The materials in this message are private and may contain Protected
> Healthcare Information or other information of a sensitive nature. If you
> are not the intended recipient, be advised that any unauthorized use,
> disclosure, copying or the taking of any action in reliance on the contents
> of this information is strictly prohibited. If you have received this email
> in error, please immediately notify the sender via telephone or return mail.
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] gradunwarp - "array index out of bounds" error in coef_file_parse routine

2016-10-17 Thread Keith Jamison
That is the correct fix for this problem.  It has no side effects. Some
scanners (e.g. the Siemens 7T) have coefficients of even higher orders, so you
can actually increase siemens_cas to 100 or so to accommodate the full
range.  That value just determines the preallocation, and the matrix is
trimmed down to the maximum appropriate size after reading the file.
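The preallocate-then-trim pattern described here can be sketched as follows. This is a toy illustration, not gradunwarp's actual parser; the `(n, m, value)` triple format is invented for the example, and real `.coef`/`.grad` files use a vendor-specific text format:

```python
import numpy as np

SIEMENS_CAS = 100  # generous preallocation; the gradunwarp default was 20


def parse_coefficients(triples):
    """Toy sketch: fill a preallocated coefficient matrix from
    (n, m, value) index/value triples, then trim it to the largest
    order actually present in the file."""
    coef = np.zeros((SIEMENS_CAS, SIEMENS_CAS))
    max_order = 0
    for n, m, value in triples:
        coef[n, m] = value  # IndexError here if preallocation is too small
        max_order = max(max_order, n, m)
    # Trim down to the maximum order read, so downstream code is unaffected
    return coef[:max_order + 1, :max_order + 1]
```

Because the matrix is trimmed after parsing, oversizing the preallocation costs only a little transient memory, which is why raising `siemens_cas` is safe.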

I think this is going to be fixed in the next release of gradunwarp (though
when that will be...?)

-Keith

On Sat, Oct 15, 2016 at 1:14 PM, Antonin Skoch  wrote:

> Dear experts,
>
> I have an issue with processing my data (acquired at Siemens Prisma 3T) by
> gradunwarp v1.0.2, downloaded from
>
> https://github.com/Washington-University/gradunwarp/releases
>
> It crashed with my coef.grad file by producing "array index out of bounds"
> error in coef_file_parse routine.
>
> I managed to get it working by increasing siemens_cas (default 20) in core/globals.py.
>
> The corrected images look reasonable, with deformation going to maximum
> approx 1-2 mm in off-isocenter regions.
>
> Since I am not familiar with internals of gradunwarp, I would like to
> assure myself the routine with my modification works OK and there is no
> other unwanted consequence by increasing siemens_cas. Could you please
> comment on?
>
> Regards,
>
> Antonin Skoch
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] MELODIC denoising vs. released ICA-FIX datasets

2016-10-06 Thread Keith Jamison
Steve,

Have you found an obvious downside to a shorter HPF cutoff of, say, 200
seconds?  Would the HCP FIX training data still apply or would the
classifier need to be retrained?

-Keith


On Thu, Oct 6, 2016 at 12:43 PM, Xu, Junqian  wrote:

>
> > On Oct 4, 2016, at 9:03 PM, Ely, Benjamin  wrote:
> >
> > Thanks Steve, that's good to keep in mind. Our acquisition is a single
> "HCP-like" 15 minute run at MB6, 2.1mm isotropic resolution, TR=1s, AP
> phase encoding, 32-channel head coil on a 3T Skyra; hopefully that gives us
> a similar temporal profile.
>
> It’s not so much about the acquisition protocol (though in the multiband
> era, MB is now related to scanner transmitter stability), but rather the
> scanner stability itself.
>
> > Sounds like I should compare our temporal stability against the HCP's -
> is there a measure you recommend?
>
> The HCP Connectome Skyra scanner has quite a small temporal drift and a very
> linear trend (specific to its gradient and body coil hardware
> characteristics), which a typical Skyra can’t match. To determine what
> detrending cutoff you should use for your site-specific data, you could run
> a 5-10min fBIRN phantom scan with your fMRI protocol and look at the
> scanner stability.
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] 7T retinotopy task script

2016-07-21 Thread Keith Jamison
Hi Ely,

We haven't distributed that code yet, but we probably should include that
in the next 7T data release.  The 7T tasks were written in matlab
(psychtoolbox), not eprime, and it may take some work to get them into a
distributable state.

-Keith


On Thu, Jul 21, 2016 at 5:28 PM, Ely, Benjamin 
wrote:

> Hi HCP team,
>
> I'd like to download the retinotopy eprime scripts used for the 7T HCP
> release. I was previously able to download the 3T scripts, but I can't find
> where the 7T scripts are hosted. Could you please advise?
>
> Many thanks,
> -Ely
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] S900 group average high res?

2016-06-24 Thread Keith Jamison
Is there a 164k version of the S900 group average surfaces and spec file?
I only see 32k on ConnectomeDB.

Thanks!
-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Problem with wb_command -volume-parcel-resampling? dimension mismatch?

2015-04-23 Thread Keith Jamison
Hi HCPers

I'm running into an error when running SubcorticalProcessing.sh in the
fMRISurface pipeline.  Specifically, when it tries to run:
wb_command -volume-parcel-resampling fmri_timecourse.nii.gz
ROIs/ROIs.voxres.nii.gz ROIs/Atlas_ROIs.voxres.nii.gz 

I get an error saying volume spacing or dimension mismatch.

I confirmed that fmri_timecourse, ROIs/ROIs.voxres, and
ROIs/Atlas_ROIs.voxres all have the same dimensions and resolution, and I
even tried using just the first volume in fmri_timecourse in case the 4th
dimension mismatch was an issue.

If I use wb_command -volume-parcel-resampling-generic it works just fine.

1. Am I misunderstanding the requirements for this command?  Or is there
something else going on?
2. Is there a downside to using -generic?

I attached a more specific console log.

Thanks!
-Keith

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
wb_command -volume-parcel-resampling \
Results/rfMRI_REST1_PA/rfMRI_REST1_PA.nii.gz \
ROIs/ROIs.1.6.nii.gz \
ROIs/Atlas_ROIs.1.6.nii.gz \
.67945744023041523418 \
Results/rfMRI_REST1_PA/rfMRI_REST1_PA_AtlasSubcortical_s1.60.nii.gz \
-fix-zeros

ERROR: volume spacing or dimension mismatch

 fslinfo ROIs/ROIs.1.6.nii.gz 
data_type      FLOAT32
dim1           113
dim2           136
dim3           113
dim4           1
datatype       16
pixdim1        1.60
pixdim2        1.60
pixdim3        1.60
pixdim4        0.00
cal_max        0.
cal_min        0.
file_type      NIFTI-1+

 fslinfo ROIs/Atlas_ROIs.1.6.nii.gz 
data_type      FLOAT32
dim1           113
dim2           136
dim3           113
dim4           1
datatype       16
pixdim1        1.60
pixdim2        1.60
pixdim3        1.60
pixdim4        0.00
cal_max        0.
cal_min        0.
file_type      NIFTI-1+

 fslinfo Results/rfMRI_REST1_PA/rfMRI_REST1_PA
data_type      FLOAT32
dim1           113
dim2           136
dim3           113
dim4           900
datatype       16
pixdim1        1.60
pixdim2        1.60
pixdim3        1.60
pixdim4        1.00
cal_max        0.
cal_min        0.
file_type      NIFTI-1+
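One possible explanation, offered as an assumption rather than a reading of the wb_command source: fslinfo only shows dimensions and voxel sizes, while a strict spacing check may compare the full voxel-to-world affine, so two volumes can agree in fslinfo yet still differ in origin or axis orientation. A minimal sketch of checking that directly:

```python
import numpy as np

def affines_match(aff_a, aff_b, tol=1e-4):
    """True if two 4x4 voxel-to-world affines agree within tol.
    Volumes can share dims and pixdims (all that fslinfo prints) yet
    differ in origin or orientation, which a strict check would reject."""
    return bool(np.allclose(np.asarray(aff_a), np.asarray(aff_b), atol=tol))

# With nibabel, using the filenames from the log above:
#   import nibabel as nib
#   a = nib.load("ROIs/ROIs.1.6.nii.gz")
#   b = nib.load("ROIs/Atlas_ROIs.1.6.nii.gz")
#   print(affines_match(a.affine, b.affine))
```

If the affines differ only by rounding, resampling one volume onto the other's grid (as the -generic variant effectively tolerates) would explain why that command succeeds.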


Re: [HCP-Users] Problem with wb_command -volume-parcel-resampling? dimension mismatch?

2015-04-23 Thread Keith Jamison
Here's a tar with the relevant data (only 1.6MB ... truncated the fmri to a
single volume):
https://drive.google.com/folderview?id=0B-jUIqj-goJ9cVJWcGM5d05RTUk&usp=sharing

-Keith


On Thu, Apr 23, 2015 at 3:25 PM, Glasser, Matthew glass...@wusm.wustl.edu
wrote:

  That seems like it shouldn’t be happening.  Tim C. might need you to
 upload that data somewhere so that he can debug.

  There isn’t any downside to using the generic command, but in your
 situation it should not be necessary.

  Peace,

  Matt.

   From: Keith Jamison kjami...@umn.edu
 Date: Thursday, April 23, 2015 at 3:21 PM
 To: HCP-Users@humanconnectome.org
 Subject: [HCP-Users] Problem with wb_command -volume-parcel-resampling?
 dimension mismatch?

  Hi HCPers

  I'm running into an error when running SubcorticalProcessing.sh in the
 fMRISurface pipeline.  Specifically, when it tries to run:
  wb_command -volume-parcel-resampling fmri_timecourse.nii.gz
 ROIs/ROIs.voxres.nii.gz ROIs/Atlas_ROIs.voxres.nii.gz 

  I get an error saying volume spacing or dimension mismatch.

  I confirmed that fmri_timecourse, ROIs/ROIs.voxres, and
 ROIs/Atlas_ROIs.voxres all have the same dimensions and resolution, and I
 even tried using just the first volume in fmri_timecourse in case the 4th
 dimension mismatch was an issue.

  If I use wb_command -volume-parcel-resampling-generic it works just fine.

  1. Am I misunderstanding the requirements for this command?  Or is there
 something else going on?
  2. Is there a downside to using -generic?

  I attached a more specific console log.

  Thanks!
  -Keith

 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users


  --

 The materials in this message are private and may contain Protected
 Healthcare Information or other information of a sensitive nature. If you
 are not the intended recipient, be advised that any unauthorized use,
 disclosure, copying or the taking of any action in reliance on the contents
 of this information is strictly prohibited. If you have received this email
 in error, please immediately notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users