[HCP-Users] Fwd: Re: Skyra - BOLD and diffusion MRI protocols

2016-12-06 Thread neuroimage analyst
Hi Matt,

32 channel head coil.

I was able to run the diffusion sequence on a phantom with TR/TE = 7000/106 ms.
Since the scan time at this TR is already 20 minutes, I was thinking of
acquiring just a b0 with PA phase encoding and the 64-direction, two-b-value
data plus b0 with AP. Will that be okay? My concern is that the SNR at this TE
will not be very good, so the data may not be great and at least 2 repetitions
may be desirable.

Thanks

Regards

-VM

On Dec 6, 2016 6:46 PM, "Glasser, Matthew"  wrote:

> What coil are you using?
>
> Peace,
>
> Matt.
>
> From:  on behalf of neuroimage
> analyst 
> Date: Tuesday, December 6, 2016 at 11:56 AM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] Skyra - BOLD and diffusion MRI protocols
>
> Hi, HCP Users and Developers.
>
> We have been trying to build a protocol for BOLD and DWI on our Skyra
> (VE11C) using the latest release of the HCP pulse sequences, but we haven't
> been completely successful with what we wanted to achieve.
>
> A) BOLD: We were hoping to get a TR of around 750 ms, res = 2 mm isotropic,
> and an echo spacing (ES) of approx 0.65 ms. We were able to get an echo
> spacing of 0.69 ms with all the other parameters at MB = 8. However, the
> sequence doesn't run and gives us "Max amplitude overflow on gradient z
> axis" once the MB factor exceeds 2. It appears that we then have to
> sacrifice TR and run with only MB = 2. Is there anybody with a Skyra who has
> been able to achieve what we are hoping for and is willing to share the
> protocol with us? Or could somebody guide us in resolving the amplitude
> overflow error?
>
> B) DWI: The idea was to have res = 1.5 mm isotropic, two b-values at 1000
> and 2500 s/mm^2, and 64 directions with the best BW and ES achievable. The
> sequence runs for 7 minutes out of a total of 13 minutes and then gives a
> gradient power amplifier error. On Michael Harms's suggestion, I adjusted
> the flip angles to 78/160 at a TR of 5500 ms with excite/refocus pulse
> durations of 3840 and 7680. There was a pop-up warning that the RF is
> clipped and that the maximum refocusing angle was 142 instead of 160. I
> loaded a 64-direction diffusion vector set and ran the sequence in "free"
> mode. Again, I would appreciate it if anybody with a Skyra who has been able
> to achieve what we are hoping for would share the protocol with us, along
> with the diffusion vector sets. Or could somebody guide us in resolving the
> error?
>
> Thank you.
>
> Regards
>
> --VM
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Skyra - BOLD and diffusion MRI protocols

2016-12-06 Thread Glasser, Matthew
What coil are you using?

Peace,

Matt.

From: 
mailto:hcp-users-boun...@humanconnectome.org>>
 on behalf of neuroimage analyst 
mailto:neuroimage.anal...@gmail.com>>
Date: Tuesday, December 6, 2016 at 11:56 AM
To: "hcp-users@humanconnectome.org" 
mailto:hcp-users@humanconnectome.org>>
Subject: [HCP-Users] Skyra - BOLD and diffusion MRI protocols

Hi, HCP Users and Developers.

We have been trying to build a protocol for BOLD and DWI on our Skyra (VE11C)
using the latest release of the HCP pulse sequences, but we haven't been
completely successful with what we wanted to achieve.

A) BOLD: We were hoping to get a TR of around 750 ms, res = 2 mm isotropic, and
an echo spacing (ES) of approx 0.65 ms. We were able to get an echo spacing of
0.69 ms with all the other parameters at MB = 8. However, the sequence doesn't
run and gives us "Max amplitude overflow on gradient z axis" once the MB factor
exceeds 2. It appears that we then have to sacrifice TR and run with only
MB = 2. Is there anybody with a Skyra who has been able to achieve what we are
hoping for and is willing to share the protocol with us? Or could somebody
guide us in resolving the amplitude overflow error?

B) DWI: The idea was to have res = 1.5 mm isotropic, two b-values at 1000 and
2500 s/mm^2, and 64 directions with the best BW and ES achievable. The sequence
runs for 7 minutes out of a total of 13 minutes and then gives a gradient power
amplifier error. On Michael Harms's suggestion, I adjusted the flip angles to
78/160 at a TR of 5500 ms with excite/refocus pulse durations of 3840 and 7680.
There was a pop-up warning that the RF is clipped and that the maximum
refocusing angle was 142 instead of 160. I loaded a 64-direction diffusion
vector set and ran the sequence in "free" mode. Again, I would appreciate it if
anybody with a Skyra who has been able to achieve what we are hoping for would
share the protocol with us, along with the diffusion vector sets. Or could
somebody guide us in resolving the error?

Thank you.

Regards

--VM


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Harms, Michael

Hi,
We don’t have separate lists of all the files in each package for every
subject.  This seems to be a bigger request than we initially appreciated,
so the Informatics group will likely have to give some thought as to whether
this is something they want to support.

cheers,
-MH

--
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110Email: mha...@wustl.edu




On 12/6/16, 3:46 PM, "hcp-users-boun...@humanconnectome.org on behalf of
Yaroslav Halchenko"  wrote:

Thank you Michael and Jennifer again!

On Tue, 06 Dec 2016, Elam, Jennifer wrote:
>To add to what Mike Harms just wrote, it still sounds like you are
>thinking of the packages as data bundles for groups of subjects.

Yes, indeed -- that was probably the main reason for a bit of disconnect in
our dialog, since I misused the term "package", which is indeed provided per
subject separately; "bundle" would have been the proper term.  I will try to
use the correct terms (package vs. bundle) appropriately from now on ;)


On Tue, 06 Dec 2016, Elam, Jennifer wrote:
> Subjects don't "belong" to a package because the packages don't contain
> groups of subjects. Instead, the packages are separate and specific to a
> particular subject ID, modality, processing level, and smoothing level
> (for fMRI).

On Tue, 06 Dec 2016, Harms, Michael wrote:
> To reiterate something Mike Hodge mentioned, any given package only
> contains files for one subject.  When interacting with ConnectomeDB, the
> subjects to download, and the particular packages to download, are two
> separate and distinct choices.

The choices (as in the interface) are indeed distinct, but not entirely
independent.

With my cleared-up understanding of the terminology, the question remains: is
there a list of files per package (per subject) which you use to generate the
packages from individual files?  Or is that information contained within the
underlying XNAT schema (so that it could perhaps be queried instead)?

>I think the confusion is that individual subject packages can be queued
>for download in groups in ConnectomeDB. We could provide you with lists of
>subjects for each searchable group in ConnectomeDB (e.g. U100, 7T data
>available, MEG data available), which some users may have used to queue all
>subjects in a group for download of specific data packages for their
>analysis.

Yes, that would be great.

>Another slight complication is that there is a little bit of overlap in
>the data in the packages themselves so that there is more than one package
>associated with some of the files. This was done so that users would have
>everything they need to do a certain analysis from a particular
>package. For example, the FIX and FIX-extended packages include a few of
>the same output files, although the FIX-extended package has a lot more of
>the FIX intermediate files to allow users to evaluate how FIX worked for a
>particular subject.

Yep -- that is why having lists of those files per package could be of
benefit.  They could, for example, be located under a packages/ "subfolder"
within each subject's folder on the S3 bucket, e.g. for

./100307/analysis_s12/100307_3T_tfMRI_EMOTION_analysis_s12.zip

there could be files like

./100307/packages/3T_tfMRI_EMOTION_analysis_s12.list

containing

100307/.xdlm/100307_3T_tfMRI_EMOTION_analysis_s12.json
100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/100307_tfMRI_EMOTION_level2_hp200_s12.dscalar.nii
100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/Contrasts.txt
100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/GrayordinatesStats/cope1.feat/cope1.dtseries.nii
100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/GrayordinatesStats/cope1.feat/logfile
100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/GrayordinatesStats/cope1.feat/mask.dtseries.nii
...
... (folders excluded, since they provide no additional information) ...

Do you see this as feasible?

Thank you in advance for your informative replies!
--
Yaroslav O. Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users




Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Yaroslav Halchenko
Thank you Michael and Jennifer again!

On Tue, 06 Dec 2016, Elam, Jennifer wrote:
>To add to what Mike Harms just wrote, it still sounds like you are
>thinking of the packages as data bundles for groups of subjects. 

Yes, indeed -- that was probably the main reason for a bit of disconnect in our
dialog, since I misused the term "package", which is indeed provided per
subject separately; "bundle" would have been the proper term.
I will try to use the correct terms (package vs. bundle) appropriately from now
on ;)


On Tue, 06 Dec 2016, Elam, Jennifer wrote:
> Subjects
>don't "belong" to a package because the packages don't contain groups of
>subjects. Instead, the packages are separate and specific to a particular
>subject ID, modality, processing level, and smoothing level (for fMRI).

On Tue, 06 Dec 2016, Harms, Michael wrote:
> To reiterate something Mike Hodge mentioned, any given package only
> contains files for one subject.  When interacting with ConnectomeDB, the
> subjects to download, and the particular packages to download, are two
> separate and distinct choices.

The choices (as in the interface) are indeed distinct, but not entirely
independent.

With my cleared-up understanding of the terminology, the question remains: is
there a list of files per package (per subject) which you use to generate the
packages from individual files?  Or is that information contained within the
underlying XNAT schema (so that it could perhaps be queried instead)?

>I think the confusion is that individual subject packages can be queued
>for download in groups in ConnectomeDB. We could provide you with lists of
>subjects for each searchable group in ConnectomeDB, (e.g. U100, 7T data
>available, MEG data available) with which some users may have queued all
>subjects in a group for download of specific data packages for their
>analysis.

Yes, that would be great.

>Another slight complication is that there is a little bit of overlap in
>the data in the packages themselves so that there is more than one package
>associated with some of the files. This was done so that users would have
>everything they need to do a certain analysis from a particular
>package. For example, the FIX and FIX-extended packages include a few of
>the same output files, although the FIX-extended package has a lot more of
>the FIX intermediate files to allow users to evaluate how FIX worked for a
>particular subject.

Yep -- that is why having lists of those files per package could be of
benefit.  They could, for example, be located under a packages/ "subfolder"
within each subject's folder on the S3 bucket, e.g. for

./100307/analysis_s12/100307_3T_tfMRI_EMOTION_analysis_s12.zip

there could be files like

./100307/packages/3T_tfMRI_EMOTION_analysis_s12.list

containing

100307/.xdlm/100307_3T_tfMRI_EMOTION_analysis_s12.json

100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/100307_tfMRI_EMOTION_level2_hp200_s12.dscalar.nii

100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/Contrasts.txt

100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/GrayordinatesStats/cope1.feat/cope1.dtseries.nii

100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/GrayordinatesStats/cope1.feat/logfile

100307/MNINonLinear/Results/tfMRI_EMOTION/tfMRI_EMOTION_hp200_s12_level2.feat/GrayordinatesStats/cope1.feat/mask.dtseries.nii
...
... (folders excluded, since they provide no additional information) ...

Do you see this as feasible?
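
(To make it concrete how such a .list could be consumed -- a rough, untested
sketch, assuming the hypothetical file above actually existed and that AWS CLI
access to the hcp-openaccess bucket is already configured:)

LIST=100307/packages/3T_tfMRI_EMOTION_analysis_s12.list   # hypothetical path from above
aws s3 cp "s3://hcp-openaccess/HCP/${LIST}" - |
while read -r path; do
    mkdir -p "$(dirname "${path}")"                       # recreate the same layout locally
    aws s3 cp "s3://hcp-openaccess/HCP/${path}" "${path}"
done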

Thank you in advance for your informative replies!
-- 
Yaroslav O. Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Elam, Jennifer
Hi Yaroslav,

To add to what Mike Harms just wrote, it still sounds like you are thinking of 
the packages as data bundles for groups of subjects. Subjects don't "belong" to 
a package because the packages don't contain groups of subjects. Instead, the 
packages are separate and specific to a particular subject ID, modality, 
processing level, and smoothing level (for fMRI).
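
As an illustration (using a package file name that appears elsewhere in this
thread), those dimensions are exactly what each package file name encodes; a
trivial shell sketch of pulling them apart:

pkg=100307_3T_tfMRI_EMOTION_analysis_s12.zip     # <subject>_<modality/task>_<level>_<smoothing>.zip
subject=${pkg%%_*}                               # -> 100307
package=${pkg#*_}; package=${package%.zip}       # -> 3T_tfMRI_EMOTION_analysis_s12
echo "subject=${subject} package=${package}"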


I think the confusion is that individual subject packages can be queued for
download in groups in ConnectomeDB. We could provide you with lists of subjects
for each searchable group in ConnectomeDB (e.g. U100, 7T data available, MEG
data available), which some users may have used to queue all subjects in a
group for download of specific data packages for their analysis.


Another slight complication is that there is a little bit of overlap in the 
data in the packages themselves so that there is more than one package 
associated with some of the files. This was done so that users would have 
everything they need to do a certain analysis from a particular package. For 
example, the FIX and FIX-extended packages include a few of the same output 
files, although the FIX-extended package has a lot more of the FIX intermediate 
files to allow users to evaluate how FIX worked for a particular subject.


Best,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



From: hcp-users-boun...@humanconnectome.org 
 on behalf of Yaroslav Halchenko 

Sent: Tuesday, December 6, 2016 1:47:19 PM
To: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] (files) listing for file bundles

On Tue, 06 Dec 2016, Elam, Jennifer wrote:
>A listing of the by subject unpacked files available, organized by
>modality and processing level, are available in Appendix 3 of the
>Reference Manual.

>The files are listed there as they unpack into a standard directory
>structure. They are not organized by ConnectomeDB packages, per se,
>because the listing is to be also applicable to users of Connectome in a
>Box and Amazon S3. If you really need a listing of the package contents
>themselves, we (Mike Hodge) can provide that separately.


On Tue, 06 Dec 2016, Hodge, Michael wrote:
> Yaroslav,

> Separate packages are created for each subject.  The list I sent just listed 
> packages for a couple of subjects to show you the files contained in the 
> packages by example.  There aren't packages that correspond to the unrelated 
> groups.  Each subject in the groups has a set of packages.  I could repeat 
> the unzip search across all subjects if you wish, but it would be a very 
> large file.


Dear Jennifer and Michael,

Thank you for your replies!

Let me describe my target use-case and why I was asking about packages, which
may make the situation a bit clearer.

The S3 HCP bucket provides convenient access to the dataset's individual files,
but they lack annotation as to which package(s) (as shipped from
db.humanconnectome.org) any particular file belongs to.  Such "packaging" is
important meta-information, since many folks analyze data from a particular
"package".

In the DataLad project we would like to provide access to data from the HCP
bucket, but we would also like to allow users to specify "packages" -- i.e.,
which specific sub-datasets to install (e.g., not all subjects, when not all
subjects belong to a package) and which files to download.  It would look like
the following, if we assume that 7T_MOVIE_2mm_preproc is the name of an example
package which contains the subset of subjects with 7T movie "task" data.

datalad search 7T_MOVIE_2mm_preproc | xargs datalad install

to install those subjects' datasets (git-annex repositories without actual data
by default), and then (hypothetical API)

datalad get -r --annex-meta 7T_MOVIE_2mm_preproc

to actually fetch the data files present in the 7T_MOVIE_2mm_preproc package.

Similarly, they could later run

Since, I guess, you are already composing those "packages" somehow from a list
of rules/files, I thought that maybe those lists could be shared, so we could
embed that information in our annexed HCP repositories and not incur any
additional "development/setup/maintenance cost" (such as dumping listings of
the generated .zip files).  Then, if these were just plain .txt files with
listings (easily machine readable, unlike formatted PDFs), people could also
easily come up with one-line shell scripts to fetch the files corresponding to
a package from S3.

So, overall, the listings produced by Michael would work, but I wondered if we
could avoid (re)creating them and possibly make them even better for machine
parsing (e.g., one .txt file per package which would include the paths of the
files for all the subjects in that package).

Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Harms, Michael

Hi Yaroslav,
To reiterate something Mike Hodge mentioned, any given package only
contains files for one subject.  When interacting with ConnectomeDB, the
subjects to download, and the particular packages to download, are two
separate and distinct choices.

cheers,
-MH

--
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.Tel: 314-747-6173
St. Louis, MO  63110Email: mha...@wustl.edu




On 12/6/16, 1:47 PM, "hcp-users-boun...@humanconnectome.org on behalf of
Yaroslav Halchenko"  wrote:

On Tue, 06 Dec 2016, Elam, Jennifer wrote:
>A listing of the by subject unpacked files available, organized by
>modality and processing level, are available in Appendix 3 of the
>Reference Manual.

>The files are listed there as they unpack into a standard directory
>structure. They are not organized by ConnectomeDB packages, per se,
>because the listing is to be also applicable to users of Connectome in a
>Box and Amazon S3. If you really need a listing of the package contents
>themselves, we (Mike Hodge) can provide that separately.


On Tue, 06 Dec 2016, Hodge, Michael wrote:
> Yaroslav,

> Separate packages are created for each subject.  The list I sent just
>listed packages for a couple of subjects to show you the files contained
>in the packages by example.  There aren't packages that correspond to the
>unrelated groups.  Each subject in the groups has a set of packages.  I
>could repeat the unzip search across all subjects if you wish, but it
>would be a very large file.


Dear Jennifer and Michael,

Thank you for your replies!

Let me describe my target use-case and why I was asking about packages, which
may make the situation a bit clearer.

The S3 HCP bucket provides convenient access to the dataset's individual files,
but they lack annotation as to which package(s) (as shipped from
db.humanconnectome.org) any particular file belongs to.  Such "packaging" is
important meta-information, since many folks analyze data from a particular
"package".

In the DataLad project we would like to provide access to data from the HCP
bucket, but we would also like to allow users to specify "packages" -- i.e.,
which specific sub-datasets to install (e.g., not all subjects, when not all
subjects belong to a package) and which files to download.  It would look like
the following, if we assume that 7T_MOVIE_2mm_preproc is the name of an example
package which contains the subset of subjects with 7T movie "task" data.

datalad search 7T_MOVIE_2mm_preproc | xargs datalad install

to install those subjects' datasets (git-annex repositories without actual data
by default), and then (hypothetical API)

datalad get -r --annex-meta 7T_MOVIE_2mm_preproc

to actually fetch the data files present in the 7T_MOVIE_2mm_preproc package.

Similarly, they could later run

Since, I guess, you are already composing those "packages" somehow from a list
of rules/files, I thought that maybe those lists could be shared, so we could
embed that information in our annexed HCP repositories and not incur any
additional "development/setup/maintenance cost" (such as dumping listings of
the generated .zip files).  Then, if these were just plain .txt files with
listings (easily machine readable, unlike formatted PDFs), people could also
easily come up with one-line shell scripts to fetch the files corresponding to
a package from S3.

So, overall, the listings produced by Michael would work, but I wondered if we
could avoid (re)creating them and possibly make them even better for machine
parsing (e.g., one .txt file per package which would include the paths of the
files for all the subjects in that package).

BTW, the 7T_MOVIE_2mm_preproc set of files is not yet on the S3 bucket.  When
will that portion be uploaded?

--
Yaroslav O. Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Yaroslav Halchenko
On Tue, 06 Dec 2016, Elam, Jennifer wrote:
>A listing of the by subject unpacked files available, organized by
>modality and processing level, are available in Appendix 3 of the
>Reference Manual.

>The files are listed there as they unpack into a standard directory
>structure. They are not organized by ConnectomeDB packages, per se,
>because the listing is to be also applicable to users of Connectome in a
>Box and Amazon S3. If you really need a listing of the package contents
>themselves, we (Mike Hodge) can provide that separately.


On Tue, 06 Dec 2016, Hodge, Michael wrote:
> Yaroslav,

> Separate packages are created for each subject.  The list I sent just listed 
> packages for a couple of subjects to show you the files contained in the 
> packages by example.  There aren't packages that correspond to the unrelated 
> groups.  Each subject in the groups has a set of packages.  I could repeat 
> the unzip search across all subjects if you wish, but it would be a very 
> large file.


Dear Jennifer and Michael,

Thank you for your replies!

Let me describe my target use-case and why I was asking about packages, which
may make the situation a bit clearer.

The S3 HCP bucket provides convenient access to the dataset's individual files,
but they lack annotation as to which package(s) (as shipped from
db.humanconnectome.org) any particular file belongs to.  Such "packaging" is
important meta-information, since many folks analyze data from a particular
"package".

In the DataLad project we would like to provide access to data from the HCP
bucket, but we would also like to allow users to specify "packages" -- i.e.,
which specific sub-datasets to install (e.g., not all subjects, when not all
subjects belong to a package) and which files to download.  It would look like
the following, if we assume that 7T_MOVIE_2mm_preproc is the name of an example
package which contains the subset of subjects with 7T movie "task" data.

datalad search 7T_MOVIE_2mm_preproc | xargs datalad install

to install those subjects' datasets (git-annex repositories without actual data
by default), and then (hypothetical API)

datalad get -r --annex-meta 7T_MOVIE_2mm_preproc

to actually fetch the data files present in the 7T_MOVIE_2mm_preproc package.

Similarly, they could later run

Since, I guess, you are already composing those "packages" somehow from a list
of rules/files, I thought that maybe those lists could be shared, so we could
embed that information in our annexed HCP repositories and not incur any
additional "development/setup/maintenance cost" (such as dumping listings of
the generated .zip files).  Then, if these were just plain .txt files with
listings (easily machine readable, unlike formatted PDFs), people could also
easily come up with one-line shell scripts to fetch the files corresponding to
a package from S3.

So, overall, the listings produced by Michael would work, but I wondered if we
could avoid (re)creating them and possibly make them even better for machine
parsing (e.g., one .txt file per package which would include the paths of the
files for all the subjects in that package).
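
(Just to make that concrete -- a rough, untested sketch, assuming the
already-generated package zips sit on disk in the ./<subject>/<group>/ layout
from which Michael's listing was produced; it would dump one aggregate listing
per package name across all subjects:)

mkdir -p package-lists
for zip in ./*/*/*_*.zip; do
    base=$(basename "${zip}" .zip)    # e.g. 100307_3T_tfMRI_EMOTION_analysis_s12
    pkg=${base#*_}                    # strip the leading subject ID
    unzip -Z1 "${zip}" >> "package-lists/${pkg}.list"   # file paths only, one per line
done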

BTW, the 7T_MOVIE_2mm_preproc set of files is not yet on the S3 bucket.  When
will that portion be uploaded?

-- 
Yaroslav O. Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



[HCP-Users] Skyra - BOLD and diffusion MRI protocols

2016-12-06 Thread neuroimage analyst
Hi, HCP Users and Developers.

We have been trying to build a protocol for BOLD and DWI on our Skyra (VE11C)
using the latest release of the HCP pulse sequences, but we haven't been
completely successful with what we wanted to achieve.

A) BOLD: We were hoping to get a TR of around 750 ms, res = 2 mm isotropic, and
an echo spacing (ES) of approx 0.65 ms. We were able to get an echo spacing of
0.69 ms with all the other parameters at MB = 8. However, the sequence doesn't
run and gives us "Max amplitude overflow on gradient z axis" once the MB factor
exceeds 2. It appears that we then have to sacrifice TR and run with only
MB = 2. Is there anybody with a Skyra who has been able to achieve what we are
hoping for and is willing to share the protocol with us? Or could somebody
guide us in resolving the amplitude overflow error?

B) DWI: The idea was to have res = 1.5 mm isotropic, two b-values at 1000 and
2500 s/mm^2, and 64 directions with the best BW and ES achievable. The sequence
runs for 7 minutes out of a total of 13 minutes and then gives a gradient power
amplifier error. On Michael Harms's suggestion, I adjusted the flip angles to
78/160 at a TR of 5500 ms with excite/refocus pulse durations of 3840 and 7680.
There was a pop-up warning that the RF is clipped and that the maximum
refocusing angle was 142 instead of 160. I loaded a 64-direction diffusion
vector set and ran the sequence in "free" mode. Again, I would appreciate it if
anybody with a Skyra who has been able to achieve what we are hoping for would
share the protocol with us, along with the diffusion vector sets. Or could
somebody guide us in resolving the error?

Thank you.

Regards

--VM

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Hodge, Michael

Yaroslav,

Separate packages are created for each subject.  The list I sent just listed 
packages for a couple of subjects to show you the files contained in the 
packages by example.  There aren't packages that correspond to the unrelated 
groups.  Each subject in the groups has a set of packages.  I could repeat the 
unzip search across all subjects if you wish, but it would be a very large file.
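
(Roughly, that repeat search would just be something like the following --
sketch only, assuming the package zips sit under ./<subject>/<group>/ as in the
listing I sent; the output file name here is made up:)

for zip in ./*/*/*.zip; do
    unzip -l "${zip}"          # each block begins with an "Archive:  ./..." header line
done > package-unzip-all.txt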

Regards,

Mike


-Original Message-
From: hcp-users-boun...@humanconnectome.org 
[mailto:hcp-users-boun...@humanconnectome.org] On Behalf Of Yaroslav Halchenko
Sent: Tuesday, December 6, 2016 11:25 AM
To: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] (files) listing for file bundles


On Tue, 06 Dec 2016, Hodge, Michael wrote:

> I've attached unzip -l output for the packages of a couple of subjects.   One 
> has MEG data in addition to the standard 3T data,  and the other has 7T data 
> in addition to the 3T data, so you can see what's in the packages.

Thank you Michael!

So -- would that be a fair list of "bundles/packages" (with the counts of
subjects you have listed) below?

IIRC there are also packages which are defined by groups of subjects (U64,
UR100 in S500, U120 in the S900 release) -- those are not reflected here, correct?

$> sed -ne 's,Archive: *./[0-9]*/\([^/]*\)/[0-9]*_\(.*\)\.zip,\2,gp' package-unzip.txt | sort | uniq -c
  2 3T_Diffusion_preproc
  2 3T_Diffusion_preproc_S500_to_S900_extension
  2 3T_Diffusion_unproc
  2 3T_rfMRI_REST1_fixextended
  2 3T_rfMRI_REST1_fixextended_S500_to_S900_extension
  2 3T_rfMRI_REST1_preproc
  2 3T_rfMRI_REST1_preproc_S500_to_S900_extension
  2 3T_rfMRI_REST1_unproc
  2 3T_rfMRI_REST2_fixextended
  1 3T_rfMRI_REST2_fixextended_S500_to_S900_extension
  2 3T_rfMRI_REST2_preproc
  2 3T_rfMRI_REST2_preproc_S500_to_S900_extension
  2 3T_rfMRI_REST2_unproc
  2 3T_rfMRI_REST_fix
  2 3T_rfMRI_REST_fix_S500_to_S900_extension
  1 3T_Structural_1.6mm_preproc
  2 3T_Structural_preproc
  2 3T_Structural_preproc_extended
  2 3T_Structural_preproc_S500_to_S900_extension
  2 3T_Structural_unproc
  2 3T_tfMRI_EMOTION_analysis_s12
  2 3T_tfMRI_EMOTION_analysis_s2
  2 3T_tfMRI_EMOTION_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_EMOTION_analysis_s4
  2 3T_tfMRI_EMOTION_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_EMOTION_analysis_s8
  2 3T_tfMRI_EMOTION_preproc
  2 3T_tfMRI_EMOTION_preproc_S500_to_S900_extension
  2 3T_tfMRI_EMOTION_unproc
  2 3T_tfMRI_EMOTION_volume_s4
  2 3T_tfMRI_GAMBLING_analysis_s12
  2 3T_tfMRI_GAMBLING_analysis_s2
  2 3T_tfMRI_GAMBLING_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_GAMBLING_analysis_s4
  2 3T_tfMRI_GAMBLING_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_GAMBLING_analysis_s8
  2 3T_tfMRI_GAMBLING_preproc
  2 3T_tfMRI_GAMBLING_preproc_S500_to_S900_extension
  2 3T_tfMRI_GAMBLING_unproc
  2 3T_tfMRI_GAMBLING_volume_s4
  2 3T_tfMRI_LANGUAGE_analysis_s12
  2 3T_tfMRI_LANGUAGE_analysis_s2
  2 3T_tfMRI_LANGUAGE_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_LANGUAGE_analysis_s4
  2 3T_tfMRI_LANGUAGE_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_LANGUAGE_analysis_s8
  2 3T_tfMRI_LANGUAGE_preproc
  2 3T_tfMRI_LANGUAGE_preproc_S500_to_S900_extension
  2 3T_tfMRI_LANGUAGE_unproc
  2 3T_tfMRI_LANGUAGE_volume_s4
  2 3T_tfMRI_MOTOR_analysis_s12
  2 3T_tfMRI_MOTOR_analysis_s2
  2 3T_tfMRI_MOTOR_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_MOTOR_analysis_s4
  2 3T_tfMRI_MOTOR_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_MOTOR_analysis_s8
  2 3T_tfMRI_MOTOR_preproc
  2 3T_tfMRI_MOTOR_preproc_S500_to_S900_extension
  2 3T_tfMRI_MOTOR_unproc
  2 3T_tfMRI_MOTOR_volume_s4
  2 3T_tfMRI_RELATIONAL_analysis_s12
  2 3T_tfMRI_RELATIONAL_analysis_s2
  2 3T_tfMRI_RELATIONAL_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_RELATIONAL_analysis_s4
  2 3T_tfMRI_RELATIONAL_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_RELATIONAL_analysis_s8
  2 3T_tfMRI_RELATIONAL_preproc
  2 3T_tfMRI_RELATIONAL_preproc_S500_to_S900_extension
  2 3T_tfMRI_RELATIONAL_unproc
  2 3T_tfMRI_RELATIONAL_volume_s4
  2 3T_tfMRI_SOCIAL_analysis_s12
  2 3T_tfMRI_SOCIAL_analysis_s2
  2 3T_tfMRI_SOCIAL_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_SOCIAL_analysis_s4
  2 3T_tfMRI_SOCIAL_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_SOCIAL_analysis_s8
  2 3T_tfMRI_SOCIAL_preproc
  2 3T_tfMRI_SOCIAL_preproc_S500_to_S900_extension
  2 3T_tfMRI_SOCIAL_unproc
  2 3T_tfMRI_SOCIAL_volume_s4
  2 3T_tfMRI_WM_analysis_s12
  2 3T_tfMRI_WM_analysis_s2
  2 3T_tfMRI_WM_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_WM_analysis_s4
  2 3T_tfMRI_WM_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_WM_analysis_s8
  2 3T_tfMRI_WM_preproc

Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Yaroslav Halchenko

On Tue, 06 Dec 2016, Hodge, Michael wrote:

> I've attached unzip -l output for the packages of a couple of subjects.   One 
> has MEG data in addition to the standard 3T data,  and the other has 7T data 
> in addition to the 3T data, so you can see what's in the packages.

Thank you Michael!  

So -- would that be the fair list of "bundles/packages" (with counts of
subjects you have listed) below?

IIRC there are also packages which are defined by groups of subjects'  (U64,
UR100 in S500, U120 in S900 release) -- those are not reflected here, correct?

$> sed -ne 's,Archive: *./[0-9]*/\([^/]*\)/[0-9]*_\(.*\)\.zip,\2,gp' 
package-unzip.txt | sort | uniq -c
  2 3T_Diffusion_preproc
  2 3T_Diffusion_preproc_S500_to_S900_extension
  2 3T_Diffusion_unproc
  2 3T_rfMRI_REST1_fixextended
  2 3T_rfMRI_REST1_fixextended_S500_to_S900_extension
  2 3T_rfMRI_REST1_preproc
  2 3T_rfMRI_REST1_preproc_S500_to_S900_extension
  2 3T_rfMRI_REST1_unproc
  2 3T_rfMRI_REST2_fixextended
  1 3T_rfMRI_REST2_fixextended_S500_to_S900_extension
  2 3T_rfMRI_REST2_preproc
  2 3T_rfMRI_REST2_preproc_S500_to_S900_extension
  2 3T_rfMRI_REST2_unproc
  2 3T_rfMRI_REST_fix
  2 3T_rfMRI_REST_fix_S500_to_S900_extension
  1 3T_Structural_1.6mm_preproc
  2 3T_Structural_preproc
  2 3T_Structural_preproc_extended
  2 3T_Structural_preproc_S500_to_S900_extension
  2 3T_Structural_unproc
  2 3T_tfMRI_EMOTION_analysis_s12
  2 3T_tfMRI_EMOTION_analysis_s2
  2 3T_tfMRI_EMOTION_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_EMOTION_analysis_s4
  2 3T_tfMRI_EMOTION_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_EMOTION_analysis_s8
  2 3T_tfMRI_EMOTION_preproc
  2 3T_tfMRI_EMOTION_preproc_S500_to_S900_extension
  2 3T_tfMRI_EMOTION_unproc
  2 3T_tfMRI_EMOTION_volume_s4
  2 3T_tfMRI_GAMBLING_analysis_s12
  2 3T_tfMRI_GAMBLING_analysis_s2
  2 3T_tfMRI_GAMBLING_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_GAMBLING_analysis_s4
  2 3T_tfMRI_GAMBLING_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_GAMBLING_analysis_s8
  2 3T_tfMRI_GAMBLING_preproc
  2 3T_tfMRI_GAMBLING_preproc_S500_to_S900_extension
  2 3T_tfMRI_GAMBLING_unproc
  2 3T_tfMRI_GAMBLING_volume_s4
  2 3T_tfMRI_LANGUAGE_analysis_s12
  2 3T_tfMRI_LANGUAGE_analysis_s2
  2 3T_tfMRI_LANGUAGE_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_LANGUAGE_analysis_s4
  2 3T_tfMRI_LANGUAGE_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_LANGUAGE_analysis_s8
  2 3T_tfMRI_LANGUAGE_preproc
  2 3T_tfMRI_LANGUAGE_preproc_S500_to_S900_extension
  2 3T_tfMRI_LANGUAGE_unproc
  2 3T_tfMRI_LANGUAGE_volume_s4
  2 3T_tfMRI_MOTOR_analysis_s12
  2 3T_tfMRI_MOTOR_analysis_s2
  2 3T_tfMRI_MOTOR_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_MOTOR_analysis_s4
  2 3T_tfMRI_MOTOR_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_MOTOR_analysis_s8
  2 3T_tfMRI_MOTOR_preproc
  2 3T_tfMRI_MOTOR_preproc_S500_to_S900_extension
  2 3T_tfMRI_MOTOR_unproc
  2 3T_tfMRI_MOTOR_volume_s4
  2 3T_tfMRI_RELATIONAL_analysis_s12
  2 3T_tfMRI_RELATIONAL_analysis_s2
  2 3T_tfMRI_RELATIONAL_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_RELATIONAL_analysis_s4
  2 3T_tfMRI_RELATIONAL_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_RELATIONAL_analysis_s8
  2 3T_tfMRI_RELATIONAL_preproc
  2 3T_tfMRI_RELATIONAL_preproc_S500_to_S900_extension
  2 3T_tfMRI_RELATIONAL_unproc
  2 3T_tfMRI_RELATIONAL_volume_s4
  2 3T_tfMRI_SOCIAL_analysis_s12
  2 3T_tfMRI_SOCIAL_analysis_s2
  2 3T_tfMRI_SOCIAL_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_SOCIAL_analysis_s4
  2 3T_tfMRI_SOCIAL_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_SOCIAL_analysis_s8
  2 3T_tfMRI_SOCIAL_preproc
  2 3T_tfMRI_SOCIAL_preproc_S500_to_S900_extension
  2 3T_tfMRI_SOCIAL_unproc
  2 3T_tfMRI_SOCIAL_volume_s4
  2 3T_tfMRI_WM_analysis_s12
  2 3T_tfMRI_WM_analysis_s2
  2 3T_tfMRI_WM_analysis_s2_S500_to_S900_extension
  2 3T_tfMRI_WM_analysis_s4
  2 3T_tfMRI_WM_analysis_s4_S500_to_S900_extension
  2 3T_tfMRI_WM_analysis_s8
  2 3T_tfMRI_WM_preproc
  2 3T_tfMRI_WM_preproc_S500_to_S900_extension
  2 3T_tfMRI_WM_unproc
  2 3T_tfMRI_WM_volume_s4
  1 7T_Diffusion_unproc
  1 7T_MOVIE_1.6mm_preproc
  1 7T_MOVIE_2mm_fix
  1 7T_MOVIE_2mm_preproc
  1 7T_MOVIE_preproc_extended
  1 7T_MOVIE_Volume_fix
  1 7T_MOVIE_Volume_preproc
  1 7T_REST_1.6mm_fix
  1 7T_REST_1.6mm_preproc
  1 7T_REST_2mm_fix
  1 7T_REST_2mm_preproc
  1 7T_REST_preproc_extended
  1 7T_REST_Volume_fix
  1 7T_REST_Volume_preproc
  1 7T_RET_1.6mm_preproc
  1 7T_RET_2mm_preproc
  1 7T_RET_preproc_extended
  1 7T_RET_Volume_preproc
  1 7T_rfMRI_REST1_unproc
  1 7T_rfMRI_REST2_unproc
  1 7T_rfMRI_

Re: [HCP-Users] (files) listing for file bundles

2016-12-06 Thread Elam, Jennifer
Hi Yaroslav,

A listing of the unpacked files available per subject, organized by modality
and processing level, is available in Appendix 3 of the Reference Manual.


The files are listed there as they unpack into a standard directory structure.
They are not organized by ConnectomeDB packages, per se, because the listing is
meant to also be applicable to users of Connectome in a Box and Amazon S3. If
you really need a listing of the package contents themselves, we (Mike Hodge)
can provide that separately.


Best,

Jenn


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



From: hcp-users-boun...@humanconnectome.org 
 on behalf of Yaroslav Halchenko 

Sent: Monday, December 5, 2016 11:00:43 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] (files) listing for file bundles

Dear HCP gurus,

db.humanconnectome.org/  provides convenient bundles of subjects/data.

Is it possible to obtain the lists of files (I guess as a subset of the files
within the hcp-openaccess/HCP or hcp-openaccess/HCP_900 S3 buckets) which come
within each bundle, without downloading all those bundles first?

Thank you very much in advance!
--
Yaroslav O. Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] thresholding the data to p-value

2016-12-06 Thread Timothy Hendrickson
Yes thank you!

On Dec 6, 2016 8:13 AM, "Dierker, Donna"  wrote:

> TimH - you mean this one?
>
> On Jul 7, 2016, at 10:26 AM, Dierker, Donna  wrote:
>
> You need to adjust your overlay settings; see attached capture.  Make sure
> Threshold Type is set to On.
>
>
>
>
> On Dec 5, 2016, at 11:16 AM, Timothy Hendrickson  wrote:
>
> Hi Donna,
>
> Could you please re-attach the screen capture that you took for Vasudev? I
> cannot find it...
> I also am looking into how to observe significance levels with a threshold
> of p<0.05.
>
> Respectfully,
>
> -Tim
>
> You need to adjust your overlay settings; see attached capture.  Make sure
> Threshold Type is set to On.
>
> 
> From: hcp-users-boun...@humanconnectome.org on behalf of Dev vasu
> <vasudevamurthy.devulapa...@gmail.com>
> Sent: Thursday, July 7, 2016 10:03:28 AM
> To: Dierker, Donna
> Cc: <hcp-users@humanconnectome.org>
> Subject: Re: [HCP-Users] thresholding the data to p-value
>
> Dear madam,
>
> I don't want to write to an ROI volume, I just want to observe activation
> levels with a threshold of p<0.05.
>
>
> Thanks
> Vasudev
>
> On 7 July 2016 at 16:31, Dierker, Donna <do...@wustl.edu> wrote:
>
> Hi Vasudev,
>
>
> I'm pretty sure wb_view can display voxels within a range of values (inside
> an upper and lower limit), but I think you mean threshold and write an ROI
> volume.
> For that, try:
>
>
>   wb_command -metric-math "(x<${Pthresh})" ${PthreshFile} -var x ${PstatFile}
>
>
> Donna
>
> 
> From: hcp-users-boun...@humanconnectome.org on behalf of Dev vasu
> <vasudevamurthy.devulapa...@gmail.com>
> Sent: Wednesday, July 6, 2016 12:04:50 PM
> To: Harms, Michael; <hcp-users@humanconnectome.org>
> Subject: [HCP-Users] thresholding the data to p-value
>
>
>
> Dear sir,
>
> I would like to threshold my results to a p-value of p<0.05 and I couldn't do
> it from the Workbench GUI; I can only threshold the data by percentage
> changes.
>
> Please kindly let me know how I can threshold the results by real p-stats.
>
>
>
> Thanks
> Vasudev
>
>
>
>
> Timothy Hendrickson
> Department of Psychiatry
> University of Minnesota
> Office: 612-624-6441
> Mobile: 507-259-3434 (texts okay)
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] thresholding the data to p-value

2016-12-06 Thread Dierker, Donna
TimH - you mean this one?

On Jul 7, 2016, at 10:26 AM, Dierker, Donna <do...@wustl.edu> wrote:

You need to adjust your overlay settings; see attached capture.  Make sure 
Threshold Type is set to On.

[attached screen capture: wb_view Overlay settings with Threshold Type set to On]


On Dec 5, 2016, at 11:16 AM, Timothy Hendrickson <hendr...@umn.edu> wrote:

Hi Donna,

Could you please re-attach the screen capture that you took for Vasudev? I 
cannot find it...
I also am looking into how to observe significance levels with a threshold of 
p<0.05.

Respectfully,

-Tim

You need to adjust your overlay settings; see attached capture.  Make sure
Threshold Type is set to On.


From: hcp-users-boun...@humanconnectome.org on behalf of Dev vasu
<vasudevamurthy.devulapa...@gmail.com>
Sent: Thursday, July 7, 2016 10:03:28 AM
To: Dierker, Donna
Cc: <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] thresholding the data to p-value

Dear madam,

I don't want to write to an ROI volume, I just want to observe activation
levels with a threshold of p<0.05.


Thanks
Vasudev

On 7 July 2016 at 16:31, Dierker, Donna <do...@wustl.edu> wrote:

Hi Vasudev,


I'm pretty sure wb_view can display voxels within a range of values (inside
an upper and lower limit), but I think you mean threshold and write an ROI volume.
For that, try:


  wb_command -metric-math "(x<${Pthresh})" ${PthreshFile} -var x ${PstatFile}


Donna
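
For example (hypothetical file names; this assumes the p-values are stored in a
GIFTI metric file), that command would produce a binary ROI metric that is 1
wherever p < 0.05:

Pthresh=0.05
PstatFile=task_pvalues.func.gii          # hypothetical input metric of p-values
PthreshFile=task_p_lt_0.05_roi.func.gii  # hypothetical output ROI metric
wb_command -metric-math "(x < ${Pthresh})" ${PthreshFile} -var x ${PstatFile}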


From: hcp-users-boun...@humanconnectome.org on behalf of Dev vasu
<vasudevamurthy.devulapa...@gmail.com>
Sent: Wednesday, July 6, 2016 12:04:50 PM
To: Harms, Michael; <hcp-users@humanconnectome.org>
Subject: [HCP-Users] thresholding the data to p-value



Dear sir,

I would like to threshold my results to a p-value of p<0.05 and I couldn't do
it from the Workbench GUI; I can only threshold the data by percentage changes.

Please kindly let me know how I can threshold the results by real p-stats.



Thanks
Vasudev




Timothy Hendrickson
Department of Psychiatry
University of Minnesota
Office: 612-624-6441
Mobile: 507-259-3434 (texts okay)
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users





___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Transformation matrix (bti2spm)

2016-12-06 Thread Jan Mathijs Schoffelen
Rikkert,

Please check the MEG reference documentation. On page 30 it is mentioned that 
the dipole positions are indeed by construction not regularly spaced. If you 
want to have them in normalized space you will need to replace the
subject-specific source positions with the corresponding positions of the template
sourcemodel. The templates can be found in the template directory of the 
megconnectome software.

Best wishes,

Jan-Mathijs



> On 05 Dec 2016, at 17:21, HINDRIKS, RIKKERT wrote:
> 
> 
> Dear all,
> 
> I understood that in order to warp volumetric source locations (provided by 
> the MEG pipeline) to MNI space, one needs to apply the transformation matrix 
> transform.bti2spm. However, when I do this (pos_mni = 
> ft_warp_apply(transform.bti2spm,sourcemodel3d.pos)) the resulting source 
> locations are not regularly spaced. 
> 
> Shouldn't they be? 
> 
> I ask this because I want to visualize source activity in MNI space (with a 
> regular grid).
> 
> Thanks a lot and kind regards,
> Rikkert 
> 
> 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org 
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users