[HCP-Users] Recognition memory task details

2017-08-17 Thread Evans, Dan
Hi HCP Team,

I've been looking for methodological details on the recognition memory task 
that occurred after the scan, but I can't find them anywhere. The Barch et al. 
(2013) paper says the specifics are reported in its supplemental materials, 
but I find no mention of the recognition memory task there. The paper also 
states that results from the recognition memory task would be published in a 
later article, but I can't seem to find one. Have those results been published 
(someone else asked this back in 2014)? If so, where can I find them, along 
with the detailed task methodology?
Thanks for your help!

Best,
Dan

-
Dan Evans
Research Assistant - Clinical Neuroscience Lab
Department of Psychology
Ohio State University, Columbus, OH

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] tikhonov-regularized partial correlation using FSLnets

2017-08-17 Thread Stephen Smith
Hi

This normalisation isn't directly part of the actual L2 regularisation - it is 
just setting the overall scaling of the covariance matrix, so that the effect 
of choice of regularisation parameter is not dependent on the overall scaling 
of the original data.

The scaling is just the RMS of the diagonals of the covariance - this is just a 
somewhat arbitrary choice for how to set the overall scaling of the matrix - 
other options such as just taking the mean would probably be fine too.
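To make the scaling concrete, here is a minimal numpy sketch of that normalisation step (the data and shapes are invented for illustration; the real code lives in nets_netmats.m):

```python
import numpy as np

# Sketch of the scaling step Steve describes: divide the covariance
# matrix by the RMS of its diagonal, so that the choice of ridge
# regularisation parameter is insensitive to the overall scaling of
# the input data. Data here are invented for illustration.
rng = np.random.default_rng(0)
ts = 10.0 * rng.standard_normal((200, 5))   # 200 timepoints, 5 nodes
cov = np.cov(ts, rowvar=False)

rms_of_diag = np.sqrt(np.mean(np.diag(cov) ** 2))
cov_scaled = cov / rms_of_diag

# After scaling, the RMS of the diagonal is 1 whatever the original
# units were, so a given regularisation parameter behaves the same.
print(np.sqrt(np.mean(np.diag(cov_scaled) ** 2)))   # ~1.0
```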

Cheers, Steve.




> On 17 Aug 2017, at 01:11, Mary Beth  wrote:
> 
> Hi pals,
> 
> I have a question about the way partial correlations were estimated for the 
> megatrawl using FSLnets. Going through the 'ridgep' section of 
> nets_netmats.m, it looks like the covariance matrix for each subject is 
> normalized by the square root of the mean of the variances squared - yes, no, 
> maybe so?
> 
> from line 88 of nets_netmats.m:
> 
> grot = cov();
> grot = grot/sqrt(mean(diag(cov1).^2));
> 
> I'm just trying to figure out why you square the variances before you take 
> the mean. Can someone give me a quick explanation or point me towards a good 
> reference?
> 
> Thanks in advance for your help!
> mb 
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
> 


---
Stephen M. Smith, Professor of Biomedical Engineering
Head of Analysis,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
st...@fmrib.ox.ac.uk    http://www.fmrib.ox.ac.uk/~steve 

---

Stop the cultural destruction of Tibet 






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] tikhonov-regularized partial correlation using FSLnets

2017-08-17 Thread Mary Beth
Hi pals,

I have a question about the way partial correlations were estimated for the
megatrawl using FSLnets. Going through the 'ridgep' section of
nets_netmats.m, it looks like the covariance matrix for each subject is
normalized by the square root of the mean of the variances squared - yes,
no, maybe so?

from line 88 of nets_netmats.m:

grot = cov();
grot = grot/sqrt(mean(diag(cov1).^2));

I'm just trying to figure out why you square the variances before you take
the mean. Can someone give me a quick explanation or point me towards a
good reference?

Thanks in advance for your help!
mb

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Downloading from the S3 browser

2017-08-17 Thread Elam, Jennifer
This thread might be helpful to you: 
http://www.mail-archive.com/hcp-users@humanconnectome.org/msg03097.html

The data in Amazon S3 are not available as download packages the way they are 
in ConnectomeDB. Instead, the data are organized per subject in the directory 
structure that the unzipped packages unpack to.
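Because the S3 layout mirrors the unpacked per-subject directories, one option for grabbing only the files you need is the AWS CLI's include/exclude filters. This is a sketch only: it assumes the AWS CLI is configured with your HCP S3 credentials, and the bucket name, path layout, and `tfMRI_WM` filename pattern are assumptions you should check against what you actually see in the S3 browser.

```shell
# Hypothetical example: download only Working Memory task files for one
# subject, skipping everything else. Bucket/path names and the pattern
# are assumptions - verify them in the S3 browser first.
aws s3 cp s3://hcp-openaccess/HCP_1200/100307/MNINonLinear/Results/ ./100307/ \
    --recursive --exclude "*" --include "*tfMRI_WM*"
```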


You will also have to get your subject group from ConnectomeDB. If you want the 
largest list of unrelated subjects from the S1200 release for your analysis, you 
can use the filters to find subjects with the data that you need, and then use 
the Family_ID variable to choose subjects from different families. Note that 
viewing the Family_ID variable and other family-structure data requires HCP 
restricted access. Let me know if you need help with this.
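The "one subject per family" step is easy to script once you have the restricted-access export. A minimal sketch, assuming a table with "Subject" and "Family_ID" columns (the column names and subject/family IDs below are illustrative assumptions; check them against your actual ConnectomeDB export):

```python
# Hypothetical sketch: keep the first subject seen from each Family_ID,
# yielding a list of mutually unrelated subjects. Column names and the
# example rows are assumptions, not the real ConnectomeDB schema.
def unrelated_subjects(rows):
    """Return subject IDs, keeping one subject per Family_ID."""
    seen = set()
    keep = []
    for row in rows:
        if row["Family_ID"] not in seen:
            seen.add(row["Family_ID"])
            keep.append(row["Subject"])
    return keep

rows = [
    {"Subject": "100307", "Family_ID": "F1"},
    {"Subject": "100408", "Family_ID": "F1"},   # sibling of 100307
    {"Subject": "101107", "Family_ID": "F2"},
]
print(unrelated_subjects(rows))   # ['100307', '101107']
```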


Best,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



From: hcp-users-boun...@humanconnectome.org on behalf of Manasij Venkatesh

Sent: Thursday, August 17, 2017 12:47:21 PM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Downloading from the S3 browser

Hi,

Is there an easy way to download working memory data for the unrelated subjects 
group from the S3 browser?

I'm having some trouble with Aspera Connect: the connection fails often and I 
have to restart the whole download. Downloading is much faster via the S3 
browser, but I can't find an easy way to download just the files I need.

Sincerely,
Manasij

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



Re: [HCP-Users] help: extract MNI coordinates of tfMRI blobs from workbench

2017-08-17 Thread Glasser, Matthew
Nope, you can do the tractography directly from the surfaces using FSL: 
https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FDT/UserGuide#Using_surfaces  Use the 
surfaces in ${StudyFolder}/${Subject}/T1w/fsaverage_LR32k.  White surfaces are 
good for counting and pial surfaces are good for stopping.  There are 
wb_commands you could use if you want to extract ROIs from task fMRI maps.  
Will you be using individual subject task fMRI maps or group maps?  If it is 
individual subjects, I would define the ROIs based on the peak gradient 
boundaries of the effect size maps, as these are the biologically significant 
functional transitions.
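On the automation question (Yin's point 3), the per-subject peak extraction is straightforward to script once the per-vertex statistic values and the matching midthickness vertex coordinates have been pulled out of the CIFTI/GIFTI files (e.g. with nibabel). A rough sketch; the arrays below are invented stand-ins for those loaded values:

```python
import numpy as np

# Hypothetical sketch: given per-vertex statistic values and the
# matching vertex coordinates from an individual's midthickness
# surface (both arrays invented here; in practice load them from the
# subject's .func.gii / .surf.gii files), report the MNI coordinate
# of the peak vertex inside an ROI mask.
stats = np.array([0.2, 3.1, 5.7, 1.0])           # one value per vertex
coords = np.array([[-50.0, -44.0, 7.5],          # x, y, z per vertex
                   [-51.3, -45.9, 8.2],
                   [-52.0, -46.5, 8.9],
                   [-48.7, -43.2, 6.8]])
roi = np.array([False, True, True, False])       # ROI membership mask

roi_idx = np.flatnonzero(roi)
peak_vertex = roi_idx[np.argmax(stats[roi_idx])]
print(peak_vertex, coords[peak_vertex])          # peak vertex and its xyz
```

Looping this over subjects replaces the manual clicks in the Information window; the vertex index is the same on midthickness, inflated, and very_inflated, but only the midthickness (or white/pial) coordinates are anatomically meaningful.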

Peace,

Matt.

From: Yin Wang
Date: Thursday, August 17, 2017 at 12:34 PM
To: "hcp-users@humanconnectome.org"
Cc: Matt Glasser
Subject: help: extract MNI coordinates of tfMRI blobs from workbench

Dear HCP team,

We are interested in using both task fMRI and diffusion MRI data to study white 
matter pathways between functional cortical regions.  We are planning to use 
task fMRI activation to define several ROIs as seeds for dMRI tractography. We 
have several questions below and hope someone can help us.

As far as I understand, all tfMRI results (e.g. contrast maps) for each subject 
are in grayordinate space (let us only limit the discussion to the cortex), but 
the dMRI is volumetric data. That means I have to extract the MNI coordinates 
for each ROI from the contrast maps so that I can use them directly for dMRI 
tractography. How can I do this in Connectome Workbench?

1. Usually when I do this in SPM (with traditional volumetric data), I first 
set a threshold (e.g. p<0.05 uncorrected) to get the blobs and then write down 
their peak coordinates for further DTI analyses. However, in Workbench I cannot 
find a button for setting a p value. All I can do is manually slide the 
threshold bar in the Overlay Toolbox (i.e. Overlay and map settings > Palette > 
Threshold). What do the values in the thresholded map mean (e.g. are these 
CIFTI SCALARS t-values or z-scores)? How can I convert them to the usual p 
values? Do you know of any Workbench functions with which I can set a fixed 
threshold across subjects?

2. Let’s assume I now have a blob at a certain threshold; how can I get its MNI 
coordinates from Workbench? I clicked the blob in the montage view and the 
Information window gave me some XYZ coordinates. First, which surface should I 
use: midthickness, inflated, or very_inflated? I found that the vertex number 
for the same blob is identical, but the xyz coordinates on each surface are 
quite different. Second, in SPM the MNI coordinates are integers (e.g. -51, 
-45, 8), so why do the xyz coordinates in the Information window have decimal 
points (e.g. -51.2555, -45.8909, 8.1537)? Do I just round them?

3. We are planning to extract each ROI’s MNI coordinates from each individual’s 
specific tfMRI contrast maps. That means we would have to manually extract 
values from the Information window for each of the 1200 subjects. Are there any 
automated scripts (for Workbench) that we could use to extract the coordinates 
of a blob’s peak (and batch this over multiple subjects)?

Sorry for so many questions; we appreciate any help and advice.

Best
Yin


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Downloading from the S3 browser

2017-08-17 Thread Manasij Venkatesh
Hi,

Is there an easy way to download working memory data for the unrelated
subjects group from the S3 browser?

I'm having some trouble with Aspera Connect: the connection fails often and
I have to restart the whole download. Downloading is much faster via the S3
browser, but I can't find an easy way to download just the files I need.

Sincerely,
Manasij

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] help: extract MNI coordinates of tfMRI blobs from workbench

2017-08-17 Thread Yin Wang
Dear HCP team,

We are interested in using both task fMRI and diffusion MRI data to study
white matter pathways between functional cortical regions.  We are planning
to use task fMRI activation to define several ROIs as seeds for dMRI
tractography. We have several questions below and hope someone can help us.

As far as I understand, all tfMRI results (e.g. contrast maps) for each
subject are in grayordinate space (let us only limit the discussion to the
cortex), but the dMRI is volumetric data. That means I have to extract the
MNI coordinates for each ROI from the contrast maps so that I can use them
directly for dMRI tractography. How can I do this in Connectome Workbench?

1. Usually when I do this in SPM (with traditional volumetric data), I first
set a threshold (e.g. p<0.05 uncorrected) to get the blobs and then write
down their peak coordinates for further DTI analyses. However, in Workbench
I cannot find a button for setting a p value. All I can do is manually slide
the threshold bar in the Overlay Toolbox (i.e. Overlay and map settings >
Palette > Threshold). What do the values in the thresholded map mean (e.g.
are these CIFTI SCALARS t-values or z-scores)? How can I convert them to the
usual p values? Do you know of any Workbench functions with which I can set
a fixed threshold across subjects?

2. Let’s assume I now have a blob at a certain threshold; how can I get its
MNI coordinates from Workbench? I clicked the blob in the montage view and
the Information window gave me some XYZ coordinates. First, which surface
should I use: midthickness, inflated, or very_inflated? I found that the
vertex number for the same blob is identical, but the xyz coordinates on
each surface are quite different. Second, in SPM the MNI coordinates are
integers (e.g. -51, -45, 8), so why do the xyz coordinates in the
Information window have decimal points (e.g. -51.2555, -45.8909, 8.1537)?
Do I just round them?

3. We are planning to extract each ROI’s MNI coordinates from each
individual’s specific tfMRI contrast maps. That means we would have to
manually extract values from the Information window for each of the 1200
subjects. Are there any automated scripts (for Workbench) that we could use
to extract the coordinates of a blob’s peak (and batch this over multiple
subjects)?

Sorry for so many questions; we appreciate any help and advice.

Best
Yin

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users