Thanks, Matt!
I have one more follow-up question. In order to run the "hcp_fix_multi_run"
script, do we have to concatenate all the data temporally first?
I combined the data using the following command: fslmerge -t
….
Then, I ran the hcp_fix_multi_run script as follows: hcp_fix_multi_run
No, multi-run ICA+FIX handles the concatenation for you, so you specify the
separate runs. That is the whole point. Have a look at the methods of this
bioRxiv paper on multi-run ICA+FIX so you understand why it is implemented the
way it is:
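A minimal sketch of what that looks like in practice, assuming a standard HCP directory layout: the runs are passed separately, joined by '@', and the script does the concatenation itself, so no prior fslmerge -t is needed. The paths, run names, and argument order here are illustrative; check the usage header of hcp_fix_multi_run in your HCP Pipelines checkout before running.

```shell
# Hypothetical study layout; adjust to your own directories.
RESULTS=/data/MyStudy/100307/MNINonLinear/Results
RUNS="${RESULTS}/rfMRI_REST1_LR/rfMRI_REST1_LR.nii.gz@${RESULTS}/rfMRI_REST1_RL/rfMRI_REST1_RL.nii.gz"

# Dry run: print the command rather than executing it. Arguments shown
# (highpass, concatenated-output name, motion-regression flag) follow the
# beta script's usage notes as an assumption, not a guarantee.
echo hcp_fix_multi_run "${RUNS}" 2000 rfMRI_REST1_LR_RL TRUE
```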
It is worth noting that there IS a biological distance bias in connections that
has been found with invasive tracers, though the mechanism by which this occurs
in tractography is different from the biological mechanism, as Tim says.
There’s more discussion of this in the paper I referenced.
Hi Matt/Tim,
My goal is to improve network inference from tractography data by better
accounting for the distance bias in tractography, so I want to use some proxy
for actual connection distance between ROI pairs. Using tractography itself to
account for its own bias against long-distance
I would not do #2, as you need to do some preprocessing prior to running
ICA+FIX when concatenating across runs, and this is all that the multi-run
ICA+FIX pipeline does differently from regular ICA+FIX.
I’ll let Steve answer that other question.
Matt.
From: Sang-Young Kim
However, that is actually how it is often done. Insofar as the connections
follow the right path, tractography should give the best estimates, and we used
it that way in this paper:
http://www.jneurosci.org/content/36/25/6758.short
Peace,
Matt.
From: "Gopalakrishnan, Karthik"
Right, I wasn't very precise in my wording. I was thinking of the
"tractography distance bias" as the amount of the bias that is above and
beyond the real biological distance relationship.
Tim
On Fri, Oct 6, 2017 at 4:36 PM, Glasser, Matthew wrote:
> It is worth noting
This was useful, and I will make sure to go through the paper you referenced,
Matt. Thank you both, Matt and Tim!
Tim, I notice you mention that tractography reports distances as well, which
shouldn’t have the same bias as the reported number of streamlines/connection
strength — I wasn’t really
We did it for all-to-all connectivity at the vertex level and then did a
weighted average according to the number of streamlines (as the more
streamlines, the more robust the distance measure).
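The streamline-weighted average described above can be sketched in a couple of lines of awk. The input file name and its two-column layout (streamline count, then distance in mm, one line per vertex pair) are assumptions for illustration only.

```shell
# Toy input: "streamline_count distance_mm" for two hypothetical vertex pairs.
cat > pairs.txt <<'EOF'
120 14.0
30 50.0
EOF

# Weight each distance by its streamline count, since connections with more
# streamlines give a more robust distance estimate.
awk '{ w += $1; s += $1 * $2 } END { printf "%.1f\n", s / w }' pairs.txt
# prints 21.2  (= (120*14.0 + 30*50.0) / 150)
```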
Peace,
Matt.
From: Timothy Coalson
Date: Friday, October 6,
Would that require them to run a separate probtrackx for each seed area?
If the hits are recorded on the surface, is the distance also reported on
vertices?
If you can only get the distances in white matter voxels, or only the
distances from the seed point, things could get challenging if you
Tractography's distance bias is in its reported strengths. The distances
reported by tractography should not have a significant bias in the same way
- while it takes longer paths less often, it doesn't often take paths that
are even more windy and longer than the real path, and it generally can't
--ompl option in probtrackx2.
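For reference, --ompl asks probtrackx2 to output the mean path length of the streamlines behind each connection, alongside the usual counts. The sketch below is a dry run with hypothetical file names; the seed, samples, and mask arguments are the standard -x/-s/-m options, but see probtrackx2 --help for the full list before running.

```shell
# Hedged dry run: file names are hypothetical, and the final echo keeps
# this from executing anything. --omatrix1 writes the connectivity counts;
# --ompl adds the mean path length per connection.
CMD="probtrackx2 -x seeds.nii.gz -s bedpostX/merged -m bedpostX/nodif_brain_mask.nii.gz --omatrix1 --ompl --dir=probtrack_out"
echo "$CMD"
```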
Peace,
Matt.
From: "Gopalakrishnan, Karthik"
Date: Friday, October 6, 2017 at 7:05 PM
To: Timothy Coalson
Cc: Matt Glasser,
Hi all,
Just to chime in: we have done extensive work over the past year to replicate
and understand the prediction of IQ obtained in the Finn study. Our
manuscript is about to be submitted. Take-away points:
-- the high effect size they find is partly due to small sample size (118
subjects) and to the
Hi, Matt and Stephen:
Thanks for your responses. So I will try the three options below to see which
one is better.
1. ICA+FIX on each 5 min run separately
2. Concatenate each pair of scans from each session and then ICA+FIX on each
session
3. Use multi-run ICA+FIX to combine across runs
I have
Anyone aware of the timeline for the genome-wide data to be released? Best,
James
Sent from my iPhone
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
There is a beta version of a multi-run ICA+FIX pipeline available in the HCP
Pipelines repository. For 5-minute runs, I would expect combining across runs
to be best. We haven't tested combining across sessions yet, so you would have
to check that it was working okay if you wanted to try
Hi all
Yes - I've discussed this with Todd and it's not immediately clear whether the
difference is due to:
- they used full correlation not partial
- they used fewer confound regressors (IIRC)
- their prediction method is *very* different (pooling across all relevant
features rather than