Hi all,
interesting points! I will use both variance normalization techniques and
test whether there are any differences in the resulting DCMs. It might be
worth noting that, for resting state, cross-spectra are being fitted nowadays
Hi Tim,
That isn’t quite an analogous situation. At least for full correlation
computed from the 15 min runs of the HCP-YA, computing a separate network
matrix for each individual 15 min run, Fisher transforming those, and then
averaging the r-to-z values appears to be a little more robust
When we compute parcellated connectivity, we first compute the average
timeseries within the parcels, and then correlate those, as it vastly
reduces the impact of noise. If we first computed the correlations, and
then averaged them within parcels, we would be losing a huge amount of
power.
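For concreteness, a minimal numpy sketch of that order of operations (the array and function names are hypothetical):

    import numpy as np

    # data: (n_timepoints, n_voxels); labels: (n_voxels,) parcel index per voxel.
    def parcellated_connectivity(data, labels, n_parcels):
        # Average the timeseries within each parcel first...
        parcel_ts = np.column_stack(
            [data[:, labels == p].mean(axis=1) for p in range(1, n_parcels + 1)]
        )
        # ...then correlate the parcel-mean timeseries.
        return np.corrcoef(parcel_ts, rowvar=False)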
In the case of correlations or partial correlations, I would tend to compute
those separately for each run anyway, Fisher transform them, and then average
the r-to-z values across runs, in which case no across-run concatenation is
necessary in the first place.
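A minimal numpy sketch of that per-run recipe (array names hypothetical):

    import numpy as np

    # run_data: list of (n_timepoints, n_nodes) arrays, one per run.
    def averaged_netmat(run_data):
        zs = []
        for ts in run_data:
            r = np.corrcoef(ts, rowvar=False)   # per-run network matrix
            np.fill_diagonal(r, 0.0)            # avoid arctanh(1) = inf
            zs.append(np.arctanh(r))            # Fisher r-to-z
        return np.mean(zs, axis=0)              # average z across runs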
I don’t know if a per-run DCM
The basic idea for variance normalization is to equalize the variance of the
noise. It is very helpful for ICA and regression-based techniques. I’m not
sure we have explicitly tested the effect on correlation. Correlation is a
ratio and so it would not matter at all for a single run, though
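A quick numpy illustration of that scale invariance, and of why scaling can start to matter once runs are concatenated:

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal((2, 1200))

    # Within a single run, rescaling leaves the correlation untouched:
    assert np.isclose(np.corrcoef(x, y)[0, 1],
                      np.corrcoef(10 * x, 10 * y)[0, 1])

    # Across concatenated runs, a run with larger variance dominates the
    # pooled correlation, so normalization choices do matter:
    x2, y2 = rng.standard_normal((2, 1200))
    r_pooled = np.corrcoef(np.concatenate([x, 10 * x2]),
                           np.concatenate([y, 10 * y2]))[0, 1]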
Hi all,
that being said, why is this regression approach for variance normalization
superior to a z-standardization? That is, will it practically matter e.g.
for correlations or partial correlations?
2018-03-07 19:31 GMT+01:00 Glasser, Matthew:
Hi Mike,
I doubt that matters for this application of making an unstructured noise
timeseries for the purpose of variance normalization.
Matt.
From: "Harms, Michael" >
Date: Wednesday, March 7, 2018 at 12:09 PM
To: Matt Glasser
Hi Matt,
Right, that recipe is straightforward, but for completeness there should be two
additional steps if one wants to match the FIX cleaning precisely:
1) the 24 motion parameters should be filtered with the same HP filter applied
to the data
2) those HP-filtered 24 motion parameters should
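A rough scipy/numpy sketch of these two steps, assuming the truncated step 2 is to regress the filtered parameters out of the data (as in FIX's aggressive cleanup); a Butterworth highpass stands in here for the fslmaths -bptf filter the pipelines actually use, and all names are hypothetical:

    import numpy as np
    from scipy.signal import butter, filtfilt

    # motion: (n_timepoints, 24) expanded motion confounds;
    # data: (n_timepoints, n_voxels); tr in seconds.
    def fixlike_motion_cleanup(data, motion, tr, cutoff_s=2000.0):
        # Step 1: highpass the confounds with the same cutoff as the data.
        b, a = butter(2, (1.0 / cutoff_s) / (0.5 / tr), btype='highpass')
        motion_hp = filtfilt(b, a, motion, axis=0)
        motion_hp -= motion_hp.mean(axis=0)          # demean before regressing
        # Step 2 (assumed): regress the filtered confounds out of the data.
        beta = np.linalg.pinv(motion_hp) @ data
        return data - motion_hp @ beta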
No; however, if he wanted to interpret effect size maps after having done
this, he would need to back this out, back out the old bias field, and apply
the corrected one. The new multiple regression takes care of adapting to
whatever bias field there is or is not.
Matt.
From: "Harms, Michael"
Hi Mike,
Not for the volume data that he is asking about and not for the MSMAll data
either unfortunately. I thought it was better to explain this method on the
list so that it can be applied to arbitrary data whether or not we precomputed
it.
Matt.
From: "Harms, Michael"
Also, if David were to do this with HCP-YA data, does he additionally need to
worry about “backing out” the bias field normalization?
--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Matt,
Don’t we compute an estimate of the unstructured noise variance as part of
RestingStateStats, and then place that into one of the packages?
--
Michael Harms, Ph.D.
---
Associate Professor of Psychiatry
Washington University School of Medicine
Awesome, I will do that, thanks very much!!
2018-03-07 18:01 GMT+01:00 Glasser, Matthew:
Yes they should be in that same package:
${StudyFolder}/${Subject}/MNINonLinear/Results/${fMRIName}/${fMRIName}_hp2000.ica/.fix
— Tells you which are the noise components (so you can use setdiff to find the
signal components from a list of all components) so that you can exclude the
noise
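For example, a short Python sketch of that setdiff step (the path and the exact .fix file format are assumptions; check the package):

    import numpy as np

    fix_file = 'rfMRI_REST1_LR_hp2000.ica/.fix'   # hypothetical ${fMRIName}
    # Assumes the file lists the noise component numbers (possibly comma-separated).
    noise = np.atleast_1d(np.loadtxt(fix_file, delimiter=',', dtype=int)).ravel()
    n_comps = 200                                 # total number of components
    signal = np.setdiff1d(np.arange(1, n_comps + 1), noise)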
Ah, I understand. However, I'm not sure how to do this practically for the
FIX-extended data. I'd need all the signal component timeseries and would
have to run a regression for each voxel, which might take a while. I'm not
sure whether the signal timeseries are supplied in the dataset; are they?
Thanks for the support!
The unstructured noise estimate is the standard deviation of the timeseries
after you regress out all of the signal component timeseries. Dividing by it
makes the unstructured noise equal in magnitude across the brain.
I wouldn’t do smoothing unless it is constrained to the grey matter.
I typically variance normalize before concatenation, but do this based on the
unstructured noise variance.
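Putting those two points together, a hedged numpy sketch (all names hypothetical) of variance normalizing each run by its unstructured noise and only then concatenating:

    import numpy as np

    # data_runs: list of (n_timepoints, n_voxels) arrays;
    # signal_runs: matching list of (n_timepoints, n_signal_comps)
    # FIX signal-component timeseries.
    def vn_then_concat(data_runs, signal_runs):
        out = []
        for data, sig in zip(data_runs, signal_runs):
            beta = np.linalg.pinv(sig) @ data     # one solve covers all voxels
            resid = data - sig @ beta             # leave only unstructured noise
            noise_sd = resid.std(axis=0)          # its per-voxel amplitude
            out.append(data / np.maximum(noise_sd, 1e-8))
        return np.concatenate(out, axis=0)

Note that the pseudoinverse solves the regression for every voxel in one matrix product, so the voxelwise regression mentioned above need not be slow.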
I would take the mean time course over an ROI that I thought to be
representative of a meaningful neuroanatomical subunit.
My understanding of how SPM’s DCM is typically implemented is
Thanks
On Wednesday, 7 Mar 2018 at 03:44, Harwell, John wrote:
This error has been reported before (WB-647 in our internal bug
tracking system). The solution for the user that experienced this
problem was to use the version of Workbench distributed through the
HCP website
Hi Matthew,
ok, so temporal filtering separately for each run. Any comments on
concatenation and z-standardization?
I think there might be a work-around for supplying a custom ROI timecourse
to the DCM VOI files somehow, but which values should be input as an
alternative to the eigenvariate? The mean over
This error has been reported before (WB-647 in our internal bug tracking
system). The solution for the user that experienced this problem was to use
the version of Workbench distributed through the HCP website
(https://www.humanconnectome.org/software/get-connectome-workbench).
From some
You would want to apply temporal filtering separately to each run. I wonder
if there is a way you could just provide the ROI timecourses to SPM’s DCM
model without using its tools for extracting the ROIs, so that you could
avoid the spatial localization issues that SPM has. If you used areal
Apologies: Ubuntu 16.04 LTS, installed from the NeuroDebian repo.
On Wednesday, 7 Mar 2018 at 02:44, Glasser, Matthew wrote:
What OS is this on?
Peace,
Matt.
From: on behalf of Claude Bajada
Date: Wednesday, March 7, 2018 at 4:25 AM
To:
Hi all,
I am encountering a problem when trying to create a scene with wb_view. As soon
as I try to add my scene I get a core dump:
[inline screenshot of the error omitted]
Does anyone have a similar problem and know a solution?
Claude
Hi all,
for a later analysis where I extract ROIs with SPM, I need to concatenate
the resting state runs and want to make sure I'm doing it correctly. SPM
extracts the first eigenvariate of a ROI, i.e. the component that explains
the most variance.
I'm using the Resting State fMRI 1
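For reference, a minimal numpy sketch of a first eigenvariate computed by SVD; SPM's spm_regions applies its own scaling and sign conventions, so treat this only as an approximation:

    import numpy as np

    # roi_ts: (n_timepoints, n_voxels) timeseries of the ROI's voxels/vertices.
    def first_eigenvariate(roi_ts):
        y = roi_ts - roi_ts.mean(axis=0)
        u, s, vt = np.linalg.svd(y, full_matrices=False)
        ev = u[:, 0] * s[0]               # component explaining the most variance
        # Align the sign with the ROI mean so the timecourse is interpretable.
        if np.corrcoef(ev, roi_ts.mean(axis=1))[0, 1] < 0:
            ev = -ev
        return ev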