Ah, I understand. However, I'm not sure how to do this practically for the
FIX extended data. I'd need all the signal component timeseries and would
have to run a regression for each voxel, which might take a while. Are the
signal component timeseries supplied in the dataset?
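
In case it helps to make the question concrete, here is the kind of thing I
have in mind, assuming the per-run ICA component timeseries (melodic_mix) and
the FIX noise classification (Noise.txt) are shipped with the extended
package -- I haven't verified those file names, so treat this as a rough
sketch only:

    import numpy as np
    import nibabel as nib

    # Rough sketch, per run. File names are assumptions (not verified against
    # the extended package layout): melodic_mix holds the time x component ICA
    # timeseries, Noise.txt the (assumed 1-based) FIX noise component indices.
    img = nib.load("rfMRI_REST1_LR_hp2000_clean.nii")
    data = img.get_fdata()
    x, y, z, t = data.shape
    ts = data.reshape(-1, t).T                          # time x voxels

    mix = np.loadtxt("filtered_func_data.ica/melodic_mix")
    noise = np.array(open("Noise.txt").read().replace(",", " ").split(),
                     dtype=int) - 1
    signal = np.setdiff1d(np.arange(mix.shape[1]), noise)

    # regress the signal component timeseries (plus an intercept) out of
    # every voxel
    design = np.column_stack([mix[:, signal], np.ones(t)])
    beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
    resid = ts - design @ beta                          # "unstructured noise"

    sd = resid.std(axis=0)                              # noise std per voxel
    sd[sd == 0] = 1.0
    vn = (ts / sd).T.reshape(x, y, z, t)                # variance-normalised run

    nib.save(nib.Nifti1Image(vn.astype(np.float32), img.affine),
             "rfMRI_REST1_LR_hp2000_clean_vn.nii")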

Thanks for the support!

2018-03-07 17:07 GMT+01:00 Glasser, Matthew <glass...@wustl.edu>:

> The unstructured noise variance is computed from the standard deviation
> of the timeseries after you regress out all of the signal component
> timeseries. Normalizing each voxel by this value makes the unstructured
> noise equal in magnitude across the brain.
>
> I wouldn’t do smoothing unless it is constrained to the grey matter.
> Really, you won’t get an obvious benefit if you are averaging voxels
> within an ROI anyway, and that is a more accurate way to do things.
>
> I guess I don’t know enough about your study to know if the order
> matters.  If you are interested in effects that might be related to order
> (e.g. drowsiness being higher in later scans), then order might matter.
>
> Peace,
>
> Matt.
>
> From: David Hofmann <davidhofma...@gmail.com>
> Date: Wednesday, March 7, 2018 at 10:02 AM
>
> To: Matt Glasser <glass...@wustl.edu>
> Cc: hcp-users <hcp-users@humanconnectome.org>
> Subject: Re: [HCP-Users] Concatenating resting state runs
>
> Hey Matthew,
>
> I'm not sure I understood where to get the unstructured noise variance
> from; is it even possible to apply this to the FIX extended datasets?
>
> I thought about using 4 mm smoothing (maybe 2 mm) before extracting the
> VOI / ROI timecourses for each subject, which are then fed into the DCMs
> for each subject. I experimented with some HCP data before, and it seems
> smoothing increases the effect sizes a little bit. What is smoothing
> between parcels, by the way?
>
> Also, any comments on the order of concatenation? I concatenate all of
> the RL runs first and then the LR runs.
>
> 2018-03-07 16:17 GMT+01:00 Glasser, Matthew <glass...@wustl.edu>:
>
>> I typically variance normalize before concatenation, but do this based on
>> the unstructured noise variance.
>>
>> I would take the mean time course over an ROI that I thought to be
>> representative of a meaningful neuroanatomical subunit.
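
A minimal sketch of that idea, with placeholder file names (the mask is
assumed to be binary and on the same voxel grid as the functional data):

    import numpy as np
    import nibabel as nib

    # Mean time course over a binary ROI mask; "amygdala_mask.nii" is a
    # placeholder, not a file from the HCP release.
    func = nib.load("rfMRI_REST1_LR_hp2000_clean.nii").get_fdata()  # x, y, z, t
    mask = nib.load("amygdala_mask.nii").get_fdata() > 0            # x, y, z

    roi_ts = func[mask].mean(axis=0)          # one value per time point
    np.savetxt("amygdala_mean_ts.txt", roi_ts)

The resulting vector could then be supplied to the DCM in place of the
eigenvariate.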
>>
>> My understanding of how SPM’s DCM is typically implemented is that there
>> are large amounts of spatial smoothing, cross-subject alignment is done in
>> the volume, and ROIs are spheres of some radius.  All this would lead to a
>> lot of mixing of timecourses.  My suggestion was to use parcel timecourses
>> from some kind of parcellation.  If you have a good amygdala parcellation
>> that might be fine, though I would avoid smoothing the data between the
>> parcels.
>>
>> Peace,
>>
>> Matt.
>>
>> From: David Hofmann <davidhofma...@gmail.com>
>> Date: Wednesday, March 7, 2018 at 9:12 AM
>> To: Matt Glasser <glass...@wustl.edu>
>> Cc: hcp-users <hcp-users@humanconnectome.org>
>> Subject: Re: [HCP-Users] Concatenating resting state runs
>>
>> Hi Matthew,
>>
>> OK, so temporal filtering is done separately for each run. Any comments
>> on concatenation and z-standardization?
>>
>> I think there might be a work-around for supplying a custom ROI
>> timecourse to the DCM VOI files somehow, but which values should I input
>> as an alternative to the eigenvariate? The mean over all voxels in the
>> ROI would also be an option, but I'm not sure what you had in mind.
>>
>> Can you please elaborate on the issue of spatial localization you
>> mention? I'm not sure I understood. I'm using mask files to extract the
>> time courses, and I am especially interested in amygdala subregions.
>>
>> Also, what do you mean by areal ROIs, and why would they give a purer
>> signal?
>>
>> Thanks :)
>>
>> 2018-03-07 14:51 GMT+01:00 Glasser, Matthew <glass...@wustl.edu>:
>>
>>> You would want to apply temporal filtering separately to each run.  I
>>> wonder if there is a way you could just provide the ROI timecourses to
>>> SPM’s DCM model without using its tools for extracting the ROIs, so that
>>> you could avoid the spatial localization issues that SPM has.  If you
>>> used areal ROIs, you likely wouldn’t even need the eigenvariate approach,
>>> as you would be getting a much purer signal.
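
For reference, a per-run temporal filter in the spirit of SPM's DCT
high-pass (using the ~128 s cutoff mentioned in the original message below)
might be sketched like this; the TR value and the time-by-voxels layout are
assumptions, not HCP-provided code:

    import numpy as np

    def dct_highpass(run, tr, cutoff=128.0):
        """Sketch of an SPM-style high-pass: regress low-frequency discrete
        cosine terms out of a single run. `run` is a (time x voxels) array;
        drifts slower than `cutoff` seconds are removed."""
        n = run.shape[0]
        k = int(np.floor(2.0 * n * tr / cutoff))   # number of low-freq DCT terms
        t = np.arange(n)
        basis = [np.ones(n)]
        basis += [np.cos(np.pi * (2 * t + 1) * j / (2.0 * n))
                  for j in range(1, k + 1)]
        dct = np.column_stack(basis)
        beta, *_ = np.linalg.lstsq(dct, run, rcond=None)
        return run - dct @ beta                    # filtered run

    # e.g. run_filt = dct_highpass(run_ts, tr=0.72)   # HCP rfMRI TR is 0.72 s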
>>>
>>> Peace,
>>>
>>> Matt.
>>>
>>> From: <hcp-users-boun...@humanconnectome.org> on behalf of David
>>> Hofmann <davidhofma...@gmail.com>
>>> Date: Wednesday, March 7, 2018 at 2:32 AM
>>> To: hcp-users <hcp-users@humanconnectome.org>
>>> Subject: [HCP-Users] Concatenating resting state runs
>>>
>>> Hi all,
>>>
>>> For a later analysis where I extract ROIs with SPM, I need to
>>> concatenate the resting state runs and want to make sure I'm doing it
>>> correctly. SPM extracts the first eigenvariate of an ROI, i.e. the
>>> component that explains the most variance.
>>>
>>> I'm using the *Resting State fMRI 1 FIX-Denoised (Extended)* and *Resting
>>> State fMRI 2 FIX-Denoised (Extended)* datasets, i.e. the files
>>> rfMRI_REST1_LR_hp2000_clean.nii, rfMRI_REST1_RL_hp2000_clean.nii, and so on.
>>>
>>> I chose the following approach:
>>>
>>> 1. z-standardize each run (each voxel time course), i.e. RL and LR
>>> separately (a rough sketch of steps 1-2 follows this list)
>>> 2. Then concatenate them
>>> 3. Run the SPM routines, which will also apply a high-pass filter of
>>> about 128 s to the already concatenated data (this is for a DCM analysis
>>> rather than functional connectivity)
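
In code, steps 1-2 might look roughly like this (the run order shown, RL
then LR, and the output name are just placeholders, and both runs are
assumed to be on the same grid):

    import numpy as np
    import nibabel as nib

    # Rough sketch of steps 1-2: z-standardize each voxel time course per
    # run, then concatenate the runs along the time axis.
    runs = []
    for fname in ["rfMRI_REST1_RL_hp2000_clean.nii",
                  "rfMRI_REST1_LR_hp2000_clean.nii"]:
        img = nib.load(fname)
        data = img.get_fdata()                      # x, y, z, t
        mean = data.mean(axis=3, keepdims=True)
        sd = data.std(axis=3, keepdims=True)
        sd[sd == 0] = 1.0                           # guard against empty voxels
        runs.append((data - mean) / sd)             # per-voxel z-scores

    concat = np.concatenate(runs, axis=3)           # joined along time
    nib.save(nib.Nifti1Image(concat.astype(np.float32), img.affine),
             "rfMRI_REST1_concat_z.nii")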
>>>
>>> I have the following questions:
>>>
>>> 1. Is this approach correct?
>>> 2. Does the order of concatenation matter? That is, RL/LR vs. LR/RL, or
>>> is it important to concatenate the runs in the order they were acquired
>>> for each subject? I read that the order sometimes changes between
>>> subjects, such that LR came first in one subject and RL first in another.
>>> 3. Since SPM will run a high-pass filter on the concatenated data, would
>>> it be better to high-pass filter each run *separately* before
>>> concatenation?
>>> 4. Is this approach also applicable to the task data (i.e. standardize
>>> and filter separately before concatenation)?
>>>
>>> Thanks in advance
>>>
>>> David
>>>
>>>
>>>
>>
>>
>
>

_______________________________________________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
