Hi Peggy - Thanks for clarifying. I'd like to see whether things work
differently when you have 6 time points vs. 2 time points; in theory, they
should not. Could you please upload for me the full set of directories
from all time points and the base template (all the tracula-generated
directories for each)?
Thanks!
a.y
PS: I know this was just for testing purposes, but in the future, if you
want to run an analysis of only 2 time points, you should generate the base
template from those 2 alone and not from all 6 time points, especially if
you're mixing in subjects that have only 2 time points.
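For reference, here is a minimal sketch of what the relevant part of such a
dmrirc might look like when the base template is built from only the two
time points of interest. All subject, base, and path names below are
hypothetical placeholders, and the exact time-point naming should follow
the longitudinal tracula documentation:

    # Hypothetical dmrirc excerpt for a 2-time-point longitudinal run.
    # Names and paths are placeholders only.
    setenv SUBJECTS_DIR /path/to/freesurfer/subjects
    set dtroot   = /path/to/tracula/output
    # One entry per time point of this subject
    set subjlist = ( subj01_tp1 subj01_tp2 )
    # Base template name, repeated once per time point above;
    # the base itself should be built from these two time points only
    set baselist = ( subj01_base subj01_base )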
On Tue, 23 Jun 2015, Peggy Skelly wrote:
We generally have 2 scans per subject, pre- and post-intervention, but the
subject whose data and list of FA values I sent earlier in this thread has
6 scans. The runs at 7500, 20k, and 50k samples used all 6 scans. To save
time, I ran only 2 scans through trac-all -path for the 100k run(s)
(the 2nd run at 100k gave fa_avg_wgt for rh.cst = 0.532818).
Peggy
On Mon, Jun 22, 2015 at 5:10 PM, Anastasia Yendiki
<ayend...@nmr.mgh.harvard.edu> wrote:
Ok, I didn't realize you were using the longitudinal stream. So
the data sets that you sent were from one of the time points?
How many time points are there for a subject?
The longitudinal stream uses all the time points at once, which
makes convergence a bit slower than in the cross-sectional case,
so the default number of samples is somewhat higher than the
cross-sectional default (but much less than 100K). We should
look into this with the full longitudinal set of images to see
why it's converging so much more slowly for your data.
On Mon, 22 Jun 2015, Peggy Skelly wrote:
Thanks for looking at the data.
1 - Since the data have already been acquired, I can't change that. For
the new study we are setting up, we'll definitely use isotropic
resolution, and we'll probably acquire a fieldmap to correct for EPI
distortions. FSL has some recommendations in the FDT/FAQ, and there are
also recommendations from www.birncommunity.org.
2 - I am using the longitudinal processing stream. I usually create a bash
script to call trac-all -path with the dmrirc.txt file listing the
sessions (time points) and the baselist for a single subject. I can then
have a few scripts for different subjects running concurrently (see the
sketch below). Looking at Len_Avg for rh.cst for one subject, there seems
to be more variability between -path runs than between sessions/time
points, even when sessions are on different scanners (1.5T vs. 3T).
Specifically (using 7500 iterations), for one run Len_Avg of rh.cst was
41.8583, and the value was the same to 4 decimal places across the 6
different sessions (4 scans at 3T and 2 at 1.5T). On a separate run,
Len_Avg was 50.0967, again agreeing to 4 decimal places across the 6
scans.
We definitely need to run more than the default number of iterations, but
do all the subjects need to be run at the same time (listed in a single
dmrirc file)? Within the longitudinal stream, during trac-all -path, there
seems to be initialization with priors from the base space. Is there some
other interaction between time points/sessions/the subject list that
causes session-similar output but run-to-run variability? I'm wondering if
the runs use the same random number seed for the MCMC algorithm?
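A minimal sketch of the kind of per-subject wrapper I mean, assuming one
dmrirc file per subject (the config and log file names are just
placeholders):

    #!/bin/bash
    # Launch trac-all -path for several subjects in parallel,
    # each with its own dmrirc, and log each run separately.
    for cfg in dmrirc.subj01 dmrirc.subj02 dmrirc.subj03; do
        trac-all -path -c "$cfg" > "trac-all.${cfg}.log" 2>&1 &
    done
    wait   # block until all background runs have finished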
Peggy
On Fri, Jun 19, 2015 at 3:04 PM, Anastasia Yendiki
<ayend...@nmr.mgh.harvard.edu> wrote:
Hi Peggy - Thanks for sending me the data. The tract reconstructions look
quite different from what I'm used to seeing, whether from other users or
from our own data. Visually, it seems to me that the 100K one is more
converged: the probability distributions of the pathways look much
sharper, so when thresholded you don't get as much of the tails of the
distribution, which would be noisier. It's hard to tell why it's taking so
much longer to sample the distributions in your data. The only thing that
stands out to me is that the resolution in z is quite low (3mm), so some
of the tracts (see, for example, the cingulum and corpus callosum) are
only 1 voxel thick in z. This, combined with partial voluming (the
ventricles seem enlarged), probably introduces more uncertainty in the
data. If you can change the acquisition, I'd recommend isotropic
resolution (the usual is 2mm), as anisotropic voxels introduce bias in
estimates of diffusion anisotropy. If you have to stick with the existing
acquisition, it looks like the default sampling settings will have to be
changed for your case. I'm happy to help troubleshoot further.
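For concreteness, the sampling settings can be raised in the dmrirc before
re-running trac-all -path; the values below are only an illustration, not
a tuned recommendation:

    # dmrirc excerpt: raise the MCMC sampling parameters above
    # the defaults before re-running trac-all -path.
    # The specific numbers are illustrative only.
    set nburnin = 500       # burn-in iterations before samples are kept
    set nsample = 20000     # number of MCMC samples after burn-in
    set nkeep   = 5         # keep every 5th path sample
    set reinit  = 0         # do not reinitialize path reconstruction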
For a longitudinal study, I recommend using the longitudinal tracula
stream. We've found that it improves test-retest reliability in
longitudinal measurements substantially, while also improving sensitivity
to longitudinal changes (the paper is in review).
Best,
a.y
On Fri, 19 Jun 2015, Peggy Skelly wrote:
It's been a while, but we are still working on computing our tracts in
tracula!
We are still struggling with variability in the output of 'trac-all
-path'. Running with the default number of iterations, there was enough
variability in the output across multiple runs that I ran longer
iterations to see if the output would converge to more consistent values.
Here is the fa_avg_weight of the rh.cst for a single set of DWI images
processed through tracula, with trac-all -path then run several times
(with reinit=0 and default values for nburnin and nkeep):
nsample=7500:    0.516818, 0.529206, 0.495232, 0.514368, 0.492393, 0.51711
nsample=20,000:  0.521082, 0.513492, 0.50974
nsample=50,000:  0.506167, 0.502324, 0.504106
nsample=100,000: 0.530423
Between 7500 and 50k samples it does seem like the algorithm is
converging, but at 100k samples the output falls outside the range of all
previous runs. (I keep looking for errors in how I ran that one, and am
running it again.)
In Yendiki et al., 2011, you state that the burn-in and the number of
iterations needed to ensure convergence are a topic for future
investigation. Have you had any more thoughts or progress on this topic?
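Roughly, the repeat runs look like the following sketch. The stats-file
location and field name follow the standard tracula output layout as I
understand it, and the paths are placeholders:

    #!/bin/bash
    # Repeat trac-all -path with the same dmrirc (reinit = 0) and
    # record the weighted-average FA for rh.cst after each run.
    # Each run overwrites the previous output, so grep before the
    # next run starts.
    dtroot=/path/to/tracula/output   # placeholder; must match dmrirc
    for run in 1 2 3; do
        trac-all -path -c dmrirc.txt
        grep -i FA_Avg_Weight \
            "$dtroot"/subj01/dpath/rh.cst*/pathstats.overall.txt \
            >> fa_avg_weight_runs.txt
    done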
We are doing a longitudinal analysis, comparing FA_avg_weight over tracts
pre- and post- a 3-month therapeutic intervention, so we anticipate rather
small changes. Do you have any suggestions for how to handle this
variability?
Thanks,
Peggy