[Freesurfer] dmri_poistats

2016-08-23 Thread Alshikho, Mohamad J.
Dear Freesurfer experts,
I would like to inquire about the tool "dmri_poistats"
(https://surfer.nmr.mgh.harvard.edu/fswiki/PoistatsOverview). This tool is
included as a beta release in FS 5.3 and FS 6.0.

In a literature review I found that colleagues used version 1.4 of this tool,
e.g. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2780023/

Kindly, I have the following two questions:

1.   What is the final release of this tool?

2.   I have DWI images, but no T1 images are available. Is it reliable to do 
tractography of the corticospinal tract using this tool?

Thank you for any advice!
Mohamad

___
Freesurfer mailing list
Freesurfer@nmr.mgh.harvard.edu
https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer


The information in this e-mail is intended only for the person to whom it is
addressed. If you believe this e-mail was sent to you in error and the e-mail
contains patient information, please contact the Partners Compliance HelpLine at
http://www.partners.org/complianceline . If the e-mail was sent to you in error
but does not contain patient information, please contact the sender and properly
dispose of the e-mail.


[Freesurfer] Using Freesurfer for skull stripping then reorienting images to use in FSL

2016-08-23 Thread sarai...@alumni.ubc.ca
Hello,

We are using FreeSurfer to skull-strip T1 images from a high-motion pediatric 
sample. We want to use FreeSurfer to get brain-only images and then do further 
analysis in FSL. The problem is that, in FreeSurfer, the images are in a 
different orientation than in FSL. For example, the same brain-extracted image 
appears rotated a bit more to the left in FreeSurfer than the image extracted 
in FSL.

QUESTION: Does anyone with FreeSurfer or FSL knowledge know how to convert 
FreeSurfer's brain-only images so that we can use them in FSL for ICA (both 
group and individual) as well as TBSS and tractography (back in FreeSurfer)?
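For reference, one common approach (not from this thread; a sketch assuming FreeSurfer and FSL are installed, and that `brainmask.mgz` and `rawavg.mgz` are the standard recon-all outputs; the subject name and output filenames are illustrative) is to reslice the skull-stripped volume back into the native scan's geometry and then let FSL reorient it:

```shell
# Illustrative subject ID; adjust to your study layout.
sub=sub01

# brainmask.mgz is FreeSurfer's skull-stripped, conformed volume.
# Reslice it like the original input (rawavg.mgz) so it matches the
# native scan's geometry, and write NIfTI for FSL:
mri_convert --reslice_like "$SUBJECTS_DIR/$sub/mri/rawavg.mgz" \
    "$SUBJECTS_DIR/$sub/mri/brainmask.mgz" brain_native.nii.gz

# Reorient to the axis convention FSL tools expect:
fslreorient2std brain_native.nii.gz brain_std.nii.gz
```

The reoriented `brain_std.nii.gz` can then be fed to FSL tools such as MELODIC; the reslicing step is what avoids the apparent rotation between the two packages, since recon-all's conformed space differs from the native acquisition.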

Thank you for your time and expertise.

Cheers,
Sara



Re: [Freesurfer] TRACULA: narrow probability distributions

2016-08-23 Thread Anastasia Yendiki


Yes, it shouldn't hurt.

On Tue, 23 Aug 2016, Harms, Michael wrote:



Ok.  We’ll give it a try.  Just to confirm, you're agreeing that
increasing nburnin and nsample is a “good” thing, right?  (e.g., 1000, and
15000, respectively?)  If anything, we should be less likely to get narrow
tracts when using larger nburnin/nsample values, right?


[Freesurfer] TRACULA: narrow probability distribution

2016-08-23 Thread Newbold, Dillan
Dear Anastasia,

I’ve been looking at a lot of Tracula path.pd files and I’ve found that some 
probability distributions are only a single voxel wide, similar to the path.map 
file. The few nonzero voxels in these path.pd files have very high 
probability values. When an isosurface is generated for these tracts, it looks 
like a short thin blob somewhere in the usual tract distribution. I’ve seen 
descriptions in the archives of similar “short thin tracts,” but, from what I 
have seen, no one has offered a satisfying explanation for why these occur.

What I think is happening in these tracts is that a maximum-probability (or 
local maximum) path is found during a burn-in iteration and all following 
perturbations of that path are rejected. Since the probability value in the 
path.pd is equal to the number of sample paths intersecting that voxel, finding 
a local maximum early on results in a small number of very high-probability 
voxels. Consistent with this explanation, I’ve found that this issue occurs 
more frequently when nburnin is set to 1000 (default = 200). A similar issue 
can occur if a local maximum is found early during the sample iterations, and 
this results in a path.pd file containing a small number of voxels with very 
high values surrounded by a larger area of low-value voxels. When a 20% 
threshold is applied, the result is the same as when a local maximum occurs 
during a burn-in iteration.
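The counting mechanism described above can be sketched generically (a toy illustration, not TRACULA's actual implementation; the paths, voxel coordinates, and the 20% cutoff applied here are made up for demonstration): a voxel's path.pd value is the number of sample paths intersecting it, so a chain stuck on one path concentrates all counts on that single path's voxels.

```python
import random

def accumulate_pd(paths):
    """Count how many sample paths intersect each voxel (the path.pd value)."""
    counts = {}
    for path in paths:
        for voxel in set(path):          # each path counts once per voxel
            counts[voxel] = counts.get(voxel, 0) + 1
    return counts

def threshold(counts, frac=0.20):
    """Keep voxels at or above frac of the maximum count (a 20%-style cutoff)."""
    cutoff = frac * max(counts.values())
    return {v: c for v, c in counts.items() if c >= cutoff}

random.seed(0)
# A healthy chain: each sample is a perturbed path, so many voxels get visited.
varied = [[(x, random.choice([-1, 0, 1])) for x in range(10)] for _ in range(200)]
# A "stuck" chain: every perturbation rejected, the same path resampled 200 times.
stuck = [[(x, 0) for x in range(10)]] * 200

wide = threshold(accumulate_pd(varied))
narrow = threshold(accumulate_pd(stuck))
print(len(wide), len(narrow))  # the stuck chain keeps only the one path's voxels
```

In the stuck case every surviving voxel carries the maximum count, which matches the described picture of a few very high-probability voxels forming a thin blob.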

Does my understanding of this issue seem correct?

None of this would be a problem if my only aim were to find the single path 
with the maximum a posteriori probability, but I’m concerned that the average 
and weighted_average stats for these tracts will be less accurate. Since these 
distributions include only a small fraction of the voxels included in most 
tract distributions, is it likely that the average and weighted_average stats 
from these narrow distributions are less representative of the whole tract and 
more subject to random noise?

Given these concerns, what type of overall path statistics do you think is most 
descriptive of a tract? Also, do you feel that higher nburnin and nsample 
values should lead to superior results? I would have thought this to be the 
case, but now it seems to me that setting either of these values too high will 
result in narrow probability distributions and bad statistics.

Thank you,
Dillan



Re: [Freesurfer] TRACULA: narrow probability distributions

2016-08-23 Thread Harms, Michael

Ok.  We’ll give it a try.  Just to confirm, you're agreeing that
increasing nburnin and nsample is a “good” thing, right?  (e.g., 1000, and
15000, respectively?)  If anything, we should be less likely to get narrow
tracts when using larger nburnin/nsample values, right?

thanks,
-MH

--
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.  Tel: 314-747-6173
St. Louis, MO 63110  Email: mha...@wustl.edu





Re: [Freesurfer] TRACULA: narrow probability distributions

2016-08-23 Thread Anastasia Yendiki


Hi Michael - Even if it starts very close to the true max of the 
distribution, in practice it'll never stay put. MCMC will accept a new 
sample path with probability 1 if the new sample has greater probability 
than the previous sample, and it will also accept it with some very small 
probability if it doesn't. This means that it'll still explore the space 
around the max, even if it ends up returning to the max pretty quickly. So 
most likely there's something weird about the initial path if it doesn't 
move at all.


Best,

a.y
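The acceptance rule described above can be illustrated with a generic Metropolis sketch (a toy 1-D example with a Gaussian log-density, not TRACULA's actual sampler): even a chain initialized exactly at the maximum keeps moving, because downhill proposals are still accepted with probability exp(Δ log p).

```python
import math
import random

def metropolis_accept(logp_new, logp_old, rng=random):
    """Accept with probability 1 if the proposal is more probable,
    otherwise with probability exp(logp_new - logp_old) < 1."""
    if logp_new >= logp_old:
        return True
    return rng.random() < math.exp(logp_new - logp_old)

random.seed(1)
logp = lambda x: -0.5 * x * x   # log-density of a standard 1-D Gaussian
x = 0.0                         # start exactly at the maximum
moved = 0
for _ in range(1000):
    proposal = x + random.gauss(0.0, 0.5)
    if metropolis_accept(logp(proposal), logp(x)):
        x = proposal
        moved += 1
print(moved)  # most proposals are accepted; the chain does not stay put
```

This is why a path.pd that never moves off its initial path suggests something unusual about the initialization rather than normal MCMC behavior.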


Re: [Freesurfer] TRACULA: narrow probability distributions

2016-08-23 Thread Harms, Michael

Hi Anastasia,
My interpretation of the “reinit” parameter is that it is for situations
where a narrow probability distribution is assumed to be incorrect.  But
how do you know whether it is indeed incorrect, or whether in fact the
true equilibrium distribution is (correctly) very narrow?

In particular, my understanding of MCMC is that higher burn-in, and more
sampling iterations are only a “good” thing.  i.e., If we have the time
and compute resources, we shouldn’t hesitate to increase them from their
defaults, to help to make sure we are capturing the true equilibrium
distribution.  So, if increasing the nburnin and nsample values makes it
more likely to find spatially narrow tract distributions, isn’t that a
sign that the true distribution should indeed be narrow?

thanks,
-MH

--
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.  Tel: 314-747-6173
St. Louis, MO 63110  Email: mha...@wustl.edu






The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.


Re: [Freesurfer] TRACULA: narrow probability distributions

2016-08-23 Thread Anastasia Yendiki


Hi Dillan - There's a work-around for this, see the reinit variable at the 
bottom of the sample config file:

http://surfer.nmr.mgh.harvard.edu/fswiki/dmrirc

I'm hoping to make this happen automatically soon!

Best,

a.y





Re: [Freesurfer] Tracula ROI analysis CVS vs MNI reg

2016-08-23 Thread Anastasia Yendiki

Hi Yoon - The wmparc and aseg ROIs are in the subject's native T1 space, 
so you don't need to register across subjects to get ROI-based diffusion 
measures. You just need to register between the subject's own T1 and DWIs 
- the recommended method for that is bbregister. You can find more in the 
multimodal tutorial on the freesurfer wiki.

Best,
a.y

On Tue, 23 Aug 2016, Chung, Yoonho wrote:

> 
> Hi Anastasia,
> 
> 
> Is there a preferable method between CVS vs. MNI registration when extracting 
> diffusion measures from ROIs (white matter, ASEG, etc.)?
> 
> 
> Best,
> 
> Yoon
> 
> 
> 
>



Re: [Freesurfer] TRACULA - difference in tracts using MNI vs CVS

2016-08-23 Thread Anastasia Yendiki


Hi Elijah - Does the directory in the error message exist?

a.y

On Fri, 19 Aug 2016, Elijah Mak wrote:


Hi Anastasia,
The error message occurs for other control points too, but it does not kill the 
-prior stage. 

I have attached the full trac-all.log file.

This error message is also found in other subjects where I am struggling to get 
a good initialisation.

The ERROR: fio_pushd:subject/dmri/xfms/cvs/final_CVSmorph_tocvs_avg35/mri

Thank you!

Best Wishes,
Elijah
--

Elijah Mak

PhD Candidate | Psychiatry

University of Cambridge

Trinity College, Cambridge, CB2 1TQ





[Freesurfer] PET surface sampling

2016-08-23 Thread Jonathan DuBois
Hi, 

I’m trying to use the PETsurfer pipeline to perform BP analysis, but I’m not 
sure at which point in the pipeline to sample the data to the surface, or 
which registration to use.

I tried running the pipeline using dynamic kinetic modeling as described here: 
http://surfer.nmr.mgh.harvard.edu/fswiki/PetSurfer, and then used the 
following command to try to sample the BP map to the surface: 
mri_vol2surf --hemi lh --projfrac .5 --out_type mgh --cortex --mov 
mrtm2/bp.nii.gz --reg aux/bbpet2anat.lta --o lh.bp_pf5.mgh

But got the following error: 
ERROR: source volume is neither source nor target of the registration

I also tried sampling the mrtm2/bp.nii.gz to the orig.mgz volume space and then 
to the surface using: 
mri_vol2vol --mov mrtm2/bp.nii.gz --reg aux/bbpet2anat.lta --o 
mrtm2/bp_2anat.nii.gz --targ $SUBJECTS_DIR/subject1/mri/orig.mgz; mri_vol2surf 
--hemi lh --projfrac .5 --out_type mgh --cortex --mov mrtm2/bp_2anat.nii.gz 
--regheader subject1 --o lh.bp_pf5.mgh

This worked, although the surface file did not have any values…

Could you please let me know the best way to sample the data to the 
surface, and whether it should be done prior to mri_glmfit or after. 

Thanks 
Jonathan



[Freesurfer] GLM surface based cortical thickness analysis vs aparcstat table cortical thickness

2016-08-23 Thread miracooloz
Hello freesurfer experts,

What's the difference between GLM surface-based cortical thickness analysis 
and the cortical thickness analysis performed when using regions extracted 
from the aparcstat table?

Best,
Paul

Sent from my BlackBerry 10 smartphone.


[Freesurfer] Tracula ROI analysis CVS vs MNI reg

2016-08-23 Thread Chung, Yoonho
Hi Anastasia,


Is there a preferable method between CVS and MNI registration when extracting 
diffusion measures from ROIs (white matter, aseg, etc.)?


Best,

Yoon



Re: [Freesurfer] permutation/monte carlo

2016-08-23 Thread Douglas N Greve


On 08/23/2016 03:18 AM, maaike rive wrote:
>
> Dear Douglas,
>
>
> Sorry to bother you with this,  but after reading the thread about the 
> Eklund papers  I'm getting confused about what I actually did when 
> correcting for multiple comparisons and I'd also like to know whether 
> I'm on the safe side.
>
>
> I used pre-cached Monte Carlo simulation with a cluster-forming 
> threshold of 0.05 (at least so I thought):
>
> mri_glmfit --sim - --glmdir - --cache 1.3 abs --cache-dir - 
> --cwpvalthresh 0.01
>
>
> Am I correct?
>
This command line will use a voxel-wise threshold of p<.05 
(-log10(.05)=1.3) and will report clusters that are significant at p<.01 
(rather than the usual p<.05).
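(As a quick sanity check, the value passed to --cache is just the voxel-wise 
p threshold expressed as -log10(p); this one-liner, runnable in any POSIX 
shell with awk, reproduces the 1.3 figure:)

```shell
# Convert a voxel-wise p threshold to the -log10(p) value used by the
# pre-cached simulation flag: p = 0.05 -> 1.3
p=0.05
awk -v p="$p" 'BEGIN { printf "%.1f\n", -log(p)/log(10) }'   # prints 1.3
```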
>
> And am I also correct that the FDR may be about 13% instead of 5% when 
> doing this?
>
It is hard to say without knowing what the smoothness level is. For my 
simulations, it was about 13%, but that was also with thickness data. As you 
point out, the lGI is much smoother.
>
> (I smoothed thickness and surface area data with FWHM 10mm, but not 
> the lGI data, because they were already very smooth and pre-cached 
> simulation didn't work after additional smoothing).
>
>
> Thanks, Maaike
>

-- 
Douglas N. Greve, Ph.D.
MGH-NMR Center
gr...@nmr.mgh.harvard.edu
Phone Number: 617-724-2358
Fax: 617-726-7422

Bugs: surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
FileDrop: https://gate.nmr.mgh.harvard.edu/filedrop2
www.nmr.mgh.harvard.edu/facility/filedrop/index.html
Outgoing: ftp://surfer.nmr.mgh.harvard.edu/transfer/outgoing/flat/greve/




Re: [Freesurfer] Recon-all Correcting Defect 0

2016-08-23 Thread Douglas N Greve
See the cerebellum section here

http://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/Troubleshooting
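
If the cerebellum really has been cut out of wm.mgz, the usual remedy per 
that tutorial is to edit the volume and regenerate the surfaces; a sketch of 
the rerun, with a hypothetical subject id:

```shell
# After restoring the missing voxels in wm.mgz (e.g. with freeview's
# voxel edit tools), rerun recon-all from the white-matter stage onward.
# "subj01" is a hypothetical subject id; SUBJECTS_DIR must be set.
recon-all -autorecon2-wm -autorecon3 -s subj01
```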


On 08/23/2016 06:10 AM, Heidi Foo wrote:
> Hi Douglas,
>
> Yes it does seem like large parts of the cerebellum have been removed. 
> Is there anything I could do to get around this problem?
>
> Thanks.
>
> Regards,
> Heidi
>
> On Tuesday, 23 August 2016, Douglas N Greve  > wrote:
>
> It looks like there is a very large defect. Check the wm.mgz file
> to see
> if the cerebellum has been removed.
>
>
> On 08/21/2016 08:46 AM, Heidi Foo wrote:
> > Dear FreeSurfer team,
> >
> > I apologize for sending this email again. I am currently running
> > recon-all for my subjects but for 2 of them, it stops at XL
> correcting
> > defect. Do you have any ideas how I can deal with this problem?
> > Attached the log file for your reference.
> >
> > Thanks.
> >
> > Regards,
> > Heidi
> >
> >
> > ___
> > Freesurfer mailing list
> > Freesurfer@nmr.mgh.harvard.edu 
> > https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
> 
>
> --
> Douglas N. Greve, Ph.D.
> MGH-NMR Center
> gr...@nmr.mgh.harvard.edu 
> Phone Number: 617-724-2358
> Fax: 617-726-7422
>
> Bugs: surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
> 
> FileDrop: https://gate.nmr.mgh.harvard.edu/filedrop2
> 
> www.nmr.mgh.harvard.edu/facility/filedrop/index.html
> 
> Outgoing:
> ftp://surfer.nmr.mgh.harvard.edu/transfer/outgoing/flat/greve/
> 
>
> ___
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu 
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
> 
>
>
> The information in this e-mail is intended only for the person to
> whom it is
> addressed. If you believe this e-mail was sent to you in error and
> the e-mail
> contains patient information, please contact the Partners
> Compliance HelpLine at
> http://www.partners.org/complianceline
>  . If the e-mail was sent
> to you in error
> but does not contain patient information, please contact the
> sender and properly
> dispose of the e-mail.
>
>
>
> ___
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer

-- 
Douglas N. Greve, Ph.D.
MGH-NMR Center
gr...@nmr.mgh.harvard.edu
Phone Number: 617-724-2358
Fax: 617-726-7422

Bugs: surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
FileDrop: https://gate.nmr.mgh.harvard.edu/filedrop2
www.nmr.mgh.harvard.edu/facility/filedrop/index.html
Outgoing: ftp://surfer.nmr.mgh.harvard.edu/transfer/outgoing/flat/greve/



Re: [Freesurfer] sig.sum files shows more clusters than sig.mgh image

2016-08-23 Thread Douglas N Greve
Can you send the command line, the sum file, and a pic of the surface?


On 08/23/2016 04:39 AM, Oana Georgiana Rus wrote:
>
> Dear Freesurfer experts,
>
> I have run an analysis on GM Volume in Freesurfer.
>
> The contrast of interest was patients vs. controls (/command mri_glmfit/).
>
> I then ran the /mri_surfcluster/ command to get a text file with the 
> FDR-corrected significant clusters and their peak coordinates, 
> resulting in a *sig.mgh image* and a *sig.sum file*.
>
> *After looking at the sig.mgh image with tksurfer and searching for 
> the significant clusters in the sig.sum files, I realized that one 
> cluster which is pretty big is missing in the sig.mgh image but is 
> present in the sig.sum file.*
>
> *Do you know why this could happen? Why is the image incomplete?*
>
> *Anybody encountered something similar?*
>
> Because in the next step I want to extract the values of the 
> significant clusters and correlate them with clinical scores, it would 
> be good if the number of clusters in the result image and the result 
> file coincide.
>
> Thanks in advance for your help.
>
> Best,
>
> Georgiana
>
>
>
>
> -- 
> Oana Georgiana Rus
> PhD Student
> Neuroimaging Center TUM-NIC
> Klinikum rechts der Isar
> Technische Universität München
> Einsteinstr.1
> 81675 München
> Raum 5.8
>
> Tel. 089 4140 7971
>
>
> ___
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer

-- 
Douglas N. Greve, Ph.D.
MGH-NMR Center
gr...@nmr.mgh.harvard.edu
Phone Number: 617-724-2358
Fax: 617-726-7422

Bugs: surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
FileDrop: https://gate.nmr.mgh.harvard.edu/filedrop2
www.nmr.mgh.harvard.edu/facility/filedrop/index.html
Outgoing: ftp://surfer.nmr.mgh.harvard.edu/transfer/outgoing/flat/greve/




Re: [Freesurfer] qcache for qdec longitudinal analysis

2016-08-23 Thread Nick Corriveau Lecavalier
Thank you very much. I will be looking forward to this.

Nick Corriveau-Lecavalier
Étudiant au Ph.D. recherche/intervention, option neuropsychologie clinique
Coordonateur à la recherche, Association étudiante des cycles supérieurs en 
psychologie de l'Université de Montréal (AÉCSPUM)
Université de Montréal
Centre de recherche de l'Institut universitaire de gériatrie de Montréal, 
Bureau M7819

On Aug 23, 2016, at 4:57 AM, Martin Reuter 
> wrote:

Hi Nick,

when using the 2-stage model (long_mris_slopes) you do not need the qcache. In 
qdec you will not look at thickness, but at the percent change or the rate of 
thickness change as computed by long_mris_slopes. For that to work, you need to 
create/modify the .qdecrc file so that qdec finds the files in the base. 
Furthermore, you need to pass a "cross-sectional" qdec table, with one row per 
subject, where the fsid contains the id of the base.

This is all described on the wiki page.

Best, Martin

On Aug 22, 2016, at 10:24 PM, Nick Corriveau Lecavalier 
> wrote:

Hi Freesurfer team,

I am planning on following this page for my longitudinal group analysis using 
QDEC: http://www.freesurfer.net/fswiki/FsTutorial/QdecGroupAnalysis_freeview. 
However, it tells me to run -qcache for all of my subjects. Since I already ran 
the -long stream for all my subjects, do I need to rerun them all adding the 
-qcache command?

I have also run long_mris_slopes on all of my subjects, which produced 
smoothed thickness files for all of them (all located in the ''base'' 
directories). Are these files okay for my QDEC analysis, or do I really need 
to run all of my subjects over again with -qcache?

Thank you very much,

Nick Corriveau-Lecavalier, B.Sc. (Hons.)
Étudiant au Ph.D. recherche/intervention, option neuropsychologie clinique
Coordonateur à la recherche, Association étudiante des cycles supérieurs en 
psychologie de l'Université de Montréal (AÉCSPUM)
Université de Montréal
Centre de recherche de l'Institut universitaire de gériatrie de Montréal, 
bureau M7819


Re: [Freesurfer] [Martinos-faculty] Open position as MRI physicist at Copenhagen University Hospital, Denmark (fwd)

2016-08-23 Thread Bruce Fischl
FYI


The Neurobiology Research Unit (NRU) at Copenhagen University Hospital,
Rigshospitalet has an opening for a MRI physicist position. The position
entails design, development, implementation and evaluation of novel
multimodal MRI acquisition protocols and image analysis techniques in
relation to our many exciting brain research projects which employ
Siemens 3 Tesla MRI. Also, ample opportunities will be given to conduct
your own research projects.

This is a fully funded 2-year full time position with potential for
on-going renewal. Employment will start October 1st, 2016, or as soon as
possible thereafter. Salary will be according to the current agreement
and hence an annual gross salary of at least DKK 460,000 (~USD 70,000)
can be expected. A special reduced tax scheme is offered for researchers
recruited abroad, see https://www.workindenmark.dk/Working-in-DK/Tax.
Primary work place will be Neurobiology Research Unit, Rigshospitalet.
Close collaboration will take place with our national collaborators at
the Department of Clinical Physiology and the Department of Diagnostic
Radiology at Rigshospitalet as well as with our international
collaborators.

For further information and application, see:
https://candidate.hr-manager.net/ApplicationInit.aspx?cid=342=201591=17200=4710


___
Martinos-faculty mailing list
martinos-facu...@nmr.mgh.harvard.edu
https://mail.nmr.mgh.harvard.edu/mailman/listinfo/martinos-faculty





Re: [Freesurfer] Recon-all Correcting Defect 0

2016-08-23 Thread Heidi Foo
Hi Douglas,

Yes it does seem like large parts of the cerebellum have been removed. Is
there anything I could do to get around this problem?

Thanks.

Regards,
Heidi

On Tuesday, 23 August 2016, Douglas N Greve 
wrote:

> It looks like there is a very large defect. Check the wm.mgz file to see
> if the cerebellum has been removed.
>
>
> On 08/21/2016 08:46 AM, Heidi Foo wrote:
> > Dear FreeSurfer team,
> >
> > I apologize for sending this email again. I am currently running
> > recon-all for my subjects but for 2 of them, it stops at XL correcting
> > defect. Do you have any ideas how I can deal with this problem?
> > Attached the log file for your reference.
> >
> > Thanks.
> >
> > Regards,
> > Heidi
> >
> >
> > ___
> > Freesurfer mailing list
> > Freesurfer@nmr.mgh.harvard.edu 
> > https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
>
> --
> Douglas N. Greve, Ph.D.
> MGH-NMR Center
> gr...@nmr.mgh.harvard.edu 
> Phone Number: 617-724-2358
> Fax: 617-726-7422
>
> Bugs: surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
> FileDrop: https://gate.nmr.mgh.harvard.edu/filedrop2
> www.nmr.mgh.harvard.edu/facility/filedrop/index.html
> Outgoing: ftp://surfer.nmr.mgh.harvard.edu/transfer/outgoing/flat/greve/
>
> ___
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu 
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
>
>
> The information in this e-mail is intended only for the person to whom it
> is
> addressed. If you believe this e-mail was sent to you in error and the
> e-mail
> contains patient information, please contact the Partners Compliance
> HelpLine at
> http://www.partners.org/complianceline . If the e-mail was sent to you in
> error
> but does not contain patient information, please contact the sender and
> properly
> dispose of the e-mail.
>
>


Re: [Freesurfer] qcache for qdec longitudinal analysis

2016-08-23 Thread Martin Reuter
Hi Nick,

when using the 2-stage model (long_mris_slopes) you do not need the qcache. In 
qdec you will not look at thickness, but at the percent change or the rate of 
thickness change as computed by long_mris_slopes. For that to work, you need to 
create/modify the .qdecrc file so that qdec finds the files in the base. 
Furthermore, you need to pass a “cross-sectional” qdec table, with one row per 
subject, where the fsid contains the id of the base.

This is all described on the wiki page. 
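
As a hedged sketch of those two steps (flag names follow the longitudinal 
two-stage wiki page; the table path, the "years" time column, and the chosen 
measures are assumptions for illustration, not a prescription):

```shell
# 1) Compute per-subject rate / percent-change maps, stacked in each base.
#    qdec.table.dat and the "years" time column are hypothetical names.
long_mris_slopes --qdec ./qdec.table.dat --meas thickness \
    --do-rate --do-spc --time years --qcache fsaverage --sd "$SUBJECTS_DIR"

# 2) Then point qdec at the new measures instead of plain thickness by
#    adding lines like these to the .qdecrc file:
#    MEASURE1 = long.thickness-rate
#    MEASURE2 = long.thickness-spc
```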

Best, Martin

> On Aug 22, 2016, at 10:24 PM, Nick Corriveau Lecavalier  
> wrote:
> 
> Hi Freesurfer team,
> 
> I am planning on following this page for my longitudinal group analysis using 
> QDEC: http://www.freesurfer.net/fswiki/FsTutorial/QdecGroupAnalysis_freeview. 
> However, it tells me to run -qcache for all of my subjects. Since I already 
> ran the -long stream for all my subjects, do I need to rerun them all adding 
> the -qcache command? 
> 
> I have also run long_mris_slopes on all of my subjects, which produced 
> smoothed thickness files for all of them (all located in the ''base'' 
> directories). Are these files okay for my QDEC analysis, or do I really 
> need to run all of my subjects over again with -qcache?
> 
> Thank you very much,
> 
> Nick Corriveau-Lecavalier, B.Sc. (Hons.)
> Étudiant au Ph.D. recherche/intervention, option neuropsychologie clinique
> Coordonateur à la recherche, Association étudiante des cycles supérieurs en 
> psychologie de l'Université de Montréal (AÉCSPUM)
> Université de Montréal
> Centre de recherche de l'Institut universitaire de gériatrie de Montréal, 
> bureau M7819
> ___
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu 
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer 
> 


[Freesurfer] sig.sum files shows more clusters than sig.mgh image

2016-08-23 Thread Oana Georgiana Rus

Dear Freesurfer experts,

I have run an analysis on GM Volume in Freesurfer.

The contrast of interest was patients vs. controls (/command mri_glmfit/).

I then ran the /mri_surfcluster/ command to get a text file with the 
FDR-corrected significant clusters and their peak coordinates, resulting 
in a *sig.mgh image* and a *sig.sum file*.


*After looking at the sig.mgh image with tksurfer and searching for the 
significant clusters in the sig.sum files, I realized that one cluster 
which is pretty big is missing in the sig.mgh image but is present in 
the sig.sum file.*


*Do you know why this could happen? Why is the image incomplete?*

*Anybody encountered something similar?*

Because in the next step I want to extract the values of the significant 
clusters and correlate them with clinical scores, it would be good if 
the number of clusters in the result image and the result file coincide.


Thanks in advance for your help.

Best,

Georgiana




--
Oana Georgiana Rus
PhD Student
Neuroimaging Center TUM-NIC
Klinikum rechts der Isar
Technische Universität München
Einsteinstr.1
81675 München
Raum 5.8

Tel. 089 4140 7971



Re: [Freesurfer] voxel values in destrieux file

2016-08-23 Thread pierre deman
Ok, thanks a lot!
Do you know when it will be fixed in recon-all as well?

Cheers,
Pierre



On Fri, Aug 19, 2016 at 7:45 PM, Douglas Greve 
wrote:

> The error is in recon-all. If you want to fix this one outside of recon-all 
> you can run
> cd $SUBJECTS_DIR/subject/mri
> mri_aparc2aseg --s subject --volmask --aseg aseg.presurf.hypos --a2009s
>
>
> On 8/19/16 11:08 AM, pierre deman wrote:
>
> Hello,
>
> I am looking at the destrieux atlas generated by recon-all (I use the
> freesurfer dev version), the file aparc.a2009s+aseg.mgz, and sometimes the
> value of a voxel doesn't exist in the FreesurferColorLut. I have quite a
> lot of values between 2036 and 2100 (2038, 2074, 2075 for example) and I
> don't find these values in the FreeSurferColorLut.
>
> Is that normal? What do they correspond to?
>
> Regards,
> Pierre
>
>
> --
> DEMAN Pierre
> Mobile : +33 7 82 57 80 94
>
>
> ___
> Freesurfer mailing list
> freesur...@nmr.mgh.harvard.edu
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
>
>
>
> ___
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
>
>
> The information in this e-mail is intended only for the person to whom it
> is
> addressed. If you believe this e-mail was sent to you in error and the
> e-mail
> contains patient information, please contact the Partners Compliance
> HelpLine at
> http://www.partners.org/complianceline . If the e-mail was sent to you in
> error
> but does not contain patient information, please contact the sender and
> properly
> dispose of the e-mail.
>
>


-- 
DEMAN Pierre
Mobile : +33 7 82 57 80 94


[Freesurfer] permutation/monte carlo

2016-08-23 Thread maaike rive
Dear Douglas,


Sorry to bother you with this, but after reading the thread about the Eklund 
papers I'm getting confused about what I actually did when correcting for 
multiple comparisons, and I'd also like to know whether I'm on the safe side.


I used pre-cached Monte Carlo simulation with a cluster-forming threshold of 
0.05 (at least so I thought):
mri_glmfit --sim - --glmdir - --cache 1.3 abs --cache-dir - --cwpvalthresh 
0.01

Am I correct? And am I also correct that the FDR may be about 13% instead of 5% 
when doing this? (I smoothed thickness and surface area data with FWHM 10mm, 
but not the lGI data, because they were already very smooth and pre-cached 
simulation didn't work after additional smoothing).

Thanks, Maaike