Re: [HCP-Users] diffusion data merge pipeline

2017-07-19 Thread Glasser, Matthew
We don’t recommend splitting the diffusion preprocessing up; eddy_cuda itself is fine.

Peace,

Matt.

From: Yeun Kim <yeun...@gmail.com>
Date: Wednesday, July 19, 2017 at 7:24 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: "Harms, Michael" <mha...@wustl.edu>,
"hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] diffusion data merge pipeline

Are you recommending against the parallel processing, or against eddy_cuda?

On Tue, Jul 18, 2017 at 2:13 PM, Glasser, Matthew <glass...@wustl.edu> wrote:
We really don’t recommend you do that.  I would ask about eddy_cuda on the
FSL or NeuroDebian lists.

Peace,

Matt.

From: Yeun Kim <yeun...@gmail.com>
Date: Tuesday, July 18, 2017 at 4:11 PM
To: "Harms, Michael" <mha...@wustl.edu>
Cc: Matt Glasser <glass...@wustl.edu>,
"hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] diffusion data merge pipeline

Thanks for the reply.
I would like to reduce the computing time for DiffusionPreprocessing: feeding
in both sets of dMRIs at once takes about 24.16 hours to run, while processing
the two sets in parallel takes 10.72 hours.
I've been trying to retrieve the eddy_cuda version, but I can't find it in the
NeuroDebian packages. Where can I find pre-compiled binaries of eddy_cuda?
The platform I am using in my Docker container is Ubuntu 14.04 LTS.
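
One quick way to check which eddy variants a given FSL installation actually ships is simply to list them; the binary names in the comment below are assumptions and vary by FSL release:

ls ${FSLDIR}/bin/eddy*
# Typical output for a GPU-capable FSL install (release-dependent):
#   eddy_openmp  eddy_cuda7.0  eddy_cuda8.0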

Thank you again,
Yeun

On Mon, Jul 17, 2017 at 11:57 AM, Harms, Michael <mha...@wustl.edu> wrote:

Hi,
Is there a particular reason that you can’t provide all the dMRI scans at once,
and let the pipeline handle the merging for you?
If you process each dMRI run separately, then the individual runs will not be
in optimal alignment.  (You would be relying on the registration of each run to
the T1, rather than registering the dMRI runs directly to each other as part of
‘eddy’.)
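
Concretely, “all at once” means a single DiffPreprocPipeline.sh call whose --posData/--negData arguments are @-separated lists of runs. A minimal sketch, assuming PA is the “positive” phase-encoding direction for PEdir=2 here, with placeholder variables for the four NIfTI files:

${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh \
  --posData="${dir98_PA}@${dir99_PA}" \
  --negData="${dir98_AP}@${dir99_AP}" \
  --path="${StudyFolder}" \
  --subject="${Subject}" \
  --echospacing="${echospacing}" \
  --PEdir=2 \
  --gdcoeffs="NONE" \
  --dwiname="Diffusion" \
  --printcom=""

With a single call like this, the pipeline itself merges all four series and eddy registers them to each other, producing one ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz.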

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.          Tel: 314-747-6173
St. Louis, MO  63110           Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Yeun Kim <yeun...@gmail.com>
Date: Monday, July 17, 2017 at 1:32 PM
To: "Glasser, Matthew" <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] diffusion data merge pipeline

I am using the following call (it is looped over the unique sets of gradient
tables, i.e. it runs twice, once for dir98 and once for dir99):
${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh  \
  --posData="{posData}" \
  --negData="{negData}"  \
  --path="{path}" \
  --subject="{subject}"  \
  --echospacing="{echospacing}"  \
  --PEdir={PEdir}  \
  --gdcoeffs="NONE"  \
  --dwiname="{dwiname}"  \
  --printcom=""

Where:
$posData = diffusion data in the positive phase-encoding direction
$negData = diffusion data in the negative phase-encoding direction
$path = output directory path
$echospacing = echo spacing of the diffusion acquisition
$PEdir = 2 (phase encoding along AP/PA)
$dwiname = name for the output directory, e.g. Diffusion_dir-98_run-01


FYI: I'm using HCPPipelines v3.17.

-

Technical details:

I run ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh in a Docker
container with the following Python code. It is looped over the unique sets of
gradient tables (i.e. it runs twice, once for dir98 and once for dir99), and
the runs are launched in parallel:

# imports used by this snippet
from collections import OrderedDict
from functools import partial
from multiprocessing import Process

# partial() binds the per-run arguments; each selected stage is launched in
# its own process (one per gradient-table set), so the dir98 and dir99 runs
# execute in parallel.
dwi_stage_dict = OrderedDict([("DiffusionPreprocessing",
                               partial(run_diffusion_processsing,
                                       posData=pos,
                                       negData=neg,
                                       path=args.output_dir,
                                       subject="sub-%s" % subject_label,
                                       echospacing=echospacing,
                                       PEdir=PEdir,
                                       gdcoeffs="NONE",
                                       dwiname=dwiname,
                                       n_cpus=args.n_cpus))])
for stage, stage_func in dwi_stage_dict.iteritems():  # Python 2 idiom
    if stage in args.stages:
        Process(target=stage_func).start()

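The two Process() launches above are equivalent to backgrounding two pipeline calls in the shell; a sketch with placeholder variables, flags as in the template earlier in this message:

${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh --posData="${pos98}" \
  --negData="${neg98}" --path="${path}" --subject="${subject}" \
  --echospacing="${echospacing}" --PEdir=2 --gdcoeffs="NONE" \
  --dwiname="Diffusion_dir-98_run-01" --printcom="" &
${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh --posData="${pos99}" \
  --negData="${neg99}" --path="${path}" --subject="${subject}" \
  --echospacing="${echospacing}" --PEdir=2 --gdcoeffs="NONE" \
  --dwiname="Diffusion_dir-99_run-01" --printcom="" &
wait  # block until both runs finish
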
On Mon, Jul 17, 2017 at 11:15 AM, Glasser, Matthew <glass...@wustl.edu> wrote:
The pipeline is capable of doing the merge for you if you want.  Can you post 
how you called the diffusion pipeline?

Peace,

Matt.

From: Yeun Kim <yeun...@gmail.com>
Date: Monday, July 17, 2017 at 1:12 PM
To: Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] diffusion data merge pipeline

When I run DiffusionPreprocessing, I make the --dwiname=DWIName argument
specific to the diffusion scan (e.g. DWIName=Diffusion_dir-98_run-01) to
prevent files from being overwritten.
I end up with:
${StudyFolder}/${Subject}/T1w/Diffusion_dir-98_run-01/data.nii.gz
${StudyFolder}/${Subject}/T1w/Diffusion_dir-99_run-01/data.nii.gz

I would like to combine the two data.nii.gz files.
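
For illustration, a naive post-hoc merge of the two runs would look like the sketch below; note this is exactly what the replies above advise against, since the runs end up aligned only via their separate T1 registrations. The Diffusion_combined directory name is a placeholder:

cd ${StudyFolder}/${Subject}/T1w
mkdir -p Diffusion_combined
# Concatenate the 4D series along time, then the gradient tables column-wise.
fslmerge -t Diffusion_combined/data.nii.gz \
  Diffusion_dir-98_run-01/data.nii.gz Diffusion_dir-99_run-01/data.nii.gz
paste -d' ' Diffusion_dir-98_run-01/bvals Diffusion_dir-99_run-01/bvals > Diffusion_combined/bvals
paste -d' ' Diffusion_dir-98_run-01/bvecs Diffusion_dir-99_run-01/bvecs > Diffusion_combined/bvecs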

On Mon, Jul 17, 2017 at 10:58 AM, Glasser, Matthew <glass...@wustl.edu> wrote:
Look for the ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz file.
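
As a quick sanity check that a combined run finished, one can list that directory; the exact file set shown is an assumption here and varies with pipeline version and options:

ls ${StudyFolder}/${Subject}/T1w/Diffusion/
# e.g. data.nii.gz  bvals  bvecs  nodif_brain_mask.nii.gz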

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Yeun Kim <yeun...@gmail.com>
Date: Monday, July 17, 2017 at 12:56 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] diffusion data merge pipeline

Hi,

We have the diffusion scans:
dMRI_dir98_AP, dMRI_dir98_PA
dMRI_dir99_AP, dMRI_dir99_PA

in which there is a pair of phase-encoding directions (AP, PA) for each of two
sets of diffusion-weighting directions (dir98 and dir99).

After running the DiffusionPreprocessing module of the HCP minimal 
preprocessing pipeline, I would like to merge the processed dMRI_dir98 and 
dMRI_dir99 data. Do you have any suggestions on how to perform this step? Also, 
are there any workflows developed by HCP for post-DiffusionPreprocessing?

Thank you,
Yeun

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
