Re: [HCP-Users] HCP-Users Digest, Vol 42, Issue 42

2016-05-25 Thread Javeria Hashmi
   1. Fw:Welcome to the "HCP-Users" mailing list (Xinyang Liu)
   2. Functional connectome fingerprinting (Shams, Boshra)
   3. Functional connectome fingerprinting (Shams, Boshra)
   4. Re: Functional connectome fingerprinting (Harms, Michael)


--

Message: 1
Date: Wed, 25 May 2016 21:03:41 +0800 (CST)
From: "Xinyang Liu" <xinyang_ie...@163.com>
Subject: [HCP-Users] Fw:Welcome to the "HCP-Users" mailing list
To: hcp-users@humanconnectome.org
Message-ID:
<7c5121ef.13f94.154e803745e.coremail.xinyang_ie...@163.com>
Content-Type: text/plain; charset="gbk"

Dear Sir/Madam,


Hello! I would like to join the "HCP-Users" mailing list and would be very
grateful if I could be added. Thank you very much! :)


Best regards,
Xinyang Liu
 Forwarding messages 
From: hcp-users-requ...@humanconnectome.org
Date: 2016-05-25 20:54:22
To:  xinyang_ie...@163.com
Subject: Welcome to the "HCP-Users" mailing list
Welcome to the HCP-Users@humanconnectome.org mailing list! PLEASE
NOTE: You must reply to this email to verify your subscription. For
security reasons, the list URL is no longer accessible to the public.

To post to this list, send your email to:

  hcp-users@humanconnectome.org

If you ever want to unsubscribe, visit the HCP list subscription page
at:

https://www.humanconnectome.org/contact/#subscribe

You must know your password to change your options (including changing
the password, itself) or to unsubscribe.  It is:

  digest

--

Message: 2
Date: Wed, 25 May 2016 09:57:12 +
From: "Shams, Boshra" <b.sh...@fz-juelich.de>
Subject: [HCP-Users] Functional connectome fingerprinting
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Message-ID:
<4d73639636efc94c82c2fb63ef4623f5e78...@mbx2010-e01.ad.fz-juelich.de>
Content-Type: text/plain; charset="iso-8859-1"

Dear all,

I am about to work through the implementation of the recent Functional
Connectome Fingerprinting study.
The authors mention that they used the Q2 release of HCP, which contains 142 subjects.
However, what I found in the HCP data set is that the Q2 release has only 66 subjects.
My guess is that they combined the Q1 and Q2 releases. Is that true?
And in the recently updated dataset, are Q1 and Q2 preprocessed with the same
pipeline release?

Best,
Boshra




Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt




--

Re: [HCP-Users] Functional connectome fingerprinting

2016-05-25 Thread Harms, Michael

Yes, the “Q2 release” included subjects originally released in “Q1”, so they probably combined the two in that paper.


If you are using subjects from the “S900” release, they have all been processed in the same manner.  It was only some of the very early releases for which some of the processing subsequently changed.  I posted a detailed message about that to the list not too long ago, which you should be able to find.


cheers,
-MH




-- 
Michael Harms, Ph.D.

---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.
St. Louis, MO 63110
Tel: 314-747-6173
Email: mha...@wustl.edu

From:  on behalf of "Shams, Boshra" 
Date: Wednesday, May 25, 2016 at 5:22 AM
To: "hcp-users@humanconnectome.org" 
Subject: [HCP-Users] Functional connectome fingerprinting






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users





 



The materials in this message are private and may contain Protected Healthcare
Information or other information of a sensitive nature. If you are not the
intended recipient, be advised that any unauthorized use, disclosure, copying
or the taking of any action in reliance on the contents of this information is
strictly prohibited. If you have received this email in error, please
immediately notify the sender via telephone or return mail.





Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-25 Thread Georgios Michalareas

Yes


On 5/25/2016 1:03 PM, Dev vasu wrote:

Dear Sir,


Are these requirements for RAM?


Thanks
Vasudev

On 25 May 2016 at 12:58, Georgios Michalareas wrote:


Hi,

the recommended memory for MEG pipelines is:

hcp_baddata         32 GB
hcp_icaclass        32 GB
hcp_tmegpreproc     32 GB
hcp_eravg           32 GB
hcp_tfavg           32 GB
hcp_srcavglcmv      16 GB
hcp_srcavgdics      16 GB
hcp_tmegconnebasic  16 GB

Best

Giorgos

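The table of recommended sizes can be turned into a quick local check before launching a stage. The sketch below is illustrative only: the helper names (`required_gb`, `mem_total_gb`) are made up for this example, and only the stage names and sizes come from the table above.

```shell
#!/bin/bash
# Hypothetical helper: check whether this machine has enough RAM for a
# given MEG pipeline stage, using the sizes recommended above.

required_gb() {
  # Recommended memory per pipeline stage, in GB (from the table above).
  case "$1" in
    hcp_baddata|hcp_icaclass|hcp_tmegpreproc|hcp_eravg|hcp_tfavg) echo 32 ;;
    hcp_srcavglcmv|hcp_srcavgdics|hcp_tmegconnebasic) echo 16 ;;
    *) echo "unknown stage: $1" >&2; return 1 ;;
  esac
}

mem_total_gb() {
  # /proc/meminfo reports MemTotal in kB; convert to whole GB.
  awk '/^MemTotal:/ { printf "%d\n", $2 / 1024 / 1024 }' /proc/meminfo
}

stage=hcp_eravg                      # example stage
need=$(required_gb "$stage")
have=$(mem_total_gb)
echo "$stage needs ${need} GB; this machine has ${have} GB RAM"
if [ "$have" -lt "$need" ]; then
  echo "warning: not enough RAM for $stage"
fi
```

Note that swap space does not help much here: MATLAB's ft_read_cifti needs the data resident in memory, so physical RAM is what matters.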
On 5/25/2016 12:02 PM, Dev vasu wrote:

Dear Sir,


How much working memory is needed to run the tasks in the MEG pipeline?
Most often I run into the following error:

"Out of memory. Type HELP MEMORY for your options.
Error in ft_read_cifti (line 362)
Error in megconnectome (line 129)"

I have 14.5 GB of Linux swap space and 3.9 GB of RAM:

"vasudev@vasudev-OptiPlex-780:~$ grep MemTotal /proc/meminfo
MemTotal:        3916512 kB"

Could you please let me know whether this is sufficient to run the
pipelines?



Thanks
Vasudev


On 24 May 2016 at 20:34, Timothy B. Brown wrote:

You will then need to learn how to write a script to be submitted to the
SLURM scheduler.
I am not familiar with the SLURM scheduler, but from very briefly looking
at the documentation that you supplied a link to, I would think that the
general form of a script for the SLURM scheduler would be:

#!/bin/bash
#  ... SLURM scheduler directives ... e.g. #SBATCH ... telling the system
#      such things as how much memory to expect to use and how long you
#      expect the job to take to run
#  ... initialization of the modules system ... e.g. source /etc/profile.d/modules.sh
#  ... loading of the required software modules ... e.g. module load fsl
... command to run the HCP Pipeline script you want to run (e.g.
Structural Preprocessing, MEG processing, etc.) for the subject and scans
you want to process

Once you've written such a script (for example, named myjob.cmd), it
appears that you would submit the job using a command like:

sbatch myjob.cmd

At the link that you provided, there is a section titled "Introductory
Articles and Tutorials by LRZ". I would suggest you follow the links
provided in that section, read that documentation, and submit any
questions you have to the service desk for the LRZ (a link to the service
desk is also on the page you supplied a link to).
  Tim
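The outline above can be sketched as a concrete file. This is a minimal illustration, not a tested LRZ configuration: the job name, memory, time limit, and the `module load fsl` line are placeholder values, and the actual pipeline command is left as a comment.

```shell
#!/bin/bash
# Write a minimal SLURM batch script of the general shape described above.
# All #SBATCH values and the module name are placeholders for your cluster.

cat > myjob.cmd <<'EOF'
#!/bin/bash
#SBATCH --job-name=hcp_meg   # SLURM directives: what the job needs
#SBATCH --mem=32G            # how much memory you expect to use
#SBATCH --time=08:00:00      # how long you expect the job to take

source /etc/profile.d/modules.sh   # initialize the modules system
module load fsl                    # load the required software modules

# ... command to run the HCP pipeline stage for your subject and scans
EOF

# Submit with:
#   sbatch myjob.cmd
```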
On Tue, May 24, 2016, at 13:00, Dev vasu wrote:

Dear Sir,
I have the cluster available; here is the link to it:
http://www.lrz.de/services/compute/linux-cluster/


Thanks
Vasudev



On 24 May 2016 at 19:43, Timothy B. Brown wrote:

Do you already have a cluster set up and available, or
are you looking to set up the cluster?
If you are looking to set up a cluster, do you have the
group of machines that you want to use for the cluster
already available or are you thinking of setting up a
cluster "in the cloud" (e.g. a group of Amazon EC2
instances)?
  Tim
On Tue, May 24, 2016, at 12:18, Dev vasu wrote:

Dear Sir,
Currently I am running the HCP Pipelines on a standalone computer, but I
would like to set up the pipelines on a Linux cluster. If possible,
could you please provide some details about the procedure I should
follow?


Thanks
Vasudev

___
HCP-Users mailing list
HCP-Users@humanconnectome.org

http://lists.humanconnectome.org/mailman/listinfo/hcp-users


--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu 

The material in this message is private and may contain
Protected Healthcare Information (PHI).
If you are not the intended recipient, be advised that
any unauthorized use, disclosure, copying or the taking
of any action in reliance on the contents of this
information is strictly prohibited.
If you have received this email in error, please
immediately notify the sender via telephone or return mail.
