Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-25 Thread Georgios Michalareas
Yes. On 5/25/2016 1:03 PM, Dev vasu wrote: Dear Sir, are these requirements for RAM? Thanks, Vasudev. On 25 May 2016 at 12:58, Georgios Michalareas wrote: Hi, the recommended memory for MEG

Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-25 Thread Dev vasu
Dear Sir, are these requirements for RAM? Thanks, Vasudev. On 25 May 2016 at 12:58, Georgios Michalareas <giorgos.michalar...@esi-frankfurt.de> wrote: Hi, the recommended memory for MEG pipelines is: hcp_baddata 32 GB, hcp_icaclass 32 GB,

Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-25 Thread Georgios Michalareas
Hi,

the recommended memory for the MEG pipelines is:

hcp_baddata          32 GB
hcp_icaclass         32 GB
hcp_tmegpreproc      32 GB
hcp_eravg            32 GB
hcp_tfavg            32 GB
hcp_srcavglcmv       16 GB
hcp_srcavgdics       16 GB
hcp_tmegconnebasic   16 GB

Best
Giorgos
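
When these jobs are submitted through a batch scheduler (SLURM is discussed further down in this thread), the figures above map directly onto per-job memory requests. A hedged illustration, with assumed script names that are not part of the original reply:

    sbatch --mem=32G hcp_baddata.slurm      # stages listed above at 32 GB
    sbatch --mem=16G hcp_srcavglcmv.slurm   # stages listed above at 16 GB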

Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-25 Thread Dev vasu
Dear Sir, how much working memory is needed to run the tasks in the MEG pipeline? Most often I am encountering the following error: "Out of memory. Type HELP MEMORY for your options. Error in ft_read_cifti (line 362). Error in megconnectome (line 129)". I have 14.5 GB of Linux swap space, and 3.9
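
Before relying on swap, it may help to compare the machine's physical RAM against the 16-32 GB per stage quoted in the reply above; a quick check on Linux (standard commands, not part of the original message):

    free -h                      # physical RAM and swap currently available
    grep MemTotal /proc/meminfo  # total installed RAM in kB

MATLAB generally needs the recommended amount as physical RAM; swap alone is usually far too slow to substitute for it.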

Re: [HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-24 Thread Timothy B. Brown
You will then need to learn how to write a script to be submitted to the SLURM scheduler. I am not familiar with the SLURM scheduler, but from very briefly looking at the documentation that you supplied a link to, I would think that the general form of a script for the SLURM scheduler would be:
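
A minimal sketch of such a SLURM submission script, assuming the megconnectome command already used on the standalone machine and the 32 GB memory figure quoted earlier in this thread; the job name, time limit, and paths below are placeholders, not values from the original message:

    #!/bin/bash
    #SBATCH --job-name=hcp_baddata          # example: the bad-data MEG pipeline stage
    #SBATCH --mem=32G                       # memory recommendation from this thread
    #SBATCH --cpus-per-task=1
    #SBATCH --time=24:00:00                 # placeholder wall-clock limit
    #SBATCH --output=hcp_baddata_%j.log     # SLURM writes stdout/stderr here

    # Replace the command below with the exact megconnectome call you already
    # use on the standalone machine; the paths shown are illustrative only.
    /path/to/megconnectome.sh /path/to/MATLAB_Runtime /path/to/hcp_baddata_script.m

The script would then be submitted with "sbatch hcp_baddata.slurm", with the --mem request adjusted per pipeline stage to match the figures listed earlier in the thread.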

[HCP-Users] Setting up HCP-MR /MEG Pipelines in HPC Cluster

2016-05-24 Thread Dev vasu
Dear Sir, currently I am running the HCP Pipelines on a standalone computer, but I would like to set up the pipelines on a Linux cluster. If possible, could you please provide me some details concerning the procedures that I have to follow? Thanks, Vasudev