Thanks to everybody, you are being very helpful. Much appreciation.
___
Marta Moreno-Ortega, Ph.D.
Postdoctoral Research Fellow
Division of Experimental Therapeutics
New York State Psychiatric Institute
Department of Psychiatry
Columbia University College of Physicians and Surgeons
You’ll want to make sure you’re using the latest version of the main HCP public release, rather than the tutorial data. You can run wb_command -cifti-correlate to compute functional connectivity from the dense timeseries data. If you want to combine across runs, you should demean the data first.
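A minimal sketch of that workflow with wb_command (the run and file names here are illustrative, not the exact HCP package names — adjust them to your data):

```shell
# Demean each run, concatenate along time, then correlate.
for run in rfMRI_REST1_LR rfMRI_REST1_RL; do
  # Temporal mean of each grayordinate
  wb_command -cifti-reduce ${run}_Atlas.dtseries.nii MEAN ${run}_mean.dscalar.nii
  # Subtract that mean from every timepoint
  wb_command -cifti-math '(x - mean)' ${run}_demean.dtseries.nii \
      -var x ${run}_Atlas.dtseries.nii \
      -var mean ${run}_mean.dscalar.nii -select 1 1 -repeat
done

# Concatenate the demeaned runs along the time dimension
wb_command -cifti-merge REST1_demean.dtseries.nii \
    -cifti rfMRI_REST1_LR_demean.dtseries.nii \
    -cifti rfMRI_REST1_RL_demean.dtseries.nii

# Dense functional connectivity (add -fisher-z if you want z-transformed values)
wb_command -cifti-correlate REST1_demean.dtseries.nii REST1.dconn.nii
```

Note that a full dense connectome is large; -cifti-correlate accepts a -mem-limit option if memory is a concern.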
Hi all,
I have run the tutorial data through the HCP Pipelines (v3.4.0), but I wish to
further process the resting-state data. I am not sure how to proceed, however.
The preprocessed data are in two places
(Pipelines_ExampleData/100307/rfMRI_REST1_RL{LR}, and
You don’t need to rerun ICAFIX, the HCP has already done that for you in the cleaned packages. If you want to check over the classification results, you can download the extended packages.
Note I didn’t actually state an opinion for or against global signal regression there, so if you
Fair enough. And the movement signal regression?
From: Glasser, Matthew [mailto:glass...@wusm.wustl.edu]
Sent: Thursday, February 05, 2015 11:31 AM
To: Hoptman, Matthew; 'hcp-users@humanconnectome.org'
Cc: Elam, Jennifer
Subject: Re: [HCP-Users] probably obvious question, but . . .
You don't
Hi,
I’m starting to use the HCP Pipelines on the WashU CHPC cluster, and got a bit
confused about how jobs are submitted to it, so I’d like to confirm a few
details.
As far as I could tell, since the cluster takes PBS jobs, and because fsl_sub
works only with Sun Grid Engine(?), the way
I contacted Malcolm at the CHPC to ask about this already, and he told me that
fsl_sub had been somewhat customized. Yet, when I tried to execute the scripts
without the --runlocal option, I got an error and the job was stopped.
I guess I’m trying to figure out if the --runlocal flag is the best
This might be a better question for the mailing list of this specific
cluster, but I believe the version of fsl_sub on the Wash U cluster still
works with PBS (I helped modify it to do this some years ago).
Peace,
Matt.
On 2/5/15, 11:48 AM, Ramirez San Martin, Carolina
fsl_sub should work with our cluster via PBS in almost all cases*
Perhaps the authors of the PreFreeSurferPipelineBatch.sh script could comment
on what the -runlocal flag is intended to do and whether it's been tested on
our cluster?
Malcolm
*it doesn't properly support submitting to the GPU
Hi Carolina,
In the example scripts provided with the HCP Pipelines, the --runlocal
flag is intended to be an indication that you do **not** want to submit
a job to a scheduler or grid engine (whether PBS or SGE). If you specify
the --runlocal command line option when invoking one of the example
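A rough sketch of the dispatch pattern Tim describes — when --runlocal is given, the pipeline script is executed directly on the current node; otherwise it is handed to fsl_sub for submission to the scheduler. The variable names and queue here are assumptions for illustration, not the batch scripts' exact code:

```shell
# Illustrative only: mirrors the dispatch logic of the example batch scripts.
if [ -n "${RUNLOCAL}" ]; then
    # --runlocal: no scheduler, run on the node you are logged into
    queuing_command=""
else
    # Submit via fsl_sub (queue name is a placeholder for your site's queue)
    queuing_command="${FSLDIR}/bin/fsl_sub -q long.q"
fi

${queuing_command} "${HCPPIPEDIR}/PreFreeSurfer/PreFreeSurferPipeline.sh" \
    --path="${StudyFolder}" --subject="${Subject}"
</imports>
```

Because ${queuing_command} expands to nothing in the local case, the same invocation line serves both modes.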
Hi Tim,
That answers my question. I wondered if the '--runlocal' option forced running
the program on the cluster's login node or if it would still accept PBS job
options. I’ll change the queuing_command variable, then.
As a new user of the Pipelines and the cluster, this has been the only confusing