Hi Lea,
sorry for the late reply, just in case this is still of interest:
1. It is possible to process all cross data with 5.3 and then base and
long with 6.0. As long as you do the same for all subjects it should be
fine; just don't mix versions across subjects. Also depending on the size if only
Hi Martin,
thank you so much for your detailed answer. I will see how we can implement
these modifications.
Just two quick follow-up questions:
1) I have half of the data already processed with the CROSS step using FS
5.3.0. Is it possible to use this data when running the TEMPLATE and LONG
It could be that all that is needed is to add this:
mycluster = 'sbatch --mail-type=FAIL --mail-user= -N 1
--ntasks-per-node=10 --partition=work --export=ALL -o
/%(username)s/logs/job.%J.out -e
/%(username)s/logs/job.%J.err "%(command)s"'
and comment out the lines
# if queue is not None ...
# pbcmd =
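A minimal sketch of what a SLURM-adapted submit function could look like, built around a template string like the one above. All names here (SLURM_TEMPLATE, submit, the dry_run flag) are illustrative assumptions, not the actual API of the FreeSurfer script; note that the literal %J in the log file names has to be escaped as %%J so Python's %-formatting leaves it alone:

```python
import subprocess

# Hypothetical sbatch template mirroring the snippet above; %%J is escaped
# so %-formatting passes a literal %J through to SLURM's log file names.
SLURM_TEMPLATE = (
    'sbatch --mail-type=FAIL -N 1 --ntasks-per-node=10 '
    '--partition=work --export=ALL '
    '-o /%(username)s/logs/job.%%J.out '
    '-e /%(username)s/logs/job.%%J.err '
    '--wrap="%(command)s"'
)

def submit(command, username, dry_run=True):
    """Fill the template and (optionally) hand the job line to sbatch."""
    full = SLURM_TEMPLATE % {"username": username, "command": command}
    if dry_run:
        return full  # inspect the command line before actually submitting
    subprocess.run(full, shell=True, check=True)
    return full
```

With dry_run=True the function only returns the assembled command line, which makes it easy to check the substitutions before pointing it at a real cluster.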
hi lea,
in addition you could consider using the bids app for freesurfer:
https://github.com/BIDS-Apps/freesurfer
and then submit each subject's longitudinal pipeline as a separate slurm
job.
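A per-subject submission along these lines could be sketched as below. The subject IDs, partition, and helper function name are hypothetical; the recon-all -long call follows FreeSurfer's longitudinal syntax (timepoint, then template). The loop only prints the commands (a dry run) rather than submitting them:

```shell
# build_long_cmd TIMEPOINT TEMPLATE - assemble one sbatch command line
# (hypothetical helper; partition and options are placeholders)
build_long_cmd() {
  echo "sbatch --partition=work --ntasks-per-node=10 --wrap=\"recon-all -long $1 $2 -all\""
}

# One SLURM job per subject's longitudinal run; echo instead of submit
for subj in sub-01 sub-02; do
  build_long_cmd "$subj" "${subj}_template"
done
```

Replacing the echo inside build_long_cmd with the actual sbatch invocation turns the dry run into real submissions.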
cheers,
satra
On Thu, Feb 22, 2018 at 10:43 AM, Martin Reuter wrote:
Hi Lea,
I wrote that script to simplify processing on our cluster which uses
qsub (PBS) for submission. I have never used SLURM so far.
There are two functions that you (or someone who knows this stuff) would
need to modify:
def submit
and
def wait_jobs
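For the wait_jobs side, a SLURM replacement could poll squeue until none of the submitted job IDs remain in the queue. This is a sketch under assumptions, not the script's actual code; the function and helper names are invented, and only the output parsing is separated out so it can be checked without a cluster:

```python
import getpass
import subprocess
import time

def still_queued(job_ids, squeue_output):
    """Return the subset of job_ids present in `squeue --format=%i` output."""
    queued = set(squeue_output.split())
    return {j for j in job_ids if j in queued}

def wait_jobs(job_ids, poll_seconds=30):
    """Block until all submitted SLURM job IDs have left the queue."""
    pending = set(str(j) for j in job_ids)
    while pending:
        out = subprocess.run(
            ["squeue", "--noheader", "--format=%i",
             "--user", getpass.getuser()],
            capture_output=True, text=True, check=True,
        ).stdout
        pending = still_queued(pending, out)
        if pending:
            time.sleep(poll_seconds)
```

The original script presumably does the equivalent with qstat; only the queue-query command and its output format change.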
The submit procedure basically