Hello Leonor,
the log you sent indicates that you are picking up Pulsar from
/usr/local/lib.
That should not happen if you are running Galaxy in a virtualenv.
Apart from that, you did not mention whether you are able to submit SLURM
jobs from the command line.
That is a prerequisite for launching jobs through Galaxy.
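As a quick sanity check (a hedged sketch; run it on the Galaxy VM as the user Galaxy runs as, and adjust to your site), something like this confirms the SLURM client tools are reachable before debugging Galaxy itself:

```shell
# Hedged sketch: verify SLURM submission works from the Galaxy VM
# before touching the Galaxy configuration.
if command -v sbatch >/dev/null 2>&1; then
    echo "sbatch found at: $(command -v sbatch)"
    # Uncomment to actually submit a trivial test job and watch the queue:
    # sbatch --wrap="hostname"
    # squeue -u "$USER"
else
    echo "sbatch not found in PATH"
fi
```

If `sbatch` is missing or the test job never runs, the problem is in the cluster/VM setup, not in Galaxy.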
Dear all,
we are struggling with the basics of our Galaxy/SLURM configuration.
- Galaxy is installed on a virtual machine that is physically
independent from our cluster, but sits on a shared filesystem that is
also mounted on the cluster.
- Our cluster is running SLURM and has 'slurm-drmaa' (Poznan
It doesn't hurt to try this, but I don't think that will solve the problem.
Just to be sure: are the basics working? Can you submit jobs via sbatch?
How did you compile/install slurm-drmaa?
Also it looks like drmaa-python is being used from /usr/local/... .
Are you running Galaxy in a
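One way to check where drmaa-python is coming from (a hedged sketch; the virtualenv path is an assumption, so substitute your instance's actual `.venv`, and invoke that interpreter rather than the system `python3`):

```shell
# Hedged sketch: print which drmaa-python installation the interpreter
# picks up. Run with the Python inside Galaxy's virtualenv, e.g.
#   /path/to/galaxy/.venv/bin/python   (path is an assumption)
python3 - <<'EOF'
import importlib.util

spec = importlib.util.find_spec("drmaa")  # the drmaa-python package
if spec is None:
    print("drmaa-python is not importable from this interpreter")
else:
    # A path under /usr/local/... here means a system-wide copy is
    # shadowing (or substituting for) the virtualenv's installation.
    print("drmaa loaded from:", spec.origin)
EOF
```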
Hi Marius,
yes, we are using the one from Poznan. Should we give it a try with the
fork?
Best
Leonor
Leonor Palmeira | PhD
Associate Scientist
Department of Human Genetics
CHU de Liège | Domaine Universitaire du Sart-Tilman
4000 Liège | BELGIUM
Tel: +32-4-366.91.41
Fax: +32-4-366.72.61
e-mail:
Dear all,
we are struggling with the Galaxy documentation to understand how our VM
(with our Galaxy instance running perfectly locally) should be
configured to submit jobs to our SLURM cluster.
We have a shared filesystem named /home/mass/GAL between the Cluster and
the VM.
Dear all,
we have set up a Galaxy instance on a virtual machine, and we want to be
able to submit jobs to our HPC system (SLURM).
Currently, we do not understand how to configure Galaxy so that jobs are
sent to the HPC cluster.
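One common way to express this is through Galaxy's job configuration file (a minimal sketch in the XML-style job_conf.xml; the destination id and the nativeSpecification value below are illustrative assumptions, not taken from your setup):

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- Load Galaxy's SLURM job runner (uses DRMAA under the hood). -->
        <plugin id="slurm" type="runner"
                load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="slurm_cluster">
        <!-- Route jobs to the cluster; extra sbatch options go in
             nativeSpecification. The walltime here is only an example. -->
        <destination id="slurm_cluster" runner="slurm">
            <param id="nativeSpecification">--time=01:00:00</param>
        </destination>
    </destinations>
</job_conf>
```

Galaxy also needs DRMAA_LIBRARY_PATH pointing at the slurm-drmaa library in the environment it is started from.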
We have set:
export DRMAA_LIBRARY_PATH=/var/lib/libdrmaa.so
This is our
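A minimal check of that setting (a sketch, reusing the path from the mail above; note that an assignment takes no `$` on the left-hand side):

```shell
# Hedged sketch: export the DRMAA library path (no '$' when assigning)
# and confirm the file actually exists before starting Galaxy.
export DRMAA_LIBRARY_PATH=/var/lib/libdrmaa.so
if [ -f "$DRMAA_LIBRARY_PATH" ]; then
    echo "libdrmaa found: $DRMAA_LIBRARY_PATH"
else
    echo "libdrmaa NOT found at: $DRMAA_LIBRARY_PATH"
fi
```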