Hi all,

So far we've been running our local Galaxy instance on
a single machine, but I would like to offload (some) jobs
onto our local SGE cluster. I've been reading:
https://bitbucket.org/galaxy/galaxy-central/wiki/Config/Cluster

Unfortunately, in our setup the SGE cluster head node is
a different machine from the Galaxy server, and the two do
not (currently) have a shared file system. Within the
cluster itself, the head node and the compute nodes do
share a file system.

Therefore we will need some way of copying the input data
from the Galaxy server to the cluster, running the job
there, and, once it is done, copying the results back to
the Galaxy server.

The "Staged Method" on the wiki sounds relevant, but
appears to be for TORQUE only (via pbs_python), not
any of the other back ends (via DRMAA).
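
For reference, my understanding from the wiki is that the
staged method is switched on via the pbs_* staging options
in universe_wsgi.ini, along these lines (quoting from
memory, so please treat the exact option names and values
as approximate):

  # Hypothetical excerpt from universe_wsgi.ini; option
  # names recalled from the wiki, values invented.
  pbs_application_server = galaxy.example.org
  pbs_stage_path = /cluster/scratch/galaxy
  pbs_dataset_server = galaxy.example.org

I can't see an equivalent set of options for the DRMAA
runner, hence the question.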

Have I overlooked anything on the "Cluster" wiki page?

Has anyone attempted anything similar, and could you
offer any guidance or tips?

Thanks,

Peter