Even if you just have two servers, I would strongly recommend setting up a
cluster distributed resource manager (DRM) such as SLURM, PBS, or Condor,
and ensuring there is a shared file system between Galaxy and the node
running the jobs. If you configure one of these, you wouldn't even need to
use the CLI job runner - in most cases you could just use the DRMAA job
runner directly, which is what most people use to run Galaxy jobs across
machines on a cluster.
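
Just to sketch the shape of it (the DRMAA runner plugin is the standard
one, but the destination id and the nativeSpecification value below are
placeholders you would adjust for your own scheduler - this is not a
tested config), a DRMAA-based job_conf.xml would look roughly like:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="cluster_default">
        <destination id="cluster_default" runner="drmaa">
            <!-- extra options handed to the DRM; the exact syntax depends on your scheduler -->
            <param id="nativeSpecification">--time=24:00:00</param>
        </destination>
    </destinations>
</job_conf>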

If you wish to send Galaxy jobs to a single remote machine without setting
up a DRM, or if a shared file system is impossible, you can use Pulsar
(http://pulsar.readthedocs.org/en/latest/) for most kinds of jobs (some
jobs, such as data source jobs and upload jobs, should remain on the
Galaxy host in such a configuration).
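
To give a concrete sketch (the Pulsar URL, port, and token below are
placeholders for your own Pulsar server, so again treat this as the
general shape rather than a tested config), a Pulsar-based job_conf.xml
could look along these lines, with upload jobs pinned to the local runner:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <plugin id="pulsar" type="runner" load="galaxy.jobs.runners.pulsar:PulsarRESTJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="remote_pulsar">
        <destination id="local" runner="local"/>
        <destination id="remote_pulsar" runner="pulsar">
            <!-- URL of the Pulsar web server running on server B (placeholder) -->
            <param id="url">http://serverB:8913/</param>
            <!-- shared secret matching the one configured on the Pulsar side (placeholder) -->
            <param id="private_token">changeme</param>
        </destination>
    </destinations>
    <tools>
        <!-- keep upload (and similar) jobs on the Galaxy host itself -->
        <tool id="upload1" destination="local"/>
    </tools>
</job_conf>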

-John

On Wed, Jan 20, 2016 at 5:18 PM, D K <danielforti...@gmail.com> wrote:
> Hi,
>
> I would like to run the Galaxy framework on server A, while performing all
> of the jobs on server B using ssh.
>
> Looking at the documentation here:
> https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#CLI, this
> seems like it should be possible. However, the documentation states that the
> cli runner requires, at a minimum, two parameters: one for the shell (for
> which I'm selecting SecureShell) and a job plugin. I'm not sure what this
> should be, since the ones available are Torque, Slurm, and SlurmTorque, and
> I'm not running any of these. Can anyone give me any hints? My current job_conf.xml
> looks like this:
>
> <?xml version="1.0"?>
> <job_conf>
>     <plugins>
> <!--    <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/> -->
>         <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner"/>
>     </plugins>
>     <handlers>
>         <handler id="main"/>
>     </handlers>
>     <destinations default="cli_default">
>         <destination id="cli_default" runner="cli">
>            <param id="shell_plugin">SecureShell</param>
>            <param id="job_plugin">cli</param>
>            <param id="shell_hostname">computehost</param>
>            <param id="shell_username">galaxy</param>
>         </destination>
>     </destinations>
> </job_conf>
>
>
> As an alternative, in a tool xml I tried directly sshing between the
> <command> tags e.g.:
>
>   <command>ssh galaxy@serverB 'sh $__tool_directory__/runTool.sh --verbose > $output 2>&amp;1'</command>
>
> This appears to work. Is it possible to "inject" this ssh command into tool
> execution in Galaxy?
>
> I was looking to do this to avoid having to install a scheduler like SGE or
> use Pulsar. Any suggestions would be greatly appreciated.
>
> Thanks!
>
>
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
