Hello Luca,

  This is an active area of development for the core team. It is
fairly ingrained in Galaxy that the workers and the web server share a
networked file system, so this is a real challenge. I would review
the following recent thread:

http://dev.list.galaxyproject.org/Managing-Data-Locality-td4662438.html

  If you are interested in pushing the cutting edge of Galaxy job
running development, you could consider setting up an LWR server on
your cluster login node. This is not well documented yet, and I would
only recommend it for a few tools at a time; I am not convinced it is
a general-purpose replacement for traditional job runners at this
time. For instance, there is no integration with the Tool Shed yet.

Hope this helps,
-John


On Sat, Feb 1, 2014 at 3:23 AM, Luca Toldo <lucato...@gmail.com> wrote:
> Dear all,
> My Galaxy server can NFS-share its disks with the head node of the cluster;
> however, the compute nodes cannot see those disks, since they only see
> what the head node provides.
>
> My cluster nodes are diskless.
>
> I'd appreciate advice if someone has already installed Galaxy in such a
> configuration.
> Unfortunately, installing galaxy-dev on the head node of the cluster is not
> an option.
>
> Looking forward to your advice.
> Luca
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/