On Tue, May 13, 2014 at 11:11 AM, Guillaume Penderia
<g.pende...@gmail.com> wrote:
> Hi,
>
> I am currently working on a workflow with some custom tools, and one of
> these tools has to create very big temporary files (around 45 GB each).
> As this workflow will be run on a lot of files at the same time, I have to
> keep that tool from running more than once or twice at a time (the other
> executions would wait in the queue). If I don't, I'm afraid that running out
> of memory or something similar could cause all the executions to fail.
>
> The problem is: I can't find out whether this is possible, and if it is, how
> to do it.
>
> Does anyone have an idea, please?

If you are using a cluster, one idea would be to set up a dedicated
queue for these big jobs, configured to ensure only one runs at a
time. Or at least, only one per cluster node.
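
On the Galaxy side, the routing would be done in job_conf.xml. The
sketch below is only a rough example, assuming a DRMAA-managed cluster
(e.g. SGE) where a queue called bigjobs.q has already been defined by
your admin with a single slot; the plugin, destination, queue and tool
IDs are placeholders to adapt to your setup (see
job_conf.xml.sample_advanced in your Galaxy distribution for the full
syntax):

    <?xml version="1.0"?>
    <job_conf>
        <plugins>
            <!-- DRMAA runner talking to the cluster scheduler -->
            <plugin id="drmaa" type="runner"
                    load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        </plugins>
        <handlers>
            <handler id="main"/>
        </handlers>
        <destinations default="normal">
            <destination id="normal" runner="drmaa"/>
            <!-- dedicated destination submitting to the restricted queue -->
            <destination id="big_temp_jobs" runner="drmaa">
                <param id="nativeSpecification">-q bigjobs.q</param>
            </destination>
        </destinations>
        <tools>
            <!-- route only the tool with big temporary files there -->
            <tool id="my_big_temp_tool" destination="big_temp_jobs"/>
        </tools>
    </job_conf>

With this arrangement the one-at-a-time (or one-per-node) guarantee
comes from the scheduler's own queue definition (its slot count), not
from Galaxy; Galaxy just makes sure that particular tool is always
submitted to that queue.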

Peter
