>> Is there a way for a tool to sometimes be placed in the fast queue and
>> sometimes in the long queue?
> Not through Galaxy as far as I know.

Yes, this is possible using job parameterization. From universe_wsgi.ini.sample:

# Per-tool job handler and runner overrides. Parameters can be included to
# define multiple runners per tool. E.g. to run Cufflinks jobs initiated
# from Trackster differently than standard Cufflinks jobs:
#   cufflinks = local:///
#   cufflinks[source@trackster] = local:///

This approach is definitely a beta feature, but the idea is that any set of 
key@value parameters should be usable to direct jobs to different queues as 
needed.
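So for the fast/long question above, the override lines might look like the 
following sketch (the tool id, runner URLs, and queue names here are made up 
for illustration; use whatever runner URLs your cluster setup expects):

```ini
# Illustrative only: send "mytool" jobs to a long queue by default, but to
# a fast queue when a job parameter source=trackster is attached to the job.
mytool = drmaa://-q long_queue/
mytool[source@trackster] = drmaa://-q fast_queue/
```

The bracketed key@value suffix is what the job parameters set in code are 
matched against when the job is dispatched.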

Job parameterization is currently done in only one place: the rerun_tool 
method of the tracks.py controller. The idea is that jobs run via Trackster 
are short, so they get a different queue:

subset_job, subset_job_outputs = tool.execute( trans, incoming=tool_params, 
job_params={ "source" : "trackster" } )

> Right now I'd like to be able to allocate jobs to different queues
> based on the input data size (and thus the expected compute time
> and resources needed), but that is rather complicated. e.g. If you
> have a low memory queue and a high memory queue.

To make this work, you'd want to modify the execute() method in the 
DefaultToolAction class (/lib/galaxy/tools/actions/__init__.py) to add job 
parameters based on tool parameters, input dataset size, or both.
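A size-based rule in that method might look like the sketch below. This is 
not Galaxy code: the 1 GB threshold and the "memory" parameter key are 
assumptions for illustration, and the real keys are whatever your per-tool 
override lines in the config expect.

```python
# Minimal sketch, not Galaxy code: choose job parameters from total input
# size. The 1 GB cutoff and the "memory" key are illustrative assumptions.

SIZE_THRESHOLD = 1024 ** 3  # 1 GB, an arbitrary cutoff for this sketch

def pick_queue(input_sizes):
    """Return a job_params dict routing large inputs to a high-memory queue."""
    total = sum(input_sizes)
    if total > SIZE_THRESHOLD:
        return {"memory": "high"}
    return {"memory": "low"}

# Inside DefaultToolAction.execute() you would compute input_sizes from the
# incoming datasets (e.g. each input's file size) and pass the resulting
# dict through as job_params when the job is created.
```

With a matching override line such as mytool[memory@high] = ... in the 
config, jobs whose inputs exceed the threshold would then land on the 
high-memory queue.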

> You might even want different queues according to the user
> (e.g. one group might have paid for part of the cluster and get
> priority access).

This could also be done in the same place, since trans.user gives you the 
user running the tool/job.
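A user-based version is the same idea. Again a sketch only: the email 
addresses and the "priority" parameter key below are made up for 
illustration.

```python
# Minimal sketch, not Galaxy code: map the submitting user to job
# parameters. The addresses and "priority" key are illustrative assumptions.

PRIORITY_USERS = {"alice@example.org", "bob@example.org"}

def user_job_params(user_email):
    """Return a job_params dict giving paying users priority routing."""
    if user_email in PRIORITY_USERS:
        return {"priority": "paid"}
    return {}

# In DefaultToolAction.execute(), trans.user.email would supply user_email.
```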

