On Thu, Jul 21, 2016 at 10:35 AM, Shane Sturrock <sh...@biomatters.com> wrote:
>
> I’m just using the drmaa plugin, so I’m guessing I need to specify a
> job runner for BLAST which sets GALAXY_SLOTS. The documentation isn’t
> entirely clear, although it seems I need to create a job_conf.xml which
> sets local_slots for the drmaa runner, unless there’s a way to set that
> in the galaxy.ini file instead.

There is a legacy per-tool job runner setting in galaxy.ini (formerly
known as universe_wsgi.ini), where we used to use this:

[galaxy:tool_runners]
ncbi_blastp_wrapper  = drmaa://-V -pe smp 4/
ncbi_blastn_wrapper  = drmaa://-V -pe smp 4/
ncbi_blastx_wrapper  = drmaa://-V -pe smp 4/
ncbi_tblastn_wrapper = drmaa://-V -pe smp 4/
ncbi_tblastx_wrapper = drmaa://-V -pe smp 4/
blast_reciprocal_best_hits = drmaa://-V -pe smp 4/

Here -V is the SGE switch to copy the submitting environment's
variables into the job, and -pe smp 4 requests a four-core job via
the smp parallel environment.
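
(For context, the BLAST+ wrappers read GALAXY_SLOTS at run time, so
the command they end up running is roughly this, falling back to a
default thread count if GALAXY_SLOTS is unset:

$ blastp -query ... -num_threads "${GALAXY_SLOTS:-8}" ...

meaning the job destination just has to get GALAXY_SLOTS set to match
the cores actually allocated.)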

For job_conf.xml I am trying something similar with less repetition:

...
        <!-- Default queue on the SGE cluster (four cores) -->
        <destination id="all4.q" runner="sge" tags="gruffalo">
            <param id="nativeSpecification">-pe smp 4</param>
            <env file="/mnt/shared/galaxy/apps/galaxy-paths.sh"/>
        </destination>
...
        <tool id="ncbi_blastp_wrapper" destination="all4.q"/>
        <tool id="ncbi_blastn_wrapper" destination="all4.q"/>
        <tool id="ncbi_blastx_wrapper" destination="all4.q"/>
        <tool id="ncbi_tblastn_wrapper" destination="all4.q"/>
        <tool id="ncbi_tblastx_wrapper" destination="all4.q"/>
        <tool id="ncbi_rpsblast_wrapper" destination="all4.q"/>
        <tool id="ncbi_rpstblastn_wrapper" destination="all4.q"/>
        <tool id="blast_reciprocal_best_hits" destination="all4.q"/>
...


i.e. I've pointed the computationally heavy BLAST+ tools at a
destination configured to use just four cores. Because this is
SGE, that is done via the native specification -pe smp 4, just
as at the command line we'd do:

$ qsub -pe smp 4 ...
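
Galaxy's generated job script then derives GALAXY_SLOTS from the
scheduler's environment. Under SGE, a parallel environment job gets
NSLOTS set, so (a simplified sketch, not the exact code) the job
script effectively does:

if [ -n "$NSLOTS" ]; then
    GALAXY_SLOTS="$NSLOTS"    # SGE sets NSLOTS=4 for -pe smp 4
    export GALAXY_SLOTS
fi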

This detail will be different for your LSF cluster.
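
With LSF I believe the equivalent native specification for a
four-core job would be along the lines of -n 4 (untested, so do
check your cluster's documentation), i.e. at the command line:

$ bsub -n 4 ...

and in the destination:

    <param id="nativeSpecification">-n 4</param>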

Peter
