hello again, fellow galaxy users and developers,

as an extension to my original query, i am now wondering how the
parameters in ‘job_resource_params_conf.xml’ map onto SLURM options.
for example, i assume <param ... name="processors" ...> maps onto
something like ‘--ntasks’ (or maybe ‘--ntasks-per-node’).  are the
‘name’ values in the definition of job resource parameters standard
keys defined by DRMAA, so that drmaa-python knows how to map them
onto SLURM parameters?  or is there an explicit specification of that
mapping somewhere?
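
to make the question concrete, here is a sketch of what i imagine
would have to happen (the correspondence below is purely my
assumption; i cannot find it documented anywhere):

  <!-- a resource parameter along the lines of the following ...      -->
  <param label="Processors" name="processors" type="integer"
         min="1" max="16" value="1" />
  <!-- ... would, for a user-selected value of 4, somehow have to end
       up as ‘--ntasks=4’ (or ‘--ntasks-per-node=4’) on the actual
       SLURM submission; i do not know where that translation is
       supposed to happen.                                             -->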

we have succeeded in establishing per-tool defaults by putting these
into the ‘nativeSpecification’ of multiple variants of the DRMAA
destination.  but now we would also like to customize the valid range
and initial value that are presented to users when they decide to use
the ‘custom’ job resource form in the tool configuration dialogue.  in
other words, we would like to do something like the following in
‘job_resource_params_conf.xml’

  <param label="Memory" name="memory1" type="integer" size="2" min="1"
max="16" value="1" ... />
  <param label="Memory" name="memory4" type="integer" size="2" min="4"
max="24" value="4" ... />
  <param label="Memory" name="memory6" type="integer" size="2" min="6"
max="24" value="6" ... />

and then associate a specific memory parameter with individual tools
in ‘job_conf.xml’.  but for that to work, i would have to understand
the mapping to SLURM options and make it so that ‘memory1’ through
‘memory6’ all map onto ‘--mem’ (or maybe ‘--mem-per-cpu’).
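
roughly, what i have in mind in ‘job_conf.xml’ is something like the
following sketch (the tool id, destination ids, and SLURM flags are
invented for illustration; whether the value a user picks for
‘memory1’ or ‘memory4’ actually reaches SLURM is exactly the part i do
not yet understand):

  <destinations default="slurm_small">
    <destination id="slurm_small" runner="drmaa">
      <param id="nativeSpecification">--ntasks=1 --mem=1024</param>
    </destination>
    <destination id="slurm_big" runner="drmaa">
      <param id="nativeSpecification">--ntasks=4 --mem=4096</param>
    </destination>
  </destinations>
  <resources default="small">
    <group id="small">memory1</group>
    <group id="big">memory4</group>
  </resources>
  <tools>
    <tool id="some_tool" destination="slurm_big" resources="big"/>
  </tools>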

once i understand things better, i would of course be happy to
contribute a summary for the galaxy wiki.  as far as i can see, the
current documentation does not cover job configuration and job
resources in full detail.

with thanks in advance, oe


On Sun, Mar 13, 2016 at 2:31 PM, Stephan Oepen <o...@ifi.uio.no> wrote:
> many thanks for taking the time to answer my query, gildas!
>
>> In your job_conf.xml, you can set a destination per tool.
>
> i had realized that much (sending some of our tools to SLURM, running
> others on the local node), but i had failed to realize that one can of
> course have /multiple/ SLURM destinations, which all send to the same
> cluster but differ in their default resource parameters.
>
> thanks again, oe