Solved!

The problem was that I had configured two plugin entries in job_conf.xml that both use DRMAA. I deleted this entry (the whole slurm plugin block):

 <plugin id="slurm" type="runner" 
load="galaxy.jobs.runners.slurm:SlurmJobRunner">

And now it works!
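
For anyone who hits the same error: both galaxy.jobs.runners.drmaa:DRMAAJobRunner and galaxy.jobs.runners.slurm:SlurmJobRunner open a DRMAA session when Galaxy instantiates them, and the DRMAA library allows only one active session per process, hence the AlreadyActiveSessionException at startup. Only one DRMAA-based plugin should be loaded. A minimal sketch of the resulting plugins section, assuming the drmaa plugin accepts the same drmaa_library_path param that the removed slurm entry carried (otherwise, set $DRMAA_LIBRARY_PATH in Galaxy's environment instead):

    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner"/>
        <!-- Only one DRMAA-based runner per Galaxy process. -->
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
            <param id="drmaa_library_path">/usr/local/lib/libdrmaa.so</param>
        </plugin>
    </plugins>

Note that the direct_slurm destination in the config below (runner="slurm") references the deleted plugin id, so it would also have to be removed or repointed at the drmaa plugin.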


Thanks



On 25/09/2014, at 08:12, Pardo Diaz, Alfonso <alfonso.pa...@ciemat.es> wrote:

Hi,


I have set up a new Galaxy site with SLURM (version 14). I have one server with a Galaxy instance, one node running the SLURM controller, and two SLURM worker nodes. I compiled SLURM-DRMAA from source. When I run “drmaa-run /bin/hostname” it works, but when I try to start the Galaxy server I get the following error:

Traceback (most recent call last):
  File "/home/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/home/galaxy-dist/lib/galaxy/app.py", line 141, in __init__
    self.job_manager = manager.JobManager( self )
  File "/home/galaxy-dist/lib/galaxy/jobs/manager.py", line 23, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/home/galaxy-dist/lib/galaxy/jobs/handler.py", line 32, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/home/galaxy-dist/lib/galaxy/jobs/handler.py", line 704, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins( self.app.config.server_name )
  File "/home/galaxy-dist/lib/galaxy/jobs/__init__.py", line 621, in get_job_runner_plugins
    rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
  File "/home/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 81, in __init__
    self.ds.initialize()
  File "/home/galaxy-dist/eggs/drmaa-0.7.6-py2.6.egg/drmaa/session.py", line 257, in initialize
    py_drmaa_init(contactString)
  File "/home/galaxy-dist/eggs/drmaa-0.7.6-py2.6.egg/drmaa/wrappers.py", line 73, in py_drmaa_init
    return _lib.drmaa_init(contact, error_buffer, sizeof(error_buffer))
  File "/home/galaxy-dist/eggs/drmaa-0.7.6-py2.6.egg/drmaa/errors.py", line 151, in error_check
    raise _ERRORS[code - 1](error_string)
AlreadyActiveSessionException: code 11: DRMAA session already exist.


This is my “job_conf.xml”:

<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" 
load="galaxy.jobs.runners.local:LocalJobRunner"/>
<plugin id="drmaa" type="runner" 
load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        <plugin id="cli" type="runner" 
load="galaxy.jobs.runners.cli:ShellJobRunner" />
        <plugin id="slurm" type="runner" 
load="galaxy.jobs.runners.slurm:SlurmJobRunner">
            <param id="drmaa_library_path">/usr/local/lib/libdrmaa.so</param>
        </plugin>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="drmaa_slurm">
        <destination id="local" runner="local"/>
        <destination id="multicore_local" runner="local">
          <param id="local_slots">4</param>
          <param id="embed_metadata_in_job">True</param>
          <job_metrics />
        </destination>
        <destination id="docker_local" runner="local">
          <param id="docker_enabled">true</param>
        </destination>
        <destination id="drmaa_slurm" runner="drmaa">
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination>
        <destination id="direct_slurm" runner="slurm">
            <param id="nativeSpecification">--time=00:01:00</param>
        </destination>
    </destinations>
    <resources default="default">
      <group id="default"></group>
      <group id="memoryonly">memory</group>
      <group id="all">processors,memory,time,project</group>
    </resources>
    <tools>
        <tool id="foo" handler="trackster_handler">
            <param id="source">trackster</param>
        </tool>
        <tool id="bar" destination="dynamic"/>
        <tool id="longbar" destination="dynamic" resources="all" />
        <tool id="baz" handler="special_handlers" destination="bigmem"/>
    </tools>
    <limits>
        <limit type="registered_user_concurrent_jobs">2</limit>
        <limit type="anonymous_user_concurrent_jobs">1</limit>
        <limit type="destination_user_concurrent_jobs" id="local">1</limit>
        <limit type="destination_user_concurrent_jobs" tag="mycluster">2</limit>
        <limit type="destination_user_concurrent_jobs" tag="longjobs">1</limit>
        <limit type="destination_total_concurrent_jobs" id="local">16</limit>
        <limit type="destination_total_concurrent_jobs" 
tag="longjobs">100</limit>
        <limit type="walltime">24:00:00</limit>
        <limit type="output_size">10GB</limit>
    </limits>
</job_conf>


Can you help me? I am a newbie at Galaxy administration.




THANKS IN ADVANCE!!!!





Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, SPAIN
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37

CETA-Ciemat <http://www.ceta-ciemat.es/>






___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/
