[galaxy-dev] DRMAA Slurm error

2014-09-25 Thread Pardo Diaz, Alfonso
Hi,


I have configured a new Galaxy Project site with SLURM (version 14). I have one 
server with a Galaxy instance, one node running the SLURM controller, and two 
SLURM worker nodes. I have compiled SLURM-DRMAA from source. When I run 
“drmaa-run /bin/hostname” it works, but when I try to start the Galaxy server I 
get the following error:

Traceback (most recent call last):
  File "/home/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/home/galaxy-dist/lib/galaxy/app.py", line 141, in __init__
    self.job_manager = manager.JobManager( self )
  File "/home/galaxy-dist/lib/galaxy/jobs/manager.py", line 23, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/home/galaxy-dist/lib/galaxy/jobs/handler.py", line 32, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/home/galaxy-dist/lib/galaxy/jobs/handler.py", line 704, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins( self.app.config.server_name )
  File "/home/galaxy-dist/lib/galaxy/jobs/__init__.py", line 621, in get_job_runner_plugins
    rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
  File "/home/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 81, in __init__
    self.ds.initialize()
  File "/home/galaxy-dist/eggs/drmaa-0.7.6-py2.6.egg/drmaa/session.py", line 257, in initialize
    py_drmaa_init(contactString)
  File "/home/galaxy-dist/eggs/drmaa-0.7.6-py2.6.egg/drmaa/wrappers.py", line 73, in py_drmaa_init
    return _lib.drmaa_init(contact, error_buffer, sizeof(error_buffer))
  File "/home/galaxy-dist/eggs/drmaa-0.7.6-py2.6.egg/drmaa/errors.py", line 151, in error_check
    raise _ERRORS[code - 1](error_string)
AlreadyActiveSessionException: code 11: DRMAA session already exist.
[root@galaxy-project galaxy-dist]#
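
For context: drmaa_init() in the underlying C library is per-process, so a 
second Session.initialize() in the same process always fails with this 
exception. A minimal sketch that reproduces it outside Galaxy, assuming the 
drmaa Python package and a working libdrmaa (illustrative only, not Galaxy 
code):

    # Illustrative sketch: a second drmaa_init() in one process fails.
    # Assumes DRMAA_LIBRARY_PATH points at a working libdrmaa.so.
    import drmaa
    from drmaa.errors import AlreadyActiveSessionException

    first = drmaa.Session()
    first.initialize()              # first drmaa_init() in this process: OK

    second = drmaa.Session()
    try:
        second.initialize()         # second drmaa_init() in the same process
    except AlreadyActiveSessionException as exc:
        # prints "code 11: DRMAA session already exist"
        print("second init failed: %s" % exc)
    finally:
        first.exit()                # release the single active session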


This is my “job_conf.xml”:

<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner"/>
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner">
            <param id="drmaa_library_path">/usr/local/lib/libdrmaa.so</param>
        </plugin>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="drmaa_slurm">
        <destination id="local" runner="local"/>
        <destination id="multicore_local" runner="local">
            <param id="local_slots">4</param>
            <param id="embed_metadata_in_job">True</param>
            <job_metrics/>
        </destination>
        <destination id="docker_local" runner="local">
            <param id="docker_enabled">true</param>
        </destination>
        <destination id="drmaa_slurm" runner="drmaa">
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination>
        <destination id="direct_slurm" runner="slurm">
            <param id="nativeSpecification">--time=00:01:00</param>
        </destination>
    </destinations>
    <resources default="default">
        <group id="default"/>
        <group id="memoryonly">memory</group>
        <group id="all">processors,memory,time,project</group>
    </resources>
    <tools>
        <tool id="foo" handler="trackster_handler">
            <param id="source">trackster</param>
        </tool>
        <tool id="bar" destination="dynamic"/>
        <tool id="longbar" destination="dynamic" resources="all"/>
        <tool id="baz" handler="special_handlers" destination="bigmem"/>
    </tools>
    <limits>
        <limit type="registered_user_concurrent_jobs">2</limit>
        <limit type="anonymous_user_concurrent_jobs">1</limit>
        <limit type="destination_user_concurrent_jobs" id="local">1</limit>
        <limit type="destination_user_concurrent_jobs" tag="mycluster">2</limit>
        <limit type="destination_user_concurrent_jobs" tag="longjobs">1</limit>
        <limit type="destination_total_concurrent_jobs" id="local">16</limit>
        <limit type="destination_total_concurrent_jobs" tag="longjobs">100</limit>
        <limit type="walltime">24:00:00</limit>
        <limit type="output_size">10GB</limit>
    </limits>
</job_conf>


Can you help me? I am a newbie at Galaxy Project administration.




THANKS IN ADVANCE





Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37

CETA-Ciemat - http://www.ceta-ciemat.es/




Re: [galaxy-dev] DRMAA Slurm error

2014-09-25 Thread Pardo Diaz, Alfonso
Solved!


The problem was that I had configured two plugin entries in job_conf.xml that 
both initialize DRMAA. I deleted this entry:

<plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner">
    <param id="drmaa_library_path">/usr/local/lib/libdrmaa.so</param>
</plugin>

And now it works!
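
For reference, a minimal sketch of the plugins section after the fix. It 
assumes the drmaa_library_path param is kept on the remaining drmaa plugin 
(alternatively, export DRMAA_LIBRARY_PATH in Galaxy's environment):

<plugins workers="4">
    <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
    <!-- Only one DRMAA-backed runner: loading both the drmaa and slurm
         plugins calls drmaa_init() twice in one process and raises
         AlreadyActiveSessionException. -->
    <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
        <param id="drmaa_library_path">/usr/local/lib/libdrmaa.so</param>
    </plugin>
    <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner"/>
</plugins>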


Thanks


