Thanks Adam, when I specify handlers it works fine :)
I also tried to add a DRMAA plugin and create a new destination:
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <handler id="server:handler0" tags="handlers"/>
        <handler id="server:handler1" tags="handlers"/>
    </handlers>
    <destinations default="local">
        <destination id="sge_default" runner="sge"/>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>
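(As an aside, a quick way to sanity-check a job_conf.xml like the one above before restarting is to parse it with Python's standard library and list what it defines. This is an illustrative self-check, not part of Galaxy; the inline XML mirrors the config in this mail.)

```python
# Illustrative sanity check of a job_conf.xml fragment using only the
# standard library. The inline XML mirrors the configuration above.
import xml.etree.ElementTree as ET

JOB_CONF = """
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <handler id="server:handler0" tags="handlers"/>
        <handler id="server:handler1" tags="handlers"/>
    </handlers>
    <destinations default="local">
        <destination id="sge_default" runner="sge"/>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>
"""

root = ET.fromstring(JOB_CONF)
plugins = [p.get("id") for p in root.findall("plugins/plugin")]
handlers = [h.get("id") for h in root.findall("handlers/handler")]
destinations = [d.get("id") for d in root.findall("destinations/destination")]
print(plugins, handlers, destinations)
```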
My env var DRMAA_LIBRARY_PATH is set to /usr/lib64/libdrmaa.so.1.0, and when I restart the instance I get the following error in the handlers' log files:
galaxy.jobs.manager DEBUG 2013-05-03 17:06:50,978 Starting job handler
galaxy.jobs.runners DEBUG 2013-05-03 17:06:50,979 Starting 4 LocalRunner workers
galaxy.jobs DEBUG 2013-05-03 17:06:50,980 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
Traceback (most recent call last):
  File "/home/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 37, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/home/galaxy/galaxy-dist/lib/galaxy/app.py", line 159, in __init__
    self.job_manager = manager.JobManager( self )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/manager.py", line 32, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 29, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 543, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins()
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 486, in get_job_runner_plugins
    rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 75, in __init__
    self.ds.initialize()
  File "/home/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/__init__.py", line 274, in initialize
    _w.init(contactString)
  File "/home/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/wrappers.py", line 59, in init
    return _lib.drmaa_init(contact, error_buffer, sizeof(error_buffer))
  File "/home/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/errors.py", line 90, in error_check
    raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
DrmCommunicationException: code 2: range_list containes no elements
Removing PID file handler0.pid
Any ideas?
Thanks again for your help.
Olivier Inizan
Unité de Recherches en Génomique-Info (UR INRA 1164),
INRA, Centre de recherche de Versailles, bat.18
RD10, route de Saint Cyr
78026 Versailles Cedex, FRANCE
olivier.ini...@versailles.inra.fr
Tél: +33 1 30 83 38 25
Fax: +33 1 30 83 38 99
http://urgi.versailles.inra.fr
Twitter: @OlivierInizan
On Thu, 2 May 2013, Adam Brenner wrote:
Oliver,
The new job running file is also giving me a lot of headaches. The documentation on the site is correct, but some of the items are not yet implemented; for example, DRMAA external scripts still need to be in universe_wsgi.ini, yet the site says to put them in the job_conf.xml file! Lots of hair pulling and waiting for responses on IRC to figure this out... wasn't fun.
As for your issue, you want to specify your handlers, not your webserver, for your handler ids:

<handler id="handler0" tags="handlers"/>
<handler id="handler1" tags="handlers"/>
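(In other words, the handler ids in job_conf.xml should match the Paste server section names from universe_wsgi.ini without the "server:" prefix. A hypothetical cross-check, not part of Galaxy itself, shown in Python 3 for brevity although Galaxy at the time ran on Python 2:)

```python
# Hypothetical cross-check: handler ids in job_conf.xml should match
# the [server:...] section names in universe_wsgi.ini, minus the
# "server:" prefix. The inline ini mirrors the handler sections below.
import configparser

UNIVERSE_WSGI = """
[server:handler0]
use = egg:Paste#http
port = 8090

[server:handler1]
use = egg:Paste#http
port = 8091
"""

parser = configparser.ConfigParser()
parser.read_string(UNIVERSE_WSGI)
server_names = [s.split(":", 1)[1] for s in parser.sections() if s.startswith("server:")]

handler_ids = ["handler0", "handler1"]  # as they should appear in job_conf.xml
missing = set(handler_ids) - set(server_names)
print("handlers with no matching server section:", missing)
```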
Let me know if this helps,
-Adam
--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences
Research Computing Support
Office of Information Technology
http://www.oit.uci.edu/rcs/
University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebre...@uci.edu
On Thu, May 2, 2013 at 9:00 AM, Olivier Inizan
olivier.ini...@versailles.inra.fr wrote:
Dear galaxy-dev-list,
I am trying to use the new-style job configuration with load balancing across 2 web servers and 2 job handlers.
I have configured load balancing as follows (universe_wsgi.ini):
# Configuration of the internal HTTP server.
[server:web0]
use = egg:Paste#http
port = 8083
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7
[server:web1]
use = egg:Paste#http
port = 8082
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7
[server:manager]
use = egg:Paste#http
port = 8079
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5
[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5
[server:handler1]
use = egg:Paste#http
port = 8091
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5
This configuration works fine (manager.log):
galaxy.jobs.manager DEBUG 2013-04-23 12:20:22,901 Starting job handler
galaxy.jobs.runners DEBUG 2013-04-23 12:20:22,902