Re: [galaxy-dev] Puppet Modules: a new project

2014-03-14 Thread Olivier Inizan

Dear galaxy-dev,

After exchanges with Eric we have decided to create:
- a common Puppet module for Galaxy, hosted on the Puppet Forge:
  https://forge.puppetlabs.com/urgi/galaxy
- a common repository hosted on a public server.

We would like the opinion of the Galaxy team and the galaxy-dev community
on the choice of the public server.

Should we push the code to GitHub or Bitbucket?
Does this question matter anyway? :)

Thanks for your answers,

Olivier


Olivier Inizan
Unité de Recherches en Génomique-Info (UR INRA 1164),
INRA, Centre de recherche de Versailles, bat.18
RD10, route de Saint Cyr
78026 Versailles Cedex, FRANCE

olivier.ini...@versailles.inra.fr

Tél: +33 1 30 83 38 25
Fax: +33 1 30 83 38 99
http://urgi.versailles.inra.fr
Twitter: @OlivierInizan


On Thu, 13 Mar 2014, Olivier Inizan wrote:


Hi Eric and Sys Admin,

That's great news!

We have already published a puppet module for a basic galaxy install and 
update:

http://forge.puppetlabs.com/urgi/galaxy

We also plan to update it to manage configuration files.
Eric, it would be a great initiative to merge our efforts into a single module.
Any ideas?

Cheers,
Olivier.

Olivier Inizan
Unité de Recherches en Génomique-Info (UR INRA 1164),
INRA, Centre de recherche de Versailles, bat.18
RD10, route de Saint Cyr
78026 Versailles Cedex, FRANCE

olivier.ini...@versailles.inra.fr

Tél: +33 1 30 83 38 25
Fax: +33 1 30 83 38 99
http://urgi.versailles.inra.fr
Twitter: @OlivierInizan


On Wed, 12 Mar 2014, Eric Rasche wrote:



Sys Admins,

I'm working on some puppet modules for managing galaxy. I'm not sure
how many of you use puppet, but I figured it would be useful to the
community. I'll place them on puppet forge when I'm done.

So far I'm doing the following:

- enabling initial deployment (updating?) with the vcsrepo module

- managing the following files: job_conf.xml, tool_conf.xml,
  tool_sheds_conf.xml, universe_wsgi.ini

If there's anything that you find important to be managed, or features
you'd like to see, please email me and let me know!

Cheers,
Eric

--
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
404-692-2048
e...@tamu.edu
rasche.e...@yandex.ru

Re: [galaxy-dev] Puppet Modules

2014-03-13 Thread Olivier Inizan

Hi Eric and Sys Admin,

That's great news!

We have already published a puppet module for a basic galaxy install and 
update:

http://forge.puppetlabs.com/urgi/galaxy

We also plan to update it to manage configuration files.
Eric, it would be a great initiative to merge our efforts into a single
module.

Any ideas?

Cheers,
Olivier.

Olivier Inizan
Unité de Recherches en Génomique-Info (UR INRA 1164),
INRA, Centre de recherche de Versailles, bat.18
RD10, route de Saint Cyr
78026 Versailles Cedex, FRANCE

olivier.ini...@versailles.inra.fr

Tél: +33 1 30 83 38 25
Fax: +33 1 30 83 38 99
http://urgi.versailles.inra.fr
Twitter: @OlivierInizan


On Wed, 12 Mar 2014, Eric Rasche wrote:



Sys Admins,

I'm working on some puppet modules for managing galaxy. I'm not sure
how many of you use puppet, but I figured it would be useful to the
community. I'll place them on puppet forge when I'm done.

So far I'm doing the following:

- enabling initial deployment (updating?) with the vcsrepo module

- managing the following files: job_conf.xml, tool_conf.xml,
  tool_sheds_conf.xml, universe_wsgi.ini

If there's anything that you find important to be managed, or features
you'd like to see, please email me and let me know!

Cheers,
Eric

--
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
404-692-2048
e...@tamu.edu
rasche.e...@yandex.ru

Re: [galaxy-dev] problem with current galaxy to submit job to drmaa/sge

2013-05-28 Thread Olivier Inizan

Hello Tin,

For the error DrmCommunicationException: code 2: range_list containes no
elements, have you checked that you can submit jobs in -sync mode?


See: 
http://stackoverflow.com/questions/4883056/sge-qsub-fails-to-submit-jobs-in-sync-mode

We faced this error recently, and setting MAX_DYN_EC=1000 for SGE fixed
the problem.
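
In case it helps, here is a minimal sketch for testing a synchronous
submission through the drmaa Python bindings outside Galaxy (assuming
DRMAA_LIBRARY_PATH is set and /bin/sleep is available on the execution
hosts); a DRMAA session, like qsub -sync y, uses one of the dynamic event
clients limited by MAX_DYN_EC:

    # Sketch: submit one job and wait for it synchronously via DRMAA.
    import drmaa

    session = drmaa.Session()
    session.initialize()
    jt = session.createJobTemplate()
    jt.remoteCommand = '/bin/sleep'
    jt.args = ['10']
    job_id = session.runJob(jt)
    print('submitted job %s' % job_id)
    # Block until the job finishes, like qsub -sync y.
    info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print('job finished with exit status %s' % info.exitStatus)
    session.deleteJobTemplate(jt)
    session.exit()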


Let me know,

Olivier


Olivier Inizan
Unité de Recherches en Génomique-Info (UR INRA 1164),
INRA, Centre de recherche de Versailles, bat.18
RD10, route de Saint Cyr
78026 Versailles Cedex, FRANCE

olivier.ini...@versailles.inra.fr

Tél: +33 1 30 83 38 25
Fax: +33 1 30 83 38 99
http://urgi.versailles.inra.fr
Twitter: @OlivierInizan


On Thu, 23 May 2013, tin h wrote:


Hello galaxy-dev gurus, 

I was trying to upgrade my galaxy server...
I removed the old galaxy-dist and ran 
     hg clone https://bitbucket.org/galaxy/galaxy-dist/

restored the universe_wsgi.ini file and various tool-data configs, and tried
to restart Galaxy.




After some twiddling, I see the error message at the end of this email.
The strangest thing I see is this:
    /usr/prog/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/wrappers.py
On my current system, drmaa-0.4b3-py2.6.egg is a file and not a directory...

is the latest code that I just downloaded corrupted or something?
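
A quick way to check whether the egg itself is intact (a sketch; an egg is
normally a single zip archive, so a file rather than a directory is
expected, and the traceback paths point inside it):

    import zipfile

    egg = '/usr/prog/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg'
    print(zipfile.is_zipfile(egg))              # True if the archive is readable
    print(zipfile.ZipFile(egg).namelist()[:5])  # drmaa/wrappers.py lives in here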

Much thanks in advance for your help in this matter.
-Tin


PS.  Relevant entry in universe_wsgi.ini on cluster config:
            start_job_runners = drmaa
            default_cluster_job_runner =  drmaa:///



galaxy.tools.genome_index DEBUG 2013-05-24 08:42:49,150 Loaded genome index tool: __GENOME_INDEX__
galaxy.jobs.manager DEBUG 2013-05-24 08:42:49,153 Starting job handler
galaxy.jobs.runners DEBUG 2013-05-24 08:42:49,155 Starting 4 LocalRunner workers
galaxy.jobs DEBUG 2013-05-24 08:42:49,156 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
Traceback (most recent call last):
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 35, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/app.py", line 159, in __init__
    self.job_manager = manager.JobManager( self )
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/jobs/manager.py", line 31, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 29, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 543, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins()
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 486, in get_job_runner_plugins
    rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
  File "/usr/prog/galaxy/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 75, in __init__
    self.ds.initialize()
  File "/usr/prog/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/__init__.py", line 274, in initialize
    _w.init(contactString)
  File "/usr/prog/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/wrappers.py", line 59, in init
    return _lib.drmaa_init(contact, error_buffer, sizeof(error_buffer))
  File "/usr/prog/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/errors.py", line 90, in error_check
    raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
DrmCommunicationException: code 2: range_list containes no elements




Re: [galaxy-dev] Load balancing and job configuration

2013-05-03 Thread Olivier Inizan


Thanks Adam, when I specify the handlers it works fine :)

I also tried to add a drmaa plugin and create a new destination:

<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <handler id="server:handler0" tags="handlers"/>
        <handler id="server:handler1" tags="handlers"/>
    </handlers>
    <destinations default="local">
        <destination id="sge_default" runner="sge"/>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>

My env var DRMAA_LIBRARY_PATH is set to /usr/lib64/libdrmaa.so.1.0, and
when I restart the instance I get the following error in the handlers'
log files:


galaxy.jobs.manager DEBUG 2013-05-03 17:06:50,978 Starting job handler
galaxy.jobs.runners DEBUG 2013-05-03 17:06:50,979 Starting 4 LocalRunner workers
galaxy.jobs DEBUG 2013-05-03 17:06:50,980 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'

Traceback (most recent call last):
  File "/home/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 37, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/home/galaxy/galaxy-dist/lib/galaxy/app.py", line 159, in __init__
    self.job_manager = manager.JobManager( self )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/manager.py", line 32, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 29, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 543, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins()
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 486, in get_job_runner_plugins
    rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 75, in __init__
    self.ds.initialize()
  File "/home/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/__init__.py", line 274, in initialize
    _w.init(contactString)
  File "/home/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/wrappers.py", line 59, in init
    return _lib.drmaa_init(contact, error_buffer, sizeof(error_buffer))
  File "/home/galaxy/galaxy-dist/eggs/drmaa-0.4b3-py2.6.egg/drmaa/errors.py", line 90, in error_check
    raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
DrmCommunicationException: code 2: range_list containes no elements
Removing PID file handler0.pid
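
For what it's worth, the same initialization path can be exercised outside
Galaxy with a few lines of Python (a sketch, assuming DRMAA_LIBRARY_PATH is
exported in the shell and the drmaa package is importable):

    # Sketch: reproduce the drmaa_init call that fails in the traceback above.
    import ctypes
    import os

    lib_path = os.environ['DRMAA_LIBRARY_PATH']   # e.g. /usr/lib64/libdrmaa.so.1.0
    ctypes.CDLL(lib_path)                         # raises OSError if the .so cannot be loaded

    import drmaa                                  # same module the bundled egg provides

    session = drmaa.Session()
    session.initialize()                          # the call that raises DrmCommunicationException
    print('drmaa_init succeeded')
    session.exit()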


Any ideas?
Thanks again for your help.




Olivier Inizan
Unité de Recherches en Génomique-Info (UR INRA 1164),
INRA, Centre de recherche de Versailles, bat.18
RD10, route de Saint Cyr
78026 Versailles Cedex, FRANCE

olivier.ini...@versailles.inra.fr

Tél: +33 1 30 83 38 25
Fax: +33 1 30 83 38 99
http://urgi.versailles.inra.fr
Twitter: @OlivierInizan


On Thu, 2 May 2013, Adam Brenner wrote:


Olivier,

The new job configuration file has also been giving me a lot of headaches.
The documentation on the site is correct, but some of the items are not yet
implemented; for example, the DRMAA external scripts still need to be set in
universe_wsgi.ini, yet the site says to put them in the job_conf.xml file!
Lots of hair pulling and waiting for responses on IRC to figure this
out... wasn't fun.


As for your issue, you want to specify your handlers, not your
web servers, as the handler entries:

<handler id="handler0" tags="handlers"/>
<handler id="handler1" tags="handlers"/>

Let me know if this helps,
-Adam

--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences

Research Computing Support
Office of Information Technology
http://www.oit.uci.edu/rcs/

University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebre...@uci.edu

On Thu, May 2, 2013 at 9:00 AM, Olivier Inizan
olivier.ini...@versailles.inra.fr wrote:

Dear galaxy-dev-list,

I am trying to use the new-style job configuration with load balancing on
2 web servers and 2 job handlers.
I have configured load balancing as follows (universe_wsgi.ini):

# Configuration of the internal HTTP server.

[server:web0]
use = egg:Paste#http
port = 8083
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7

[server:web1]
use = egg:Paste#http
port = 8082
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7

[server:manager]
use = egg:Paste#http
port = 8079
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5

[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5

[server:handler1]
use = egg:Paste#http
port = 8091
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5


This configuration works fine (manager.log):
galaxy.jobs.manager DEBUG 2013-04-23 12:20:22,901 Starting job handler
galaxy.jobs.runners DEBUG 2013-04-23 12:20:22,902

[galaxy-dev] Load balancing and job configuration

2013-05-02 Thread Olivier Inizan

Dear galaxy-dev-list,

I am trying to use the new-style job configuration with load balancing on
2 web servers and 2 job handlers.

I have configured load balancing as follows (universe_wsgi.ini):

# Configuration of the internal HTTP server.

[server:web0]
use = egg:Paste#http
port = 8083
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7

[server:web1]
use = egg:Paste#http
port = 8082
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7

[server:manager]
use = egg:Paste#http
port = 8079
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5

[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5

[server:handler1]
use = egg:Paste#http
port = 8091
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5


This configuration works fine (manager.log):
galaxy.jobs.manager DEBUG 2013-04-23 12:20:22,901 Starting job handler
galaxy.jobs.runners DEBUG 2013-04-23 12:20:22,902 Starting 5 LocalRunner 
workers
galaxy.jobs DEBUG 2013-04-23 12:20:22,904 Loaded job runner 
'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
galaxy.jobs.runners DEBUG 2013-04-23 12:20:22,909 Starting 3 LWRRunner 
workers
galaxy.jobs DEBUG 2013-04-23 12:20:22,911 Loaded job runner 
'galaxy.jobs.runners.lwr:LwrJobRunner' as 'lwr'
galaxy.jobs DEBUG 2013-04-23 12:20:22,911 Legacy destination with id 
'local:///', url 'local:///' converted, got params:
galaxy.jobs.handler DEBUG 2013-04-23 12:20:22,911 Loaded job runners 
plugins: lwr:local
galaxy.jobs.handler INFO 2013-04-23 12:20:22,912 job handler stop queue 
started

galaxy.jobs.handler INFO 2013-04-23 12:20:22,919 job handler queue started

To use the new-style job configuration I have created the following
job_conf.xml:



<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it
     is configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <!-- Additional job handlers - the id should match the name of a
             [server:id] in universe_wsgi.ini. -->
        <handler id="server:web0" tags="handlers"/>
        <handler id="server:web1" tags="handlers"/>
    </handlers>
    <destinations default="local">
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>


When I restart the instance the manager.log outputs:

galaxy.jobs DEBUG 2013-05-02 17:25:28,402 Loading job configuration from 
./job_conf.xml
galaxy.jobs DEBUG 2013-05-02 17:25:28,402 Read definition for handler 
'server:web0'
galaxy.jobs DEBUG 2013-05-02 17:25:28,403 Read definition for handler 
'server:web1'
galaxy.jobs DEBUG 2013-05-02 17:25:28,403 handlers default set to child 
with id or tag 'handlers'
galaxy.jobs DEBUG 2013-05-02 17:25:28,403 destinations default set to 
child with id or tag 'local'

galaxy.jobs DEBUG 2013-05-02 17:25:28,403 Done loading job configuration

So everything seems fine, but when I try to run a tool the corresponding
job stays permanently in a waiting state (gray color in the history).


I think I have missed something in the configuration process,
Thanks for your help,

Olivier

