Hi, Galaxy Developers,

Is anybody out there managing a Galaxy environment that was designed and/or has 
been tested to support 35 concurrent users?  I am asking because we [the U of 
C] have a training session coming up this Thursday, and the environment we have 
deployed needs to support that number of users.  We have put the server under 
as much stress as possible with six users, and Galaxy has performed fine; 
however, it has proven challenging to load test with all 35 concurrent users 
prior to the workshop.  I can't help but feel we are rolling the dice a little, 
as we've never put the server under anything close to this load level, so I 
figured I would try to dot my i's by sending an email to this list.

Here are the configuration changes currently implemented (in an effort to 
performance-tune and web-scale our Galaxy server):

1) Enabled proxy load balancing with six web front-ends (the number six taken 
from the Galaxy wiki) (Apache):

<Proxy balancer://galaxy/>
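(The snippet above is abbreviated; for reference, a complete balancer block 
along these lines would look something like the following. The hostnames and 
ports are hypothetical and must match your six web server processes.)

```apache
# Hypothetical sketch -- member ports must match the [server:web*] processes
<Proxy balancer://galaxy/>
    BalancerMember http://localhost:8080
    BalancerMember http://localhost:8081
    BalancerMember http://localhost:8082
    BalancerMember http://localhost:8083
    BalancerMember http://localhost:8084
    BalancerMember http://localhost:8085
</Proxy>
```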

2) Rewrite static URLs for static content (Apache):

RewriteRule ^/static/style/(.*) 
/group/galaxy/galaxy-dist/static/uchicago_cri_august_2012_style/blue/$1 [L]
RewriteRule ^/static/scripts/(.*) 
/group/galaxy/galaxy-dist/static/scripts/packed/$1 [L]
RewriteRule ^/static/(.*) /group/galaxy/galaxy-dist/static/$1 [L]
RewriteRule ^/robots.txt /group/galaxy/galaxy-dist/static/robots.txt [L]
RewriteRule ^(.*) balancer://galaxy$1 [P]

3) Enabled compression and caching (Apache):
<Location "/">
        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
        SetEnvIfNoCase Request_URI \.(?:t?gz|zip|bz2)$ no-gzip dont-vary
</Location>
<Location "/static">
        ExpiresActive On
        ExpiresDefault "access plus 6 hours"
</Location>

4) Configured web scaling (universe_wsgi.ini):
        a) six web server processes (threadpool_workers = 7)
        b) a single job manager (threadpool_workers = 5)
        c) two job handlers (threadpool_workers = 5)
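For context, the corresponding universe_wsgi.ini process definitions look 
something like the sketch below (one [server:...] section per process; the 
port numbers are illustrative, and web2 through web5 follow the same pattern 
as web0 and web1):

```ini
# Hypothetical sketch of the web-scaling sections in universe_wsgi.ini
[server:web0]
use = egg:Paste#http
port = 8080
use_threadpool = true
threadpool_workers = 7

[server:web1]
use = egg:Paste#http
port = 8081
use_threadpool = true
threadpool_workers = 7

; ... web2 through web5 follow the same pattern ...

[server:manager]
use = egg:Paste#http
port = 8079
use_threadpool = true
threadpool_workers = 5

[server:handler0]
use = egg:Paste#http
port = 8090
use_threadpool = true
threadpool_workers = 5

[server:handler1]
use = egg:Paste#http
port = 8091
use_threadpool = true
threadpool_workers = 5
```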

5) Configured a pbs_mom external job runner (our cluster) and commented out 
the default (local) tool runners so that these tools fall through to PBS (we 
are not using these tools for the workshop):

#ucsc_table_direct1 = local:///
#ucsc_table_direct_archaea1 = local:///
#ucsc_table_direct_test1 = local:///
#upload1 = local:///
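For context, the runner setup that goes with the commented-out lines above 
looks roughly like this (a sketch only; the exact runner URL depends on your 
PBS server configuration):

```ini
# universe_wsgi.ini -- hypothetical sketch; adjust to your PBS setup
start_job_runners = pbs
default_cluster_job_runner = pbs:///

[galaxy:tool_runners]
# local runners commented out so these tools use the pbs default runner
#upload1 = local:///
```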

6) Changed the following database parameters (universe_wsgi.ini):
        database_engine_option_pool_size = 10
        database_engine_option_max_overflow = 20 
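One sanity check worth doing here: each Galaxy process can open up to 
pool_size + max_overflow database connections, and with nine processes (six 
web, one manager, two handlers) the database server must accept the worst-case 
total. A quick back-of-the-envelope calculation, using the numbers from this 
configuration:

```python
# Worst-case database connection count across all Galaxy processes.
# Process counts and pool settings taken from the configuration above.
pool_size = 10
max_overflow = 20
processes = 6 + 1 + 2  # web front-ends + job manager + job handlers

max_connections = processes * (pool_size + max_overflow)
print(max_connections)  # 270 -- the DB must allow at least this many connections
```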

7) Disabled the developer settings (universe_wsgi.ini):
        debug = False 
        use_interactive = False
         #filter-with = gzip

The server I have is a VM with the following resources:

2 GB of RAM
4 CPU cores

I feel that it is also worthwhile to mention that users will not be downloading 
datasets during the workshop, so as of now, the implementation of "XSendFile" 
as specified in the Apache Proxy documentation is not of immediate concern.

Does anybody see any glaring mistakes where this configuration might fall 
short with respect to capacity planning for an environment of 35 concurrent 
users, or additional tuning that could help ensure the availability of the 
server during the workshop?  Thank you so much for your opinions, and please 
wish us luck this Thursday :-)

Dan Sullivan