We're seeing an odd issue that seems to be related to Trackster's caching of
track information, namely that memory usage in the web server processes keeps
increasing.  It appears that tracks are being cached in memory per web
process.

If I create a new visualization (and monitor our web processes via top), I
can see the %MEM usage climb, so much so that a process eventually crashes
with an out-of-memory error.  This seems to happen only when new
visualizations are created; when existing visualizations are shared between
users, memory usage stays roughly the same.

We can bump up the memory on the VM (we have more at our disposal), and we
can monitor and restart the processes when they get too high (a rough sketch
of that follows the top output below), but is there a way to determine
whether Trackster is the cause?
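
One way to check, as a minimal stdlib-only sketch: from a debug hook or an
interactive console attached to one of the web processes, diff live object
counts by type before and after exercising Trackster.  The hook point and
the top-15 cutoff are illustrative choices here, not Galaxy APIs:

    import gc
    from collections import Counter

    def object_counts():
        # Count every object the garbage collector tracks, keyed by type name.
        return Counter(type(o).__name__ for o in gc.get_objects())

    before = object_counts()
    # ... create/open a Trackster visualization in this process here ...
    after = object_counts()

    # Types with the largest instance-count growth; Counter subtraction
    # drops non-positive deltas automatically.
    for name, delta in (after - before).most_common(15):
        print("%-40s +%d" % (name, delta))

A track- or cache-related type that keeps climbing with every new
visualization would point at a per-process cache.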

Below is the top output (note that the manager and handler processes are
still in a normal-ish range):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND  
27777 galaxy    20   0 2637m 1.5g 2700 S  6.3 19.8   7:29.01 python 
./scripts/paster.py serve universe_wsgi.ini --server-name=web2 
--pid-file=web2.pid --log-file=web2.log --daemon        
27759 galaxy    20   0 3624m 2.2g 2716 S  5.3 28.9   9:04.33 python 
./scripts/paster.py serve universe_wsgi.ini --server-name=web1 
--pid-file=web1.pid --log-file=web1.log --daemon        
27749 galaxy    20   0 2621m 1.5g 2724 S  5.0 19.7   7:39.51 python 
./scripts/paster.py serve universe_wsgi.ini --server-name=web0 
--pid-file=web0.pid --log-file=web0.log --daemon        
27808 galaxy    20   0 1616m 160m 2640 S  1.7  2.0   3:00.21 python 
./scripts/paster.py serve universe_wsgi.ini --server-name=handler1 
--pid-file=handler1.pid --log-file=handler1.log --da
27798 galaxy    20   0 1616m 159m 2652 S  1.0  2.0   2:59.03 python 
./scripts/paster.py serve universe_wsgi.ini --server-name=handler0 
--pid-file=handler0.pid --log-file=handler0.log --da
27789 galaxy    20   0  944m  88m 2400 S  0.0  1.1   0:36.40 python 
./scripts/paster.py serve universe_wsgi.ini --server-name=manager 
--pid-file=manager.pid --log-file=manager.log --d
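
For the interim monitor-and-restart approach, here is a rough sketch that
polls resident memory via /proc (Linux-specific); the PID-file names come
from the processes above, while the threshold and poll interval are
arbitrary assumptions:

    import time

    PID_FILES = ["web0.pid", "web1.pid", "web2.pid"]
    THRESHOLD_KB = 2 * 1024 * 1024  # flag anything over ~2 GB resident

    def rss_kb(pid):
        # Read VmRSS (resident set size, in kB) from /proc/<pid>/status.
        with open("/proc/%d/status" % pid) as fh:
            for line in fh:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0

    while True:
        for pf in PID_FILES:
            pid = int(open(pf).read().strip())
            rss = rss_kb(pid)
            flag = "  <-- restart candidate" if rss > THRESHOLD_KB else ""
            print("%s pid=%d rss=%d kB%s" % (pf, pid, rss, flag))
        time.sleep(60)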

chris