Update: So, I did a little further testing - it turns out that if I *don't use chaussette*, and instead call *anyserver.py* directly, then everything works smoothly with both gevent and redis:
    [watcher:gevent]
    working_dir = /srv/web2py
    cmd = python anyserver.py -s gevent

However, I then lose all of the advantages of using circus:
- Running multiple processes
- Adding/removing processes from the web-ui
- Real-time graphing of cpu/memory usage

And without multiple processes there is no use for redis anyway, and it can't scale to our needs... *(hmmm... come to think of it, if there are no multiple processes accessing redis, maybe this is why it works...)*

So it's back to square one... Enter *chaussette*:

    [watcher:gevent]
    working_dir = /srv/web2py
    cmd = /usr/bin/chaussette --fd $(circus.sockets.gevent) --backend gevent gluon.main.wsgibase
    numprocesses = 8
    use_sockets = True

    [socket:gevent]
    host = localhost
    port = 8800

Now circus binds the socket itself and hands its file descriptor to chaussette, and all circus features work - but then I get the conditional-instability issues.

A main difference I can see here is the way web2py is being launched - chaussette needs to be supplied with the WSGI application itself. What I did is give it the *wsgibase* function from *main.py*. But perhaps that's the problem? Maybe something is missing when using gevent? If so, what? Because when using chaussette without the gevent back-end, it works with redis no problem:

    [watcher:no_gevent]
    working_dir = /srv/web2py
    cmd = /usr/bin/chaussette --fd $(circus.sockets.no_gevent) gluon.main.wsgibase
    numprocesses = 8
    use_sockets = True

    [socket:no_gevent]
    host = localhost
    port = 8800

This uses the default http-server in python...

On Tuesday, March 25, 2014 11:43:40 AM UTC+2, Arnon Marcus wrote:

> So, we came to the conclusion that combining the use of gevent as the
> server and redis for caching injects instabilities, with random hangings
> and crashing requests:
> I ruled out the factor of using 0.13.x versions of gevent vs. the 1.x
> release - it happens in both cases.
> I ruled out using redis for session-store vs. just for caching - it
> happens in any combination.
> When I use any form of redis integration with any other server (e.g.
> rocket), everything works smoothly.
> When I use gevent of any version, while not using redis for anything,
> everything works smoothly.
> But when combining the two, using gevent of any version with redis for
> anything (session-store and/or caching), things go haywire...
>
> Our application is mainly cpu-bound, and we use a physical machine (with
> no virtualization) with multiple cores, so we want to take advantage of
> all of them. And since python cannot do multiple things at once (due to
> the GIL), we need to run multiple python processes. And since we rely
> heavily on caching, with keys that get updated periodically by the use of
> the app, we really must have centralized caching in some form. So for
> now, we are temporarily using python's simple http server (we are running
> it via circus+chaussette, btw...), which makes things... well...
> noticeably less snappy...
>
> My suspected culprit is the use of thread-locking in web2py's
> redis-integration code.
> Am I correct?
> How can this be remedied?
> Can I safely remove the thread-locking code entirely from web2py's
> redis modules (for single-threaded servers like gevent)?
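For readers unfamiliar with what chaussette is being handed here: the dotted path on the command line (`gluon.main.wsgibase`) must resolve to a plain WSGI callable. Here is a minimal stand-in with the same `(environ, start_response)` signature - the function body is purely illustrative, not web2py code:

```python
def wsgibase(environ, start_response):
    """Toy stand-in for gluon.main.wsgibase: echoes the request path.

    Any server backend (gevent or the default one) calls this the same
    way, which is why the two [watcher:...] configs differ only in the
    --backend flag.
    """
    body = ("path: " + environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Since both watchers serve the identical callable, the instability being reported must come from how the gevent backend runs it, not from the application object itself.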
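The quoted reasoning (GIL, hence multiple processes for CPU-bound work) can be sketched in a few lines. This is a generic stdlib illustration of the pattern - the function names are invented for the example and have nothing to do with web2py or circus, which achieve the same fan-out via `numprocesses = 8`:

```python
from multiprocessing import Pool


def cpu_bound(n):
    """Toy CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))


def run_parallel(jobs, workers=4):
    """Fan the jobs out over a pool of worker processes.

    Separate processes sidestep the GIL, so CPU-bound jobs can
    actually use multiple cores - threads in one process could not.
    """
    with Pool(processes=workers) as pool:
        return pool.map(cpu_bound, jobs)


if __name__ == "__main__":
    run_parallel([100000, 200000, 300000])
```

Once work is split across processes like this, any shared cache has to live outside the workers - which is exactly the role redis plays in the setup above.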

