Remi Delon wrote:
On our own servers we've been using CGI connectors (wkcgi, Zope.cgi), which seem fast enough, and of course won't be crashing Apache.


Yeah, but we wanted a somewhat "standard" way of talking to Apache and
most frameworks do come with a small HTTP server, so that works fine for
us and it also completely isolates the process from Apache.

CGI is pretty standard, isn't it? I think of the adapters as little pieces of the frameworks themselves. Or just a simpler, more isolated alternative to mod_*.


Have you looked at Supervisor for long running processes?
http://www.plope.com/software/supervisor/
I haven't had a chance to use it, but it looks useful for this sort of thing.


Well, there are several such supervising tools (daemontools is another
one), but again, they never matched our exact needs. For instance,
sometimes it's OK if a process is down ... it could just be that the
user is working on his site. Also, they usually watch for only one
thing -- that the process stays up -- but there are a million other
things we wanted to watch for. So we just wrote our own scripts.
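The "sometimes it's OK if a process is down" point is exactly what generic supervisors miss. A minimal sketch of a site-aware check, assuming a hypothetical ".maintenance" marker file that a user drops while working on their site (the actual scripts aren't shown here, so this is an illustration, not their implementation):

```python
import os

def should_restart(process_alive, site_dir):
    """Restart only when the process is down AND the user hasn't
    deliberately taken the site offline (hypothetical marker file)."""
    maintenance = os.path.exists(os.path.join(site_dir, ".maintenance"))
    return (not process_alive) and (not maintenance)
```

A real watchdog would run a check like this per site on a timer, alongside whatever other conditions (memory use, response time, ...) it cares about.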

Unlike daemontools, Supervisor is written in Python, which makes it good ;) It also seems like it's meant to address just the kind of situation you're in -- allowing users to restart servers despite having different permissions, monitoring servers, etc.


HTTP does seem like a reasonable way to communicate between servers, instead of all these ad hoc HTTP-like protocols (PCGI, SCGI, FastCGI, mod_webkit, etc). My only disappointment with that technique is that you lose some context -- e.g., whether REMOTE_USER is set, SCRIPT_NAME/PATH_INFO (you probably have to configure your URLs, since they aren't detectable), mod_rewrite's additional environment variables, etc. Hmm... I notice you use custom headers for that (CP-Location), and I suppose other variables could also be passed through... it's just unfortunate, because it significantly adds to the Apache configuration, which is something I try to avoid -- it's easy enough to put in place, but hard to maintain.
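If the proxy is made to forward those values as extra headers, the backend can copy them back into its CGI-style environment. A sketch, assuming hypothetical X-Remote-User / X-Script-Name / X-Path-Info headers set in the Apache config (e.g. via mod_headers); none of these header names are standard:

```python
def restore_proxy_environ(environ):
    """Move hypothetical forwarded headers back into the CGI-style
    variables a proxied backend would otherwise never see."""
    mapping = {
        "HTTP_X_REMOTE_USER": "REMOTE_USER",
        "HTTP_X_SCRIPT_NAME": "SCRIPT_NAME",
        "HTTP_X_PATH_INFO": "PATH_INFO",
    }
    for header, var in mapping.items():
        if header in environ:
            environ[var] = environ.pop(header)
    return environ
```

Which only confirms the complaint: each variable forwarded this way is one more line of Apache configuration to maintain.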


The CP-Location trick is not needed (I should remove it from this page
as it confuses people).
Have a look at the section called "What are the drawbacks of running
CherryPy behind Apache?" on this page:
http://www.cherrypy.org/wiki/CherryPyProductionSetup
It summarizes my view on this (basically, there aren't any real drawbacks if you're using mod_rewrite with Apache 2).

Does Apache 2 add an X-Original-URI header or something? I see the Forwarded-For and Forwarded-Host headers, but those only describe the request itself -- leaving out REMOTE_USER, SCRIPT_NAME, PATH_INFO, and maybe some other internal variables.


I've thought that a forking server whose parent monitored its children carefully would be nice -- a kind of per-process monitor. It would mean I'd have to start thinking multiprocess, reversing all my threaded habits, but I think I'm willing to do that in return for really good reliability.
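That forking pattern can be sketched in a few lines of POSIX-only Python; the worker function here is a stand-in for a real request-handling loop:

```python
import os

def run_child(fn):
    """Fork, run fn in the child, and return its exit status to the parent."""
    pid = os.fork()
    if pid == 0:            # child: do the work, then exit cleanly
        fn()
        os._exit(0)
    _, status = os.waitpid(pid, 0)   # parent: block until the child dies
    return os.WEXITSTATUS(status)

def supervise(fn, max_restarts=3):
    """Per-process monitor: re-fork the worker whenever it dies abnormally."""
    restarts = 0
    while run_child(fn) != 0 and restarts < max_restarts:
        restarts += 1
    return restarts
```

A real pre-fork server would keep several children alive at once (looping on os.wait() over a pool) rather than one at a time, but the parent/child division of labor is the same.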


I'm still very much in the "thread pool" camp :-)
I've got CherryPy sites that have run in thread-pool mode for months without any stability or memory-leak problems.
If your process crashes or leaks memory, then something is wrong with your program in the first place, and the right fix is not to switch to a multiprocess model.
Finally, if you want a monitoring process, it can be a completely separate process, which lets you keep the "thread pool" model for your main process.
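That separate-monitor idea is easy to sketch: the server keeps its thread pool, and an unrelated process just probes it over HTTP now and then. Here fetch_status and restart_server are injected stand-ins (in real use they would wrap an HTTP client and an init script); the names are illustrative, not from any actual tool:

```python
def watchdog_pass(fetch_status, restart_server):
    """One monitoring cycle: probe the server and restart it if the
    probe fails outright or returns a server error."""
    try:
        ok = fetch_status() < 500   # status code from a test request
    except OSError:                 # connection refused, timeout, etc.
        ok = False
    if not ok:
        restart_server()
    return ok
```

Because the monitor shares nothing with the server process, a deadlocked or leaking server can't take the monitor down with it.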

That's true -- cleanly killing a threaded app can be hard, though, at least in my experience. The other issue I worry about is scaling down while still having separation -- like if I want a simple form->mail script, how do I deploy that? A separate threaded process is really heavyweight. But is it a good idea to put it in a process shared with another application? This is what leads me in the direction of multiple processes, even though I've been using thread pools for most of my applications in the past without problem.


But above all, I think the main reason Python frameworks aren't more commonly supported by the big hosting providers is that the market for these frameworks is very small (apart from Zope/Plone). For each of the "smaller" frameworks (CherryPy, Webware, SkunkWeb, Quixote, ...) we host fewer than 50 sites, so the big hosting providers simply won't bother learning and supporting these frameworks for such a small market.


If they could support all of them at once, do you think it would be more interesting to hosting providers?


Well, if all frameworks came in nicely packaged RPMs and they all integrated the same way with Apache (mod_wsgi, anyone?), I guess that would be a big step forward ... But you'd still have the problem of all the Python third-party modules that people need ...

Would mod_scgi be sufficient? It's essentially equivalent to mod_webkit, mod_skunkweb, and PCGI, while avoiding the hassle of FastCGI. In theory FastCGI is the way to do all of this, but despite my best efforts I can never get it to work. Well, "best efforts" might suggest more work than I've actually put into it, but enough effort to leave me thoroughly annoyed ;)
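Part of SCGI's appeal is how little protocol there is: the CGI-style headers go NUL-separated inside a single netstring, followed by the raw request body. A minimal encoder, following my reading of the SCGI spec:

```python
def scgi_request(headers, body=b""):
    """Encode one request as SCGI bytes: a netstring of NUL-separated
    header pairs (CONTENT_LENGTH first, "SCGI" = "1" required),
    followed by the raw body."""
    pairs = [("CONTENT_LENGTH", str(len(body))), ("SCGI", "1")]
    pairs += [(k, v) for k, v in headers.items()
              if k not in ("CONTENT_LENGTH", "SCGI")]
    payload = b"".join(k.encode() + b"\0" + v.encode() + b"\0"
                       for k, v in pairs)
    # netstring framing: <length>:<payload>,
    return str(len(payload)).encode() + b":" + payload + b"," + body
```

Compare that to FastCGI's binary record framing and connection multiplexing, which is most of the hassle being avoided.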


--
Ian Bicking  /  [EMAIL PROTECTED]  /  http://blog.ianbicking.org
_______________________________________________
Web-SIG mailing list
Web-SIG@python.org
Web SIG: http://www.python.org/sigs/web-sig