I'm trying to keep my Django-powered CMS online even when I have to
restart the app or take it down for maintenance.

My rendered pages are generally quite cacheable; they only change when
someone edits a page, posts a comment, etc., which happens relatively
rarely.

I have a very crude page caching mechanism: I save the rendered HTML
along with the last-modified date and the full URL in a table in the
database (or wherever). Whenever anything on the site changes, I just
set a site-wide last-modified datetime. Then whenever a page gets
loaded, I check the last-modified date/time of the cached page against
the site-wide one and either return the cached copy or re-render the
page, save the new copy and return it. The re-render almost always
happens when the admin/content manager/author checks the page they
just edited, so by the time a "normal" user comes along the new page
is already in the cache. This already gets me a near-100% cache-hit
rate.

(It is actually slightly more complicated than this, but that is not
important because most of the logic has to do with figuring out when
it is safe to write something to the cache.)
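
In code, the core check is roughly this (a simplified sketch, not my
actual code; CachedPage, SiteConfig and get_last_modified() are
made-up names):

    # Simplified sketch of the freshness check.  CachedPage, SiteConfig
    # and get_last_modified() are hypothetical names.
    from django.http import HttpResponse

    from myapp.models import CachedPage, SiteConfig


    def cached_or_rendered(request, render_page):
        site_modified = SiteConfig.get_last_modified()  # site-wide datetime

        try:
            page = CachedPage.objects.get(url=request.path)
        except CachedPage.DoesNotExist:
            page = None

        if page is not None and page.last_modified >= site_modified:
            # Nothing has changed since this copy was cached: serve it.
            return HttpResponse(page.html)

        # Stale or missing: re-render, update the cache and return.
        html = render_page(request)
        page = page or CachedPage(url=request.path)
        page.html = html
        page.last_modified = site_modified
        page.save()
        return HttpResponse(html)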

So basically... it is very easy to write a tiny/dumb little WSGI app
that just checks the cached-pages table for a given URL, skips the
freshness check and returns the cached copy if there is one. I want to
use this little app as an automatic fallback in case the main one is
down for maintenance. It won't be able to keep the /admin/ interface
up, but in that case it can just return a "Don't worry. Your site is
still up and running, but you won't be able to manage your site for
the next 5 minutes while we do maintenance." type error page.
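
The whole fallback app could be as small as this (untested sketch; I'm
hitting sqlite directly just to show it doesn't even need Django, and
the table/column names and database path are made up):

    # Minimal fallback WSGI app: serve straight from the cached-pages
    # table, no freshness check.  Table/column names and the database
    # path are illustrative.
    import sqlite3

    MAINTENANCE_PAGE = (b"Don't worry. Your site is still up and running, "
                        b"but you won't be able to manage your site for the "
                        b"next 5 minutes while we do maintenance.")


    def application(environ, start_response):
        path = environ.get('PATH_INFO', '/')
        conn = sqlite3.connect('/var/www/cache.db')
        try:
            row = conn.execute(
                'SELECT html FROM cached_pages WHERE url = ?', (path,)
            ).fetchone()
        finally:
            conn.close()

        if row is not None:
            start_response('200 OK', [('Content-Type', 'text/html')])
            return [row[0].encode('utf-8')]

        # No cached copy (e.g. /admin/): explain we're in maintenance.
        start_response('503 Service Unavailable',
                       [('Content-Type', 'text/plain')])
        return [MAINTENANCE_PAGE]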

Anyway... obviously I need to be running a proxy that will try the
main app first and, if that's down (nothing listening on that port),
immediately fall back to the backup backend. It should be able to do
this without health-check polling, which might take seconds to notice
that the backend is gone and then seconds more to notice that it is
back, producing errors in the meantime.

I tried Varnish, but couldn't really get it to do what I want, and I
also had unrelated intermittent errors that I just couldn't debug or
fix; in the end I realised I don't really need a caching proxy at this
stage. I looked through the documentation and feature lists of various
other reverse proxies and couldn't find anything that supports this
straight out of the box. I'm on the verge of writing my own, but I
keep thinking there must be something out there already.
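
If I did write my own, it would be something like this dumb fallback
proxy (untested sketch: the ports are made up, request bodies and
headers aren't forwarded, and a real version would have to strip
hop-by-hop response headers):

    # Dumb fallback proxy: try the main backend, and if nothing is
    # listening on its port, immediately retry against the backup.
    # Ports are made up; only the bare request line is forwarded.
    import http.client

    BACKENDS = [('127.0.0.1', 8000),   # main app
                ('127.0.0.1', 8001)]   # fallback cache server


    def application(environ, start_response):
        path = environ.get('PATH_INFO', '/')
        if environ.get('QUERY_STRING'):
            path += '?' + environ['QUERY_STRING']

        for host, port in BACKENDS:
            conn = http.client.HTTPConnection(host, port, timeout=5)
            try:
                conn.request(environ['REQUEST_METHOD'], path)
                resp = conn.getresponse()
            except ConnectionRefusedError:
                continue  # nothing on this port: try the next backend
            body = resp.read()
            conn.close()
            start_response('%d %s' % (resp.status, resp.reason),
                           resp.getheaders())
            return [body]

        start_response('502 Bad Gateway',
                       [('Content-Type', 'text/plain')])
        return [b'All backends are down.']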


My setup is currently:

the internet -> lighttpd -> Django app via FastCGI (flup)


I'm thinking of moving to:

the internet -> lighttpd -> some proxy ->
    (1) a pure Python WSGI/web server, or
    (2) the backup pure Python little web server
(so the proxy tries 1, then 2)


In all these cases lighttpd handles all the static file requests and
proxies the dynamic ones along (either via plain HTTP or FastCGI). I'm
not so worried about keeping lighttpd up and running at this stage -
it is very stable. I'm more concerned about being able to take the app
down gracefully.

I'm looking at switching the flup-based process over to Spawning
(http://pypi.python.org/pypi/Spawning/), which sounds like it could
keep my app up and running during normal/routine restarts, but it
won't help with slightly longer (+/- 5 minute) maintenance periods
where I have to take the app down to be safe.

Does anyone have any suggestions? How do other people handle this?
