On Tue, 2008-01-29 at 22:07 -0600, James Bennett wrote:
> On Jan 29, 2008 10:04 PM, Mark Green <[EMAIL PROTECTED]> wrote:
> > Just curious, what's the state of connection pooling in django?
> 
> My personal opinion is that the application level (e.g., Django) is
> the wrong place for connection pooling and for the equivalent "front
> end" solution of load balancing your web servers: the less the
> application layer has to know about what's in front of and behind it,
> the more flexible it will be (since you can make changes without
> having to alter your application-layer code).
> 
> So, for example, connection pooling for Postgres would best be handled
> by a dedicated pooling connection manager like pgpool; Django can
> connect to pgpool as if it's simply a Postgres database, which means
> you don't have to go specifying pooling parameters at the application
> level.

Hm, that doesn't sit so well with me.
I agree on the load balancer front, but the overhead of all
those TCP connections (and of pgpool managing them) worries me a bit.

Furthermore, and much more seriously, I see no way to ensure
graceful degradation in case of overload.

Let's assume we run a local pgpool instance alongside Django on each
machine, and Django goes through the local pgpool for all database access.
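
For concreteness, the Django side of that setup is just a settings
change, since pgpool speaks the Postgres wire protocol -- a rough
sketch, assuming pgpool listens on its default port 9999 (database
name, user and password are made up):

    # settings.py -- point Django at the local pgpool rather than
    # at Postgres directly; pgpool looks like Postgres to Django.
    DATABASE_ENGINE   = 'postgresql_psycopg2'
    DATABASE_NAME     = 'mydb'        # made up
    DATABASE_USER     = 'django'      # made up
    DATABASE_PASSWORD = 'secret'      # made up
    DATABASE_HOST     = '127.0.0.1'   # the local pgpool instance
    DATABASE_PORT     = '9999'        # pgpool's port; Postgres stays on 5432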

Now, what happens when the database, for whatever reason, becomes
too slow to keep up with requests?

I see two options:

a) pgpool is configured without a limit on inbound connections;
   the hanging connections between Django and pgpool will
   eventually exhaust the total number of allowed TCP
   connections for the Django user, or even system-wide.

   Django will no longer be able to open new database connections
   and will display nasty error pages to users. Worse yet, if
   Django and the web server are running under the same UID, the
   web server will likely no longer be able to accept new inbound
   connections, and users will get cryptic error messages straight
   from their browsers.

b) pgpool is configured with a limit on inbound connections;
   pgpool will hit the limit and refuse subsequent attempts from
   Django, which in turn displays nasty error pages to users
   (the relevant knob is sketched below).
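
For reference, the limit in (b) is pgpool's num_init_children
setting. A sketch of the relevant pgpool.conf bits -- parameter
names are from memory (pgpool-II) and the backend host is made up,
so check the docs for your version:

    # pgpool.conf (pgpool-II)
    port              = 9999               # what Django connects to
    backend_hostname0 = 'db.example.com'   # the real Postgres host (made up)
    backend_port0     = 5432
    num_init_children = 32                 # hard cap on concurrent clients;
                                           #  further attempts queue briefly,
                                           #  then get refused
    max_pool          = 4                  # cached backend connections per child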

In order to get the desired behaviour of Django slowing down
gracefully instead of spitting out error pages, I think we'd have
to teach Django to retry database connections. But that would
open a whole new can of worms, such as risking duplicated
requests when users hit reload, etc...
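
Just to show the shape of what I mean -- a hypothetical retry
helper, not anything Django actually has, with psycopg2 assumed
as the driver:

    import time
    import psycopg2

    def connect_with_retry(dsn, attempts=5, delay=0.5):
        # Keep trying to connect, backing off between failures.
        for attempt in range(attempts):
            try:
                return psycopg2.connect(dsn)
            except psycopg2.OperationalError:
                if attempt == attempts - 1:
                    raise           # give up -> error page after all
                time.sleep(delay)
                delay *= 2          # back off instead of hammering pgpool

Note that while this sleeps and retries, the user's browser
connection stays open; if they hit reload, the original request may
still complete behind their back -- which is exactly the
duplicated-request problem.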

So, long story short, I see no way out of this without
proper connection pooling built right into Django.
Or am I missing something?
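
(To clarify what I mean by "proper pooling": a fixed-size pool
whose get() blocks while all connections are busy, so excess
requests queue up and wait instead of erroring out. A rough Python
sketch, all names made up:)

    import Queue   # Python 2 stdlib
    import psycopg2

    class BlockingPool(object):
        def __init__(self, dsn, size=10):
            self._pool = Queue.Queue(maxsize=size)
            for _ in range(size):
                self._pool.put(psycopg2.connect(dsn))

        def get(self):
            # Blocks until a connection is free -- this wait is
            # exactly the graceful slowdown I'm after.
            return self._pool.get(block=True)

        def put(self, conn):
            self._pool.put(conn)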


-mark


