On Oct 28, 2011, at 12:53 PM, Mike Orr wrote:
> I have a few Pylons applications that share a SQL access log routine.
> Yesterday I migrated it from MySQL to PostgreSQL, and I'm getting a
> bunch of errors like this:
>
>>> connection = self.contextual_connect(close_with_result=True)
> Module sqlalchemy.engine.base:1229 in contextual_connect
>>> return self.Connection(self, self.pool.connect(),
>>> close_with_result=close_with_result, **kwargs)
> Module sqlalchemy.pool:142 in connect
>>> return _ConnectionFairy(self).checkout()
> Module sqlalchemy.pool:304 in __init__
>>> rec = self._connection_record = pool.get()
> Module sqlalchemy.pool:161 in get
>>> return self.do_get()
> Module sqlalchemy.pool:631 in do_get
>>> raise exc.TimeoutError("QueuePool limit of size %d overflow %d reached,
>>> connection timed out, timeout %d" % (self.size(), self.overflow(),
>>> self._timeout))
> TimeoutError: QueuePool limit of size 5 overflow 10 reached,
> connection timed out, timeout 30
As the problem appeared when moving from MySQL to PG, this almost certainly has
nothing to do with pool size and everything to do with Postgres' stricter
transactional and locking behavior. My advice would be to run "ps -ef | grep post"
on your database server and look for any lines that say "idle in transaction".
Those are PG connections sitting inside an open transaction while doing no work;
they hold whatever locks they've acquired and keep their pooled connections
checked out. A healthy system would normally have none of those around long
enough to actually show up in a ps listing.
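
If you'd rather check from inside the database than from ps, pg_stat_activity
reports the same thing. Here's a minimal sketch, run through the same kind of
engine; the connection URL is made up, and the procpid / current_query column
names are what PostgreSQL 8.4/9.x use:

    from sqlalchemy import create_engine

    # Hypothetical URL; substitute your own host, database and credentials.
    engine = create_engine("postgresql://user:pass@dbhost/mydb")

    conn = engine.connect()
    try:
        # Sessions idling inside an open transaction show up with
        # current_query = '<IDLE> in transaction'.
        rows = conn.execute(
            "SELECT procpid, usename, xact_start "
            "FROM pg_stat_activity "
            "WHERE current_query = '<IDLE> in transaction'"
        )
        for row in rows:
            print(row)
    finally:
        conn.close()  # hand the connection straight back to the pool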
If you do find some, you'd then need to fix your applications so that they
commit (or roll back) and release their connections immediately once they've
completed their work within a given transaction.
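
In SQLAlchemy terms that usually means pairing every connect() with a commit
(or rollback) and a close(), instead of letting a transaction or an open result
hang around. A rough sketch of what the log routine might look like; the engine
URL, table and column names here are made up for illustration:

    from sqlalchemy import create_engine, text

    # Hypothetical URL; substitute your own host, database and credentials.
    engine = create_engine("postgresql://user:pass@dbhost/mydb")

    def write_access_log(path, status, duration):
        """Insert one access-log row and return the connection immediately."""
        conn = engine.connect()
        trans = conn.begin()
        try:
            conn.execute(
                text("INSERT INTO access_log (path, status, duration) "
                     "VALUES (:path, :status, :duration)"),
                path=path, status=status, duration=duration,
            )
            trans.commit()   # don't leave the transaction open
        except:
            trans.rollback()
            raise
        finally:
            conn.close()     # connection goes back to the pool here

If the apps use the ORM's Session rather than raw connections, the same idea
applies: call commit() or rollback() and then close() (or remove() on a scoped
session) at the end of each request.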