Here's another patch that is probably better; it doesn't require an explicit step on your part, since it does a cleanup pass. Checked in as rev 1765. See if this one works out: a test program on this end created 50 threads and the pool maintained a size of 5 connections. Fortunately SQLite is pretty casual/fast about connections, and if it weren't for :memory: connections we could probably just use a straight connect/disconnect module with more or less the same performance.

Index: lib/sqlalchemy/pool.py
===================================================================
--- lib/sqlalchemy/pool.py      (revision 1764)
+++ lib/sqlalchemy/pool.py      (working copy)
@@ -184,10 +184,11 @@
class SingletonThreadPool(Pool):
     """Maintains one connection per each thread, never moving to another thread.  this is
     used for SQLite."""
-    def __init__(self, creator, **params):
+    def __init__(self, creator, pool_size=5, **params):
         Pool.__init__(self, **params)
         self._conns = {}
         self._creator = creator
+        self.size = pool_size
     def dispose(self):
         for key, conn in self._conns.items():
@@ -203,7 +204,16 @@
             del self._conns[thread.get_ident()]
         except KeyError:
             pass
-                   
+   
+    def cleanup(self):
+        for key in self._conns.keys():
+            try:
+                del self._conns[key]
+            except KeyError:
+                pass
+            if len(self._conns) <= self.size:
+                return
+                                   
     def status(self):
         return "SingletonThreadPool id:%d thread:%d size: %d" % (id(self), thread.get_ident(), len(self._conns))
@@ -222,6 +232,8 @@
         except KeyError:
             c = self._creator()
             self._conns[thread.get_ident()] = c
+            if len(self._conns) > self.size:
+                self.cleanup()
             return c
    
class QueuePool(Pool):
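For reference, the capped per-thread cache the patch implements can be sketched standalone, without SQLAlchemy. The class and variable names below are illustrative, not SA's; the 50-thread exercise mirrors the test described above:

```python
import threading

class SingletonThreadCache:
    """Standalone sketch of the patched SingletonThreadPool: one
    connection per thread, with a cleanup pass once the map grows
    past pool_size."""

    def __init__(self, creator, pool_size=5):
        self._conns = {}          # thread id -> connection
        self._creator = creator
        self.size = pool_size

    def cleanup(self):
        # Evict entries (oldest inserted first) until we are back
        # at or under the cap.
        for key in list(self._conns):
            self._conns.pop(key, None)
            if len(self._conns) <= self.size:
                return

    def connect(self):
        key = threading.get_ident()
        try:
            return self._conns[key]
        except KeyError:
            c = self._creator()
            self._conns[key] = c
            if len(self._conns) > self.size:
                self.cleanup()
            return c

# Mirror the test described above: 50 threads, cap of 5.
cache = SingletonThreadCache(object, pool_size=5)
threads = [threading.Thread(target=cache.connect) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# len(cache._conns) settles at or under 5 once the threads finish.
```

Note the eviction is deliberately crude: it may throw out a connection that a live thread will want again, which just costs a reconnect on that thread's next connect() call.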


On Aug 5, 2006, at 2:11 AM, [EMAIL PROTECTED] wrote:

We're about ready to go to production with our new website (built using pylons, sqlalchemy 0.2.6/sqlite, and kid), so I thought I would point apachebench at it to see how things performed.  I was horrified to see memory consumption going through the roof (i.e. >100MB in less than 10,000 requests!).

I did a bunch of profiling, and eventually found that *every* database connection was being leaked.  This was confirmed by adding this line to our base controller's __call__ function, which showed the pool's size growing mercilessly (`sqlite` is a reference to the database engine returned by create_engine):

    sqlite.engine.connection_provider._pool.status()

Well, after tearing my hair out on this one, I finally came across the dispose() function on the connection pool.  Adding this line to our base controller (after all of the request processing has completed) results in proper functionality, and the pool's size never gets above 10 (even when it's getting pounded by apachebench):

    sqlite.engine.connection_provider._pool.dispose()
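If the per-request dispose() turns out to be the right stopgap, it can also live in one place instead of in every controller. A minimal WSGI-style sketch (the `DisposingMiddleware` name is hypothetical; it only assumes an object with a dispose() method, such as the pool above):

```python
class DisposingMiddleware:
    """Hypothetical wrapper: run the request, then dispose the
    pool's thread-local connections so nothing leaks across requests."""

    def __init__(self, app, pool):
        self.app = app
        self.pool = pool

    def __call__(self, environ, start_response):
        try:
            return self.app(environ, start_response)
        finally:
            # e.g. pool = sqlite.engine.connection_provider._pool
            self.pool.dispose()
```

The try/finally ensures the pool is cleaned up even when the wrapped request handler raises.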

Now, we're creating our database engine simply enough:

    sqlite = create_engine(appconfig.get('sqlite.dburi'), strategy='threadlocal')

And I believe our usage of SA is straightforward enough (modelled after zblog's usage, actually):

    query = create_session(bind_to=sqlite).query(MyMappedClass)
    instances = query.select_by(name='blahblah')

We don't hang onto any ORM-ed instances at all -- indeed, if we omit the pool.dispose() call after each request, we can reproduce the connection leak with a no-op request handler (one that only runs a query using SA -- no templating or other logic), so I can't imagine we're somehow causing connections to be retained via loaded mapped class instances.

Given this, I certainly didn't expect that calling the pool.dispose() function would be necessary for the sqlite connections to be cleaned up properly -- indeed, this is either a bug, or we're doing something very wrong.

Thoughts?  Is the pool.dispose() call at the end of each request a reasonable stopgap, or will things break in unexpected ways if we use it?  Also, does anyone have suggestions on what we might be doing wrong that makes our connection pool grow like a weed?

Thanks!

Charlie
[EMAIL PROTECTED]


_______________________________________________
Sqlalchemy-users mailing list
Sqlalchemy-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sqlalchemy-users

