I've got more details after some investigating. I should have mentioned earlier that I'm running this test setup under Linux.
First, an important amendment to my statement about connections "rapidly eating up available memory": in reality, available memory is eaten up very slowly once Apache has allocated memory for all its listener processes. To clarify, the first few requests to TiddlyWeb cause large jumps in memory usage because Apache allocates memory for its sleeping listener processes, around 8 or 9 MB per process in my case. Once the available listeners are all active, you don't see the large jumps after each request; every subsequent request causes only a very small but steady increase in memory usage (due to the creation of new database connections). The increase is so small that it would likely take thousands of requests before memory actually runs out, but long before that the database will hit its max_connections limit (which defaults to 100 in Postgres). The upshot is that on either MySQL or Postgres with default settings, you are going to hit your max connection limit in under 200 requests. If max_connections were set sky high, memory would eventually be the limiting factor: on my test system it takes between 8 and 18 new TiddlyWeb requests (i.e. new database connections) to consume one additional MB of memory.

Now, about resolving this problem: these TiddlyWeb requests essentially need to close the door on their database connections on the way out. I have a hack in place, but it's got some problems.

First, another correction. The Session.remove() method is only available on a different session type (SQLAlchemy's scoped_session); in our case, closing the connection involves disposing the engine. I found out how to do this on sqlalchemy's Google Group here:

http://groups.google.com/group/sqlalchemy/browse_thread/thread/e040a586c8ffe30?tvc=2&q=%09[sqlalchemy]+How+to+totally+close+down+db+connection

The second post in that thread has a list of calls that can be used to fully close down a connection opened for a session.

As a test, I added a method, _close_connection(), to the sql Store() class, then added a call to _close_connection() at the end of the various handlers like recipe_get(), recipe_put(), tiddler_get(), etc. Adding the calls had the desired effect of closing the connections immediately after each request.

Still, the question remains: where is the best place to do this connection closing? It looks to me like there are generally two ways to handle it: one, in the sqlstore, where each handler does its own closing; or two, outside the store, with a single call to the connection-closing code. Doing it in the store (the way I did for testing purposes) has some problems, but I don't have time to finish this right now, so I'll come back later and pick up where I left off. In the meantime, some rough sketches below.
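First, a quick aside on that Session.remove() correction, for anyone following along: remove() exists on SQLAlchemy's scoped_session registry, which is not what the sql store uses, so it doesn't apply here. For reference only, a minimal scoped_session setup where remove() does work would look roughly like this (the sqlite URL is just a stand-in for the store's real dburi):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine('sqlite:///store.db')  # stand-in dburi
    Session = scoped_session(sessionmaker(bind=engine))

    session = Session()   # do the request's database work with this
    # ... queries, adds, commits ...

    # at the end of the request: closes the session and returns its
    # connection to the engine's pool (the pool itself stays alive)
    Session.remove()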
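Now, roughly the shape of my test hack (option one, in the store). This is a sketch, not the exact code: _close_connection() is my name for it, and I'm assuming here that the store keeps its session and engine around as attributes (the real attribute names in the sql store may differ). The calls follow the list from the second post in that sqlalchemy thread:

    # sketch: inside the sql Store class
    def _close_connection(self):
        # release the session and whatever connection it is holding
        self.session.close()
        # tear down the engine's connection pool entirely, so no idle
        # connection is left behind when the request finishes
        self.engine.dispose()

    # each handler then closes on its way out, e.g.:
    def recipe_get(self, recipe):
        recipe = self._load_recipe(recipe)  # hypothetical existing load logic
        self._close_connection()
        return recipe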
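And option two, doing the closing in one place outside the store. I'm thinking something along the lines of a small WSGI wrapper that runs after the rest of the stack is done with the store. This one is untested; it assumes the store is exposed in the environ under 'tiddlyweb.store' the way the other wrappers see it, and that it has the _close_connection() method from above:

    class ConnectionCloser(object):
        """WSGI middleware sketch: close the store's database
        connection once the wrapped app has produced its response."""

        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            try:
                # materialize the response so all store work is finished
                return list(self.application(environ, start_response))
            finally:
                store = environ.get('tiddlyweb.store')
                if store is not None and hasattr(store, '_close_connection'):
                    store._close_connection()

The in-store version means remembering to add the call to every handler, which is part of why it feels fragile; the wrapper version keeps the closing in one spot, at the cost of buffering the response.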
On Jun 11, 12:41 am, Oveek <[email protected]> wrote:
> On Jun 10, 1:58 am, tony <[email protected]> wrote:
>
> > So much to learn, so little time
>
> My feelings exactly.
>
> On Jun 9, 6:51 pm, "[email protected]" <[email protected]> wrote:
>
> > Good thing you exist. Thanks. It's fixed and the new code is on github.
>
> Thanks, I echo that sentiment (the first part) at you and the TiddlyWiki/Web community.
>
> So my rocky trip through database land continues.
>
> The test machine I'm using at work has 256 MB of RAM so I've had a close eye on the number of running processes and memory usage. With the basic TiddlyWeb environment set up, I started adding, deleting, and editing tiddlers and discovered a problem...
>
> Every single request to TiddlyWeb is creating a new connection to Postgres and the connections are rapidly eating up available memory. I did pretty much all my testing in Postgres, but a quick check using MySQL revealed the same behavior.
>
> I reduced the max_connections cap to 10 in Postgres to test further. Since any request (GET, PUT, ...) creates a new connection, the connection limit is hit within a minute or so of doing stuff in TiddlyWeb. Edit a tiddler 5 times and there will be 5 new processes. Hitting the connection limit causes sqlalchemy to crash with a friendly message from the database:
>
> FATAL: connection limit exceeded for non-superusers
>
> At first I was confused why existing connections weren't being reused. I looked at the sqlalchemy docs and by default create_engine() uses connection pooling with pool_size = 5, so there should never be more than 5 idle processes listening for connections. When I first noticed the problem I had around 40 separate open connections/processes, most of them idle.
>
> Now I have a good idea why the connections are proliferating. On every request TiddlyWeb initializes the wrappers, and the Store's __init__ method is among those invoked. That means every request calls the Store's __init__ method and creates a new session and database connection with its own connection pool. The problem is that completed requests leave behind idle sessions that never get reused, because subsequent requests go and create new connections anyway.
>
> After skimming the sqlalchemy docs on Sessions a bit, I think there are a couple of options. One option I'm confident will work is to clean up the session created in each request by calling Session.remove() after the request is done. An alternative (not sure if it's possible) would be to try to connect a session to an existing connection created by a previous request.
>
> Considering the Session.remove() approach, the question is where to make the call to remove? It needs to be called after the database accesses are done. One way might be to put the call in a wrapper that gets called after all Store-related work is done. I thought it might be possible to use a destructor, a __del__ method, to do it, but I found the destructor is not called after each request. That itself raises some questions.
>
> Anyway, before getting into too much detail about any possible remedies, I'll wait for your take on the situation.
>
> Interestingly, I'm getting the impression that you're not seeing this problem with the SQLite database on peermore, because the site seems to be doing okay. I think I saw somewhere that SQLite, being a single file, handles multiple connections differently, so that might account for it. Either that or your server has a lot of memory and the incremental increase in memory usage hasn't had an effect yet. If you are getting new connections on each request, you may be in for an unpleasant surprise somewhere down the line.
>
> On Jun 10, 1:58 am, tony <[email protected]> wrote:
>
> > On Jun 9, 2:20 am, Oveek <[email protected]> wrote:
>
> > > Nice work. Any particular reason you used cygwin on Windows?
>
> > I figured I should test on as many configurations as I have to my avail. Plus, unfortunately, computing is imprisoned by MS products in many corporations and I didn't know how to work with twanager in DOS or Powershell. Another consideration is that I carry the sql store on a USB flash drive, which affords some portability.
> > I didn't try this with the text store, but I found, surprisingly, that TiddlyWeb with sql works on Chrome 2, Safari 4, and FF3 with no need for jar files. :-)
>
> > The only bits missing for me are the ability to import/export/sync with sqlite, learning up on revisions/roles/policies/multistore, and possibly interoperability with TiddlyWiki and Cocoa Touch.
>
> > So much to learn, so little time
>
> > be fun!
>
> > Best,
> > tony
