Okay I've been doing some testing. I think your latest commit in the
StoreSet has hit the problem at its source, but I'm seeing some things
that still aren't making sense.

Going back to the earlier posts first...

On Jun 11, 8:38 pm, "[email protected]" <[email protected]> wrote:

> One option would be for the wsgi middleware that established the
> tiddlweb.store entry in the environ to do session removal on the way
> back up the stack.

I was thinking that, but wasn't sure where to do it. The session-closing
pseudocode you suggested in the StoreSet does work--the majority of
connections clean up properly--but there are some cases where
_close_connection mysteriously never gets called and connections
marked 'idle in transaction' are left behind. I don't think we need to
go that route, so I'll leave it for now.
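
For reference, here's a minimal sketch of what I understand that
middleware-level cleanup to look like. The `close_session` hook is
hypothetical--whatever the real store exposes for cleanup would go
there--and `tiddlyweb.store` is the environ key from your earlier mail:

```python
class SessionCleanup(object):
    """WSGI middleware that finalizes the per-request store on the
    way back up the stack, whether or not the app raised."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        try:
            return self.application(environ, start_response)
        finally:
            # Pull the store the inner middleware put in the environ
            # and close its session, if it has one.
            store = environ.get('tiddlyweb.store')
            if store is not None and hasattr(store, 'close_session'):
                store.close_session()
```

One caveat: if the application returns a lazy generator rather than a
list, the `finally` fires before the response body is consumed; the
fully correct place for cleanup in that case is a `close()` method on
a wrapper around the response iterable, per the WSGI spec.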

> When I was writing the first few rounds of sql.py and adjusting
> the tests to work with it, I found all sorts of problems with how
> things are scoped, so here we seem to have another one.

The scope and lifetime of objects are something I've been wondering
about for a while. Based on some of the logging I did, I got the
impression that none of the objects in the stack persist across
requests, which I found kind of odd.

> What ought to happen is that the store object stays in scope all the
> way out to the top of the WSGI stack and then gets finalized. But...
> it wouldn't surprise me if a reference is being kept somewhere.

It turns out that __del__ is not a good indicator of an object's
status: if I understand the Python docs correctly, there is no
guarantee it will ever be called, even at interpreter exit. Do you
know a way to find out when an object actually gets finalized?
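
One approach I've since come across: a weakref callback fires reliably
when the referent is actually collected, unlike __del__. A small
sketch (the Store class here is just a stand-in for the real store
object):

```python
import weakref

class Store(object):
    """Stand-in for the object whose finalization we want to observe."""
    pass

events = []

store = Store()
# The callback runs when the referent is garbage collected; unlike
# __del__, a weakref callback is a dependable signal of collection.
ref = weakref.ref(store, lambda r: events.append('finalized'))

del store
# In CPython, refcounting reclaims the object as soon as the last
# strong reference disappears, so the callback has already run here
# (assuming nothing else holds a reference and no reference cycles).
```

If the callback never fires during a request's teardown, something up
the stack is still holding a strong reference to the store.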

> Are you doing your tests in mod_python, mod_wsgi or the built in
> server? They each may have different handling of the WSGI environment.

All my testing has been under mod_wsgi. In the very beginning I was
using CherryPy, but that was quite a while ago.

> The perfect solution is one where a single connection pool of suitable
> size is created and used by all the subsequent requests. This should
> be possible with proper scoping of some variables. A singleton of some
> type holding onto session and engine stuff is probably the way to go.

I agree this is the best solution. One point worth noting: I think a
connection pool has a higher average memory overhead than closing
sessions per request, since pooled connections stay open between
requests.
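
To make sure I understand the scoping you mean, here's the bare shape
of it: the shared object lives at module level and is created lazily
on first use, so every request reuses the same pool. The names are
illustrative (make_engine stands in for something like SQLAlchemy's
create_engine), not TiddlyWeb's actual API:

```python
# Module-level singleton: created once, shared by all later requests.
_engine = None

def get_engine(make_engine):
    """Return the shared engine, creating it on first call only."""
    global _engine
    if _engine is None:
        _engine = make_engine()
    return _engine
```

Under threaded mod_wsgi we'd presumably want to guard the first
creation with a lock, but the scoping is the essential part.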

> I am using sqlite on peermore and I think it basically doesn't use
> connections at all: it's just file operations with fancy semaphores
> for concurrency handling. I think. The server has 168MB of RAM so it's
> not that there's tons of space.

That's good news for me. How much of that RAM is usually being used?
I'm curious how well things will hold up under load. Based on the
tests on my laptop, I'm going to have to do considerable tuning: a
heavy load with connection pooling enabled causes memory consumption
to spike sharply when there are many concurrent requests.
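
The knob I expect matters most is a hard cap on pool size, so excess
requests wait for a free connection instead of opening new ones. A toy
illustration of the principle (for the real thing, SQLAlchemy's
create_engine takes pool_size and max_overflow arguments, if I'm
reading its docs right):

```python
import queue

class BoundedPool(object):
    """Toy bounded pool: at most max_size connections ever exist, which
    caps connection memory no matter how many requests arrive at once."""

    def __init__(self, factory, max_size=5):
        self._pool = queue.Queue(max_size)
        # Pre-create exactly max_size connections; none are added later.
        for _ in range(max_size):
            self._pool.put(factory())

    def acquire(self):
        # Blocks until a connection is free rather than opening a new one.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)
```

With an unbounded strategy, memory grows with peak concurrency; with a
bound, it grows only to max_size, at the cost of requests queueing.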

You received this message because you are subscribed to the Google Groups 
"TiddlyWikiDev" group.