On Jun 11, 1:56 pm, "[email protected]" <[email protected]> wrote:
> On Jun 11, 5:17 am, Oveek <[email protected]> wrote:
>
> > The upshot is that on either MySQL or Postgres with default settings,
> > you are going to hit your max connection limit in under 200 requests.
> > If the max_connections setting was sky high, eventually memory would
> > be the limiting factor. On my test system it takes between 8 and 18
> > new TiddlyWeb requests / new database connections to consume one
> > additional MB of memory.
>
> I can start doing similar testing on my end with MySQL and hopefully
> we'll meet in the middle.
Oveek, have a look at this commit:
http://github.com/tiddlyweb/tiddlyweb/commit/8c3c8371718df8275350b3c32164a0a3f7f2da4d
and install TiddlyWeb 0.9.39 (just released) and see if that changes
the behavior at all.
Basically, what I did to get to this point was try to determine why the
store was not going out of scope. While doing some logging I
discovered that the StoreSet middleware is only initialized once per
process, but every time __call__ is called a new Store is created. So
I changed it so that only one is created.
What this should do is ensure that only one sqlstore is created per
process and reused across requests, which means the connection pool
is reused as well.
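In sketch form, the pattern looks like this (a minimal illustration
only; the names StoreSet and Store come from the message above, but
the real TiddlyWeb signatures and internals differ):

    class Store(object):
        """Stand-in for the storage layer holding a connection pool."""
        def __init__(self):
            self.pool = object()  # imagine an expensive DB pool here

    class StoreSet(object):
        """WSGI middleware: initialized once per process."""
        def __init__(self, application):
            self.application = application
            self.store = None

        def __call__(self, environ, start_response):
            # Before the fix, a new Store (and so a new connection
            # pool) was created here on every request. Now the store
            # is created lazily once and reused for every request.
            if self.store is None:
                self.store = Store()
            environ['tiddlyweb.store'] = self.store
            return self.application(environ, start_response)

Since the middleware instance lives for the life of the process, any
pooling the store does is shared across requests instead of being
thrown away each time.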
But I don't have a good testing scenario, so I'm not sure.
BTW: I've also committed a change to the sql store to ensure
tiddler_written is called in tiddler_delete and tiddler_put. This
shouldn't impact your testing, but it was important for the use of
cachinghoster on peermore.
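Roughly, that change has this shape (a hypothetical sketch; the class
name and method bodies here are made up, not the actual sqlstore
code):

    class SQLStore(object):
        def tiddler_written(self, tiddler):
            # hook that lets things like cachinghoster notice changes
            pass

        def tiddler_put(self, tiddler):
            # ... write the tiddler to the database ...
            self.tiddler_written(tiddler)  # fire the hook on put

        def tiddler_delete(self, tiddler):
            # ... remove the tiddler from the database ...
            self.tiddler_written(tiddler)  # and on delete as well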