On 7/25/06, Jesse Kuhnert <[EMAIL PROTECTED]> wrote:

P.S. Without knowing completely how hivemind/this is handling pooling my
first gut instinct reaction is that it might not necessarily be a pooling
problem? I'm not saying it isn't, but if you look at it in really
simplistic
terms the pool size should only reflect the highest number of concurrent
requests you have had come in.

If you see your pool sizes growing and growing incrementally over time
with
no direct correlation to # of concurrent requests then you may have
something.


Yes, this is exactly what is happening... It works fine until the heap reaches
the -Xmx limit (this seems to happen once the app has hit its maximum number of
concurrent requests), and then the VM starts spending all its time doing
garbage collection. I couldn't figure out why I never saw the out-of-memory
errors during the quiet times, but now this makes a lot of sense: the memory
gets loaded during the day (peak time) and is never released afterwards. My
other guess is that I didn't see this problem with Tapestry 3 because we were
using a filter that queued requests for each user.

In this particular case, I don't see anything complex... This is a Tapestry
pool (not part of HiveMind) and doesn't seem to be connected to anything else
outside.

In PageSource:

    ...
    IPage result = (IPage) _pool.get(key);

    if (result == null)
    {
        // ... the page gets created

It would be really easy to use WeakReferences in the pool. I wouldn't replace
the pool completely; I would rather add a "WeakPool" to Infrastructure where
all the objects are held by weak references. I'm going to give it a shot... I
can probably just override the PageSource HiveMind part and supply my own pool
reference.

Even test-wise, there isn't much to do to support weak references... The
current ObjectPool API can already return null for a key, and all users of the
pool handle that by creating a new instance.
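A minimal sketch of what such a WeakPool could look like (the class name and
get/store methods here are hypothetical illustrations, not actual Tapestry or
HiveMind API), assuming the same contract as ObjectPool where get() may return
null and the caller then creates a fresh instance:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Hypothetical sketch: pooled objects are held only via WeakReference,
// so the garbage collector can reclaim them under memory pressure instead
// of the pool growing to the peak concurrent-request count forever.
public class WeakPool {

    private final Map<Object, LinkedList<WeakReference<Object>>> _map =
            new HashMap<Object, LinkedList<WeakReference<Object>>>();

    // Returns a live pooled object for the key, or null if none survive.
    // A null return means the caller should create a new instance, exactly
    // as with the existing ObjectPool contract.
    public synchronized Object get(Object key) {
        LinkedList<WeakReference<Object>> list = _map.get(key);
        if (list == null)
            return null;
        while (!list.isEmpty()) {
            Object result = list.removeFirst().get();
            if (result != null)
                return result; // referent still alive: reuse it
            // referent was collected; drop the stale entry and keep looking
        }
        return null;
    }

    // Returns an object to the pool, wrapped in a WeakReference so the
    // pool itself never keeps it alive.
    public synchronized void store(Object key, Object value) {
        LinkedList<WeakReference<Object>> list = _map.get(key);
        if (list == null) {
            list = new LinkedList<WeakReference<Object>>();
            _map.put(key, list);
        }
        list.add(new WeakReference<Object>(value));
    }
}
```

The point is that callers need no changes: a collected page simply looks like
a pool miss, and the existing "create on null" path handles it.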

Thanks,

Henri.
