Another interesting message....
-----Original Message-----
From: Fernando Padilla [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 26, 2005 5:04 PM
To: Tapestry users
Subject: Re: Memory / Caching Issues
A few weeks ago we had a memory leak that we thought was caused by
Tapestry, but was in fact caused by us confusing Tapestry. We had
overridden the getLocale() method in the BasePage to return a
"new Locale( "en_US" )", which totally confused Tapestry's PagePool, so
that it never recognized the PagePool key and recompiled every page on
every request... we discovered this after we had it in production...
arg. We simply changed it to return "Locale.US", and everything was fine.
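For anyone curious why that mattered: it is plain java.util.Locale
behaviour, nothing Tapestry-specific. A quick stand-alone illustration
(the class name is made up):

import java.util.Locale;

public class LocaleCheck {
    public static void main(String[] args) {
        // The single-argument constructor treats the whole string as a
        // language code and lower-cases it, so this is language "en_us"
        // with no country, which is not the same as English/United States.
        Locale wrong = new Locale("en_US");
        Locale right = Locale.US;

        System.out.println(wrong);                // prints "en_us"
        System.out.println(right);                // prints "en_US"
        System.out.println(wrong.equals(right));  // prints "false"
    }
}

So anything keyed (in whole or in part) on the page's Locale sees a
value that never matches what Tapestry itself resolves, which is
presumably why the PagePool key was never recognized.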
I've been meaning to write to the list what I've learned, so it looks
like this is as good a time as any. So this is how I see it. Could
someone check my logic?
(I've been using Tap 4)
-- GeneratedMethodAccessor --
Because of Java VM issues, every time there is some classloader/bytecode
magic it creates an object that cannot be garbage collected. This will
supposedly be fixed in the next version of Java. (** I read this on a
blog after googling for "GeneratedMethodAccessor"; it was the only
useful thing I found. **)
Thus the number of pages compiled by Tapestry has a permanent impact on
the memory footprint of the application.
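In case it helps anyone reproduce this outside of Tapestry, here is a
tiny stand-alone sketch. Note the ~15-call "inflation threshold" and the
generated class names are Sun/HotSpot implementation details rather than
public API, so treat the specifics as an assumption about that VM:

import java.lang.reflect.Method;

public class InflationDemo {
    public String hello() { return "hi"; }

    public static void main(String[] args) throws Exception {
        Method m = InflationDemo.class.getMethod("hello", new Class[0]);
        Object[] noArgs = new Object[0];
        InflationDemo target = new InflationDemo();

        // On Sun-style VMs, after roughly 15 reflective calls the runtime
        // "inflates" the method: it generates a GeneratedMethodAccessorN
        // class, loaded by its own DelegatingClassLoader. Run this with
        // -verbose:class to watch those classes being defined.
        for (int i = 0; i < 100; i++) {
            m.invoke(target, noArgs);
        }
    }
}

Those generated classes look like the same GeneratedMethodAccessorXXX
classes showing up in Joel's profiler output below.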
-- PagePool --
Tapestry keeps a cache of compiled pages in its PagePool so that it can
reuse them. It does this with a unique key of "page name and locale".
The PagePool implementation compiles a new page if there aren't any in
the pool for that key.
If you have 100 concurrent requests for the same page, then it will
compile 100 copies of the page. If you then have 100 concurrent
requests for the same page, but with a different Locale, it will compile
another 100 copies.
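As I understand it, the pool behaves roughly like the sketch below (the
class and method names are made up for illustration; this is not
Tapestry's actual PagePool code):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Illustration only: pages are pooled per (page name, locale) key, and a
// request that finds no idle page for its key triggers a fresh compile.
// Nothing here ever removes pages from the pool, so it only grows.
class SimplePagePool {
    private final Map idlePagesByKey = new HashMap();  // key -> List of idle pages

    public synchronized Object checkout(String pageName, Locale locale) {
        String key = pageName + "/" + locale;
        List idle = (List) idlePagesByKey.get(key);
        if (idle != null && !idle.isEmpty()) {
            return idle.remove(idle.size() - 1);  // reuse a pooled copy
        }
        // No idle copy (existing copies are all checked out by other
        // requests), so compile a new one; 100 concurrent requests for
        // the same page end up producing 100 copies.
        return compilePage(pageName, locale);
    }

    public synchronized void checkin(String pageName, Locale locale, Object page) {
        String key = pageName + "/" + locale;
        List idle = (List) idlePagesByKey.get(key);
        if (idle == null) {
            idle = new ArrayList();
            idlePagesByKey.put(key, idle);
        }
        idle.add(page);  // the copy is kept for reuse, never released
    }

    private Object compilePage(String pageName, Locale locale) {
        return new Object();  // stand-in for an expensive page compile
    }
}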
-- PagePool lacking --
It seems like the number of pages compiled by Tapestry has a permanent
impact on the memory footprint for another reason as well: Tapestry's
PagePool does not appear to release "idle" pages. (** This is only what
it looks like through a profiler; I haven't confirmed it in the code.
Anyone want to confirm? **) Thus the PagePool will grow in order to
handle a flash flood of concurrent requests, but will not release those
pages after the abnormal load dissipates.
-- PagePool enhancement --
Anyhow, maybe one of us using Tapestry for high-profile production
systems should take some time to make a better PagePool implementation:
- more tuning parameters
- release of unused/idle pages
- a limit on the otherwise unbounded compilation of pages driven by
concurrent requests
- maybe a slow-growth pattern with blocking, limiting concurrent request
handling during flash mobs but avoiding the real possibility of
out-of-memory situations (a rough sketch follows below)
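To make that last point concrete, here is a rough, untested sketch of
the kind of thing I mean. All of the names are invented, and a real
version would have to plug into Tapestry's own pool interface instead of
standing alone:

import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;

// Rough sketch only, not real Tapestry code: a per-key pool that caps
// the number of page copies (blocking extra requests instead of
// compiling more) and evicts copies idle longer than maxIdleMillis.
class BoundedPagePool {
    private static class IdleEntry {
        final Object page;
        final long returnedAt;
        IdleEntry(Object page, long returnedAt) {
            this.page = page;
            this.returnedAt = returnedAt;
        }
    }

    private final int maxPerKey;       // hard cap on copies per key
    private final long maxIdleMillis;  // idle time after which a copy is dropped
    private final Map idleByKey = new HashMap();   // key -> LinkedList of IdleEntry
    private final Map totalByKey = new HashMap();  // key -> Integer (idle + in use)

    BoundedPagePool(int maxPerKey, long maxIdleMillis) {
        this.maxPerKey = maxPerKey;
        this.maxIdleMillis = maxIdleMillis;
    }

    public synchronized Object checkout(String key) throws InterruptedException {
        while (true) {
            LinkedList idle = (LinkedList) idleByKey.get(key);
            if (idle != null && !idle.isEmpty()) {
                return ((IdleEntry) idle.removeLast()).page;   // reuse
            }
            if (total(key) < maxPerKey) {
                totalByKey.put(key, new Integer(total(key) + 1));
                return compilePage(key);   // still under the cap: compile
            }
            wait();                        // at the cap: block for a check-in
        }
    }

    public synchronized void checkin(String key, Object page) {
        LinkedList idle = (LinkedList) idleByKey.get(key);
        if (idle == null) {
            idle = new LinkedList();
            idleByKey.put(key, idle);
        }
        idle.addLast(new IdleEntry(page, System.currentTimeMillis()));
        notifyAll();                       // wake any blocked checkout
    }

    // Meant to run periodically, e.g. from a background sweep thread.
    public synchronized void evictIdlePages() {
        long now = System.currentTimeMillis();
        for (Iterator keys = idleByKey.keySet().iterator(); keys.hasNext();) {
            Object key = keys.next();
            LinkedList idle = (LinkedList) idleByKey.get(key);
            while (!idle.isEmpty()
                    && now - ((IdleEntry) idle.getFirst()).returnedAt > maxIdleMillis) {
                idle.removeFirst();        // drop the oldest idle copy
                totalByKey.put(key, new Integer(total(key) - 1));
            }
        }
    }

    private int total(Object key) {
        Integer n = (Integer) totalByKey.get(key);
        return n == null ? 0 : n.intValue();
    }

    private Object compilePage(Object key) {
        return new Object();               // stand-in for an expensive compile
    }
}

The growth-with-blocking idea is the maxPerKey check: once the cap is
reached, extra concurrent requests wait for a page to come back instead
of triggering yet another compile, and evictIdlePages() lets the pool
shrink again after a flash flood passes.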
Joel Trunick wrote:
> All
>
> We are currently experiencing some bizarre memory issues in a Tapestry
> application. This application uses a lot of reflection via OGNL and
> class generation via CGLIB. What happens is that after a period of
> time, we have so many pinned objects in memory that large memory
> allocations start failing due to fragmentation and we get the infamous
> OutOfMemoryError even though there is plenty of total heap space.
>
> The pinned objects we appear to be leaking are mainly
> GeneratedMethodAccessorXXX CLASSES, although there are some
> GeneratedConstructorAccessorXXX classes as well. Each such class
> appears to have a corresponding DelegatingClassLoader, and all three
> of these classes are generated by the Java 1.4 reflection code. Note
> the GeneratedMethodAccessorXXX are CLASS objects not actual instances.
> HPROF shows that there are no breaks in the XXX numbers in the class
> names (they are sequential from 1 through the highest, like 980 during
> one simplistic run).
>
> However, we also need to point out that there is a single instance of
> each of these classes for all classes with numbers above a certain
> point. For example, in one run, HPROF shows that we have class
> objects for GeneratedMethodAccessor1 through
> GeneratedMethodAccessor980, but we have a single instance of classes
> GeneratedMethodAccessor557 to GeneratedMethodAccessor980, inclusive
> (meaning there are no instances of
> 1 through 554). Now, you might think that we just haven't garbage
> collected these yet, or they are in a cache, etc. However, there are
> three problems with that argument:
>
> a) We have run GC at least 50 times and the lowest-numbered
> GeneratedMethodAccessor object has *never* disappeared, even with
> continued running of the application,
> b) if there was a cache, it certainly should be limited to some more
> sane number like 100, or even 250, and
> c) if there is a cache, there should be some way to control the size
> of it.
>
> The environment we are running in is:
> WebSphere 5.1.1.4
> IBM JVM 1.4.2
> AIX and Windows
> Tapestry 3.0.3
>
> We've been using a combination of Verbose GC, JProfiler and HPROF to
> see what's going on, but we have had problems getting all the details
> we need to successfully diagnose this problem. We have no evidence
> that this is a Tapestry problem, but wanted to poll the forum to see
> if anyone has seen something similar. One thing that we have seen
> with JProfiler is that OGNL appears to have a couple HashMap caches
> that hold on to Method objects, but the Javadoc for OGNL has no
> mention of a cache at all, so we wonder if there is something
> undocumented we could be doing to OGNL to convince it to release these
> Methods. Another thing to note is which classes are part of which
> ClassLoader. Currently ALL dependencies of our app reside in the EAR
> classloader space, and all the application-specific classes (and CGLIB
> generated classes) are part of the web application classloader.
> Finally, the thing that we find most frustrating about this problem is
> its lack of predictability. After
> running the app for a while, we will run through one pass and see the
> classes grow only by 1. We run the exact same pass through the app
> again, and we will see the classes grow by 28. There is no truly
> discernible pattern - even as we traverse from one page to the next in
> our app, a page transition that has caused zero classes to be created
> in the last 15 passes will suddenly create 12 new classes in just one
> pass.
>
> Thanks in advance!
> --m
>
>