A trick that I use to troubleshoot PermGen memory leaks involves using
jconsole, jmap, and jhat.  I attach to Resin with jconsole, then
reload the webapp a few times to trigger the PermGen leak.  Then I
stop the webapp completely and go to the memory tab to trigger some
full GCs.  What I found is that I need to trigger a few full GCs back
to back for the garbage collector to actually clean up the PermGen.
At this point I take a heap dump with jmap and look at it with jhat.
The webapp has been stopped and the PermGen has been cleaned, so if
there is no leak, classes loaded by the webapp classloader should no
longer be in the heap dump.  If you still see them, there is a PermGen
leak.  In jhat, the detail page for any leaked class has a link to the
classloader that loaded it.  Follow that link and you will find a link
to the reference chains from the rootset (excluding weak refs) for the
leaked classloader.  That page will show you where the leak is.

Bill

On Fri, Jan 9, 2009 at 3:22 AM, Mattias Jiderhamn <mj-li...@expertsystems.se> wrote:

> Scott Ferguson wrote (2008-11-26 16:53):
> >
> > On Nov 24, 2008, at 11:08 PM, Mattias Jiderhamn wrote:
> >
> >> I'm still battling this PermGen leak and frankly I'm really starting
> >> to doubt that I know what I'm doing anymore. I'd be very happy if
> >> anyone would care to explain that to me...
> >>
> >> Since my last post Scott and I have discussed potential class loader
> >> leaks and some of them have been fixed in the 3.1.8 release. It seems
> >> there is (at least) one leak that didn't get fixed in 3.1.8. I have
> >> made a quick and dirty patch to avoid that leak. If anyone would care
> >> to try, the patch (which includes a few other things probably fixed
> >> in 3.1.8 already) can be found here:
> >> http://jiderhamn.se/resin-leak.patch
> >>
> >> However, even with that patch, it seems there is still some kind of
> >> PermGen leak that eventually leads to OutOfMemoryError. I have
> >> created a small application with the sole purpose of detecting these
> >> leaks. If anyone would care to try, it can be found here (sources
> >> included): http://jiderhamn.se/leak.war
> >> You will need to add some JARs to the WEB-INF/lib directory;
> >> preferably a couple of large ones like spring.jar and hibernate.jar
> >> (don't use Resin JARs though).
> >> Then just drop the WAR in a clean installation of Resin 3.1.8
> >> (preferably patched with the patch above).
> >> Hit http://...:nn/leak (once is enough)
> >> Force a redeploy by either deleting the webapps/leak dir or touching
> >> leak.war
> >> Hit http://...:nn/leak again
> >> Repeat the last two steps for as long as you'd like
> >>
> >> What you should see - or at least what I see on one Linux machine and
> >> one Windows machine - is the (ClassLoadingMXBean) loadedClassCount
> >> and the (MemoryPoolMXBean) Used Perm Gen steadily increasing (while
> >> the unloadedClassCount remains pretty stable) for every redeploy,
> >> which indicates a classloader leak. But I just can't find that leak.
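> >>
> >> For reference, those counters come straight from the standard
> >> java.lang.management API; a minimal sketch, assuming a Sun JVM where
> >> the pool name contains "Perm Gen":
> >>
> >> import java.lang.management.ClassLoadingMXBean;
> >> import java.lang.management.ManagementFactory;
> >> import java.lang.management.MemoryPoolMXBean;
> >>
> >> public class PermGenStats {
> >>   public static void main(String[] args) {
> >>     ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
> >>     System.out.println("loadedClassCount   = " + cl.getLoadedClassCount());
> >>     System.out.println("unloadedClassCount = " + cl.getUnloadedClassCount());
> >>     // the permanent generation shows up as a memory pool whose exact
> >>     // name depends on the collector ("CMS Perm Gen", "PS Perm Gen", ...)
> >>     for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
> >>       if (pool.getName().contains("Perm Gen"))
> >>         System.out.println(pool.getName() + " used = "
> >>             + pool.getUsage().getUsed() + " bytes");
> >>     }
> >>   }
> >> }
> >>
> >> The same values are visible as MBean attributes in jconsole.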
> >
> > Thanks.
> >
> > Right now, our code base is a bit stuck due to the WebBeans/OSGi
> > upgrade (for Resin 4.0.0, was 3.2.2).  Once that's cleaned up and I
> > can put up a snapshot I can take a look.
>
> Scott Ferguson wrote (2008-12-29 20:41):
> > I've just made an early Resin 4.0 snapshot available.
>
> Does this mean we can assume you will be able to look more closely at
> the memory leaks in 3.1 sometime soon...?
>
>  /Mattias
>
>
>
> >
> > -- Scott
> >>
> >>
> >> Now, here are the things really bugging me:
> >> 1. If I keep redeploying over and over, I will eventually get closer
> >> and closer to the Perm Gen Max (in some instances, I have seen the
> >> following behaviour instead turn up when Used reaches Init, if Init is
> >> large enough). Then suddenly the unloadedClassCount is increased, but
> >> not with all the unused classes - only about the amount of one
> >> redeployment. Redeploy again, and it will increase another step.
> >> Meanwhile, the loadedClassCount remains pretty stable, since we are
> >> loading as many new classes as are unloaded. It's as if there was a
> >> FIFO queue/LRU cache of classloaders, so that the oldest one is
> >> garbage collected once there is not enough space for a new one.
> >> However, after a while there is (presumably) not enough space to
> >> create the new classloader before the old one is garbage collected,
> >> and I get OutOfMemoryError somewhere in the middle. Sometimes I am
> >> actually able to recover from this error by waiting for the GC to do
> >> its job and then just trying again.
> >>
> >> 2. Now I attached YourKit, looking for dangling classloaders as of
> >> the initial post. I found none. In fact, the Classes without Instances
> >> inspection only shows the classes in the added JARs from the last
> >> redeployment, so when tracing back to GC root, it goes via the
> >> current EnvironmentClassLoader which is correct. There are also no
> >> excessive instances of EnvironmentClassLoader. Hmm... Now wait a
> >> minute. Look at the total number of java.lang.Class objects. It does
> >> not match with the totalLoadedClassCount. In fact, the total number
> >> of classes found by YourKit is about the same as the
> >> totalLoadedClassCount on the very first hit of the application,
> >> before any redeployments. So from YourKit's point of view, there is no
> >> classloader leak! But why then isn't the PermGen space reclaimed?
> >> This led me to wonder if there was some kind of JVM bug. (As a side
> >> note, I have yet to try with some other profiler)
> >>
> >> 3. So, I modified the test application (see commented out code in
> >> MyServlet.java) to load the classes of the JARs in a regular
> >> java.net.URLClassLoader which is then immediately thrown away. No
> >> leak. loadedClassCount is immediately decreased (and
> >> unloadedClassCount increased), as is the Used Perm Gen. That is, it
> >> behaves the way we want the redeployment to behave.
> >> Ok, let's take one step down in the classloader hierarchy and load all
> >> the classes via a disposable com.caucho.loader.DynamicClassLoader. No
> >> leak.
> >> So, what if I load them with a
> >> com.caucho.loader.EnvironmentClassLoader which is then destroy()ed
> >> and left for garbage collection. The leak is back!
> >> EnvironmentClassLoader has now applied to become prime suspect. In
> >> order to track things down I subclassed EnvironmentClassLoader inside
> >> my application and planned to make changes there. Well ... before any
> >> changes, the leak is nowhere to be found.
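> >>
> >> For the curious, the throwaway-URLClassLoader variant of the test is
> >> roughly the following (a simplified sketch, not the exact code in
> >> MyServlet.java):
> >>
> >> import java.io.File;
> >> import java.net.URL;
> >> import java.net.URLClassLoader;
> >> import java.util.Enumeration;
> >> import java.util.jar.JarEntry;
> >> import java.util.jar.JarFile;
> >>
> >> public class DisposableLoaderTest {
> >>   static void loadAndDiscard(File jar) throws Exception {
> >>     // load every class in the JAR with a disposable loader
> >>     URLClassLoader loader =
> >>         new URLClassLoader(new URL[] { jar.toURI().toURL() }, null);
> >>     JarFile jarFile = new JarFile(jar);
> >>     try {
> >>       for (Enumeration<JarEntry> e = jarFile.entries(); e.hasMoreElements(); ) {
> >>         String name = e.nextElement().getName();
> >>         if (!name.endsWith(".class"))
> >>           continue;
> >>         String className =
> >>             name.substring(0, name.length() - ".class".length()).replace('/', '.');
> >>         try {
> >>           loader.loadClass(className);
> >>         } catch (Throwable ignored) {
> >>           // classes with unresolvable dependencies are fine for this test
> >>         }
> >>       }
> >>     } finally {
> >>       jarFile.close();
> >>     }
> >>     // no reference to the loader survives this method, so its classes
> >>     // should be unloaded (and Perm Gen reclaimed) after a few full GCs
> >>   }
> >> }
> >>
> >> Swapping the URLClassLoader for an EnvironmentClassLoader that is
> >> destroy()ed after the loop is what brings the leak back, as described
> >> above.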
> >>
> >> What is going on here?????
> >> Anything that might help my sanity would be appreciated.
> >>
> >>  /Mattias Jiderhamn
> >>
> >>
> >> Mattias Jiderhamn wrote (2008-11-05 06:43):
> >>> In support of this latest theory is the fact that YourKit shows two
> >>> GC roots for the HttpRequest. Apart from the
> >>> com.caucho.server.port.TcpConnection._request reference, the request
> >>> is a root itself by being on the stack of a thread (http--8080-...).
> >>> This could indicate a thread currently waiting in the
> >>> com.caucho.server.port.TcpConnection.run() method, where the
> >>> ServerRequest of the parent is also a local variable.
> >>>
> >>> Even if the request is reused, I believe the Invocation or the
> >>> ClassLoader of the invocation needs to be reset somehow, to release
> >>> the webapp classloader.
> >>> I can help try out a proposed fix to see if it solves the problem (=
> >>> feel free to mail me off list). I might even give it a shot myself.
> >>>
> >>>  /Mattias
> >>
> >
>
_______________________________________________
resin-interest mailing list
resin-interest@caucho.com
http://maillist.caucho.com/mailman/listinfo/resin-interest
