I think it may have been 6676016:

http://java.sun.com/javase/6/webnotes/6u10.html

We were able to reproduce this at the time through heavy Lucene indexing plus our internal document pre-processing logic, which churned a lot of objects. We still see similar issues with 10, but much more rarely. Going to 13 may shed some light; you could be tickling another, similar bug, but I didn't see anything obvious.

C


On May 9, 2009, at 12:30 AM, Stefan Will wrote:

Chris,

Thanks for the tip ... However, I'm already running 1.6.0_10:

java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)

Do you know of a specific bug # in the JDK bug database that addresses this?

Cheers,
Stefan


From: Chris Collins <ch...@scoutlabs.com>
Reply-To: <core-user@hadoop.apache.org>
Date: Fri, 8 May 2009 20:34:21 -0700
To: "core-user@hadoop.apache.org" <core-user@hadoop.apache.org>
Subject: Re: Huge DataNode Virtual Memory Usage

Stefan, there was a nasty memory leak in 1.6.x before 1.6.0_10. It manifested itself during major GC. We saw this on Linux and Solaris, and things improved dramatically after an upgrade.
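
For what it's worth, a quick way to confirm which VM build is actually running and to watch collector activity is the standard java.lang.management API; a minimal sketch (my own illustration, nothing Hadoop-specific assumed):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the VM build plus per-collector counts and cumulative times,
// handy for spotting runaway major-GC activity before or after an upgrade.
public class GcReport {
    public static void main(String[] args) {
        System.out.println("VM: " + ManagementFactory.getRuntimeMXBean().getVmName()
                + " " + ManagementFactory.getRuntimeMXBean().getVmVersion());
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}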

C
On May 8, 2009, at 6:12 PM, Stefan Will wrote:

Hi,

I just ran into something rather scary: one of my datanode processes, which I'm running with -Xmx256M and a maximum number of Xceiver threads of 4095, had a virtual memory size of over 7GB (!). I know that the VM size on Linux isn't necessarily equal to the actual memory used, but I wouldn't expect it to be an order of magnitude higher either. I ran pmap on the process, and it showed around 1000 thread stack blocks of roughly 1MB each (which is the default size on the 64-bit JDK). The largest block was 3GB in size, and I can't figure out what it is for.
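
A rough sanity check on where that address space can go, assuming nothing beyond the standard java.lang.management API: each thread on the 64-bit JDK reserves about 1MB of stack virtual memory by default, so the thread count alone explains a large chunk of the VSZ. A minimal sketch (my own illustration, not from the original thread):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Estimates virtual memory reserved for thread stacks in the current JVM.
// Assumes the 64-bit default of 1MB per thread; pass the actual -Xss value
// in KB as the first argument to override it.
public class StackReservationEstimate {
    public static void main(String[] args) {
        long stackKb = args.length > 0 ? Long.parseLong(args[0]) : 1024;
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int live = threads.getThreadCount();
        long reservedMb = (live * stackKb) / 1024;
        System.out.println(live + " live threads x " + stackKb + " KB stack ~= "
                + reservedMb + " MB of reserved virtual memory");
    }
}

Plugging in the pmap numbers by hand: 1000 threads at the 1MB default is roughly 1GB of virtual memory, and 4095 Xceivers at peak would reserve about 4GB; lowering -Xss is one way to shrink that reservation.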

Does anyone have any insights into this? Anything that can be done to prevent this other than restarting the DFS regularly?

-- Stefan


