So after doing some more research, and comparing the profiles of one
server (just restarted) to the other server in the loaded state, I've
got a lead, so to speak, but I'm not sure what it means...
On the loaded server about 34% of the time is spent in "readNative",
compared to 80%+ on the unloaded server; on the loaded server something
else is using up 27% of the run time.
The stack trace is:
Which appears to be Hibernate related. Our "ShallowSongBO" is also the
third item on the heap dump, right behind String and char (which is
pretty insane). I can't conceive of why these might be hanging around,
but the ShallowSongBO is loaded regularly from PHP.
Does anyone have any thoughts about what in this combination might be
causing a persistent object to hang around beyond the request's
lifetime, if I'm closing out the Hibernate session correctly?
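For reference, the usual pattern that produces exactly this heap signature even when every Session is closed correctly is an application-scoped collection holding strong references to entities. A minimal, stdlib-only sketch (this ShallowSongBO is just a hypothetical stand-in for the real entity, and the cache is an assumed example, not code from this application):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // Hypothetical stand-in for the real ShallowSongBO entity.
    static class ShallowSongBO {
        final long id;
        final String title;
        ShallowSongBO(long id, String title) { this.id = id; this.title = title; }
    }

    // An application-scoped (static) cache like this keeps every entity
    // strongly reachable forever; closing the Hibernate Session that
    // loaded them does not free them.
    static final List<ShallowSongBO> CACHE = new ArrayList<>();

    static void handleRequest(long id) {
        ShallowSongBO song = new ShallowSongBO(id, "song-" + id);
        CACHE.add(song); // leak: the reference survives the request
    }

    public static void main(String[] args) {
        for (long i = 0; i < 10_000; i++) {
            handleRequest(i);
        }
        // All 10,000 entities remain reachable after the "requests" end --
        // which would show ShallowSongBO high in a heap histogram, right
        // alongside the Strings and char[]s each instance holds.
        System.out.println(CACHE.size());
    }
}
```

If something like this is the cause, the heap dump's reference chains (who holds the ShallowSongBO instances) should point straight at the offending collection.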
Sandeep Ghael wrote:
I work with Andrew and we are still fighting this problem. Thanks for
the advice; we are analyzing the heap dump as you suggest.
To add color to the problem, linked is an image of one of our servers'
load (courtesy of Munin). The other server behaves similarly, but the
two do not manifest the problem in concert (this is a cluster environment
with 2 servers). You can see that the memory usage climbs to the point
where the server begins to encounter high load. The server load will
drop dramatically along with memory usage when the server is
restarted (manually or automatically).
I was reading this Caucho Resin page on performance tuning of the JVM
and have a few questions:
1) Why is it best practice to set "the -Xms and maximum -Xmx heap sizes
to the same value"? Currently we are setting -Xmx at 1500m with -Xms
2) I actually experimented with lowering the max heap
size to 1024M, and the problem seems to occur faster. We thought that
lowering the JVM heap size might prevent OS swapping, if that was the
cause.
3) If -Xss is 4m, and we have 256 max threads, that means we should
account for the OS to commit 4m * 256 = 1G for stack space. Correct?
4) if our machine has 3.3G ram, what is best practice in terms of mem
allocation for the JVM vs the rest of the OS?
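The arithmetic in question 3 can be sketched against the numbers in this thread (the 1500m heap is from question 1; anything beyond heap and stacks, such as permgen and the JVM's own overhead, is deliberately not counted here and would come on top):

```java
public class MemoryBudget {
    // Per-thread stack size (-Xss) times the thread cap gives the native
    // memory the OS may commit for stacks, outside the Java heap.
    static int stackBudgetMb(int stackSizeMb, int maxThreads) {
        return stackSizeMb * maxThreads;
    }

    public static void main(String[] args) {
        int heapMb = 1500;                    // -Xmx1500m from this thread
        int stacksMb = stackBudgetMb(4, 256); // -Xss4m, 256 max threads

        System.out.println(stacksMb);          // 1024 (i.e. ~1G, as in question 3)
        System.out.println(heapMb + stacksMb); // 2524 -- already most of a 3.3G box
    }
}
```

This is why heap plus stacks alone can push a 3.3G machine toward swap before the OS and the rest of the JVM are even accounted for.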
Our conf file below.
- The JVM arguments
- Uncomment to enable admin heap dumps
- Configures the minimum free memory allowed before Resin will force a restart
- Maximum number of threads
- Configures the socket timeout
- Configures the keepalive
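For readers without the attachment, the comments above suggest a resin.conf fragment roughly like the following (element names are from the Resin 3.x sample configuration; the values are this thread's figures where given, placeholders otherwise):

```xml
<server>
  <!-- The JVM arguments -->
  <jvm-arg>-Xmx1500m</jvm-arg>
  <jvm-arg>-Xss4m</jvm-arg>

  <!-- Uncomment to enable admin heap dumps -->
  <!-- <jvm-arg>-agentlib:resin</jvm-arg> -->

  <!-- Configures the minimum free memory allowed before Resin
       will force a restart. -->
  <memory-free-min>24M</memory-free-min>

  <!-- Maximum number of threads. -->
  <thread-max>256</thread-max>

  <!-- Configures the socket timeout -->
  <socket-timeout>65s</socket-timeout>

  <!-- Configures the keepalive -->
  <keepalive-max>128</keepalive-max>
  <keepalive-timeout>15s</keepalive-timeout>
</server>
```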
On Thu, Apr 3, 2008 at 11:27 AM, Scott
Ferguson <[EMAIL PROTECTED]> wrote:
Does the heap dump page work for you in a normal situation, i.e.
On Apr 2, 2008, at 8:21 AM, Andrew Fritz wrote:
> Our production servers have their maximum memory set to 2048m.
> Everything is fine for a while. Eventually the java process ends up
> with all 2048m allocated. At this point server load starts going up and
> response time gets bad. Eventually requests start timing out.
> Restarting the server fixes the problem instantly and everything is
> fine again. Occasionally one of the servers will do this on its own,
> presumably because it reaches the 1m free threshold. That appears
> to be too small a margin, and a restart is needed well before there is
> that little left, so I adjusted the minimum free memory from 1m to 24m.
> That seems like a bandage, though. The heap dump returned a blank
> page, so
> I'm not sure what was going on there. I'm just curious if anyone
> has theories about what might be eating up memory over time.
> We are using Hibernate and PHP and, of course, Java.
before you start running out of memory?
That's really the first place to start looking. The leaking memory
might be obvious from Resin's heap dump page. If it's not enough
information, the next step would be to use a more sophisticated memory
profiler.
resin-interest mailing list