[I sent this yesterday morning, but I don't think it made it to the list. Trying once more.]

Right off the bat, that's a big stack size to be using.

I'm assuming you're on a 32-bit machine? If so, then the max addressable space of your process is 2G, which includes the java heap plus the overhead needed for (what I call) native memory allocation, thread stacks included. It doesn't matter that your machine has 3.3G of RAM if the process is 32-bit.

So, with a 1500M heap, you're only leaving about 500M for the JVM and other native memory allocation. Personally, we run our resin servers (which are on Windows) with a 128K stack size with no problems and, when we were still on 32-bit, it bought us a lot of time while we finished upgrading to 64-bit.

It's best practice to set -Xms and -Xmx to the same value (and to use -server, which is passed to resin as -J-server as the first parameter) because it tells java to grab the entire needed amount of heap right when it starts.
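As a sketch, in the same &lt;jvm-arg&gt; style as the conf further down this thread (the sizes here are illustrative for a 32-bit box, not a recommendation for your particular app):

```xml
<!-- Illustrative only: -Xms equal to -Xmx reserves the whole heap at
     startup, and a small -Xss keeps per-thread native memory low. -->
<jvm-arg>-server</jvm-arg>
<jvm-arg>-Xms1024m</jvm-arg>
<jvm-arg>-Xmx1024m</jvm-arg>
<jvm-arg>-Xss128k</jvm-arg>
```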

What I surmise might be happening to you is that your java process starts with a 64M heap (if running in client mode) or 128M (if running in server mode) because you're not specifying the ms value. But each thread is consuming 4M of native memory, so when your heap tries to grow toward that 1500M limit, it can't get there because too much of the 2G of addressable space is already being consumed by your threads.
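The back-of-the-envelope budget, using the numbers from this thread:

```shell
# 32-bit process: roughly 2048 MB of addressable space.
# A 1500 MB heap leaves this much for everything else:
echo $((2048 - 1500))        # 548 MB

# 256 threads at a 4 MB stack apiece want:
echo $((256 * 4))            # 1024 MB -- far more than what's left

# The same thread count at a 128 KB stack:
echo $((256 * 128 / 1024))   # 32 MB
```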

Rob

On Apr 7, 2008, at 09:18 , Sandeep Ghael wrote:
Hi Scott,

I work with Andrew and we are still fighting this problem. Thanks for the advice; we are analyzing the heap dump as you suggested.

For added color on the problem, linked is an image of one of our servers' load (courtesy of Munin). The other server behaves similarly, but the two do not manifest the problem in concert (this is a clustered environment with 2 servers). You can see that the mem usage climbs to the point where the server begins to encounter high load. The server load drops dramatically along with mem usage when the server is restarted (whether manually or automatically).

http://sandeepghael.com/ServerMemoryPattern.jpg

I was reading this Caucho resin page on perf tuning of the jvm and have a few questions:
http://www.caucho.com/resin-3.0/performance/jvm-tuning.xtp

1) Why is it best practice to set the "-Xms and maximum -Xmx heap sizes to the same value"? Currently we are setting -Xmx at 1500m with -Xms undefined.

2) I actually experimented with lowering the max heap size to -Xmx1024m, and the problem seems to occur faster. We thought that lowering the JVM heap size might prevent OS swap, if that was the problem.

3) If -Xss is 4m and we have 256 max threads, that means we should account for the OS committing 4m * 256 = 1G for stack space. Correct?

4) If our machine has 3.3G of RAM, what is best practice in terms of memory allocation for the JVM vs the rest of the OS?

Our conf file below.

regards,
Sandeep

(clustered environment)

            <!--
               - The JVM arguments
            -->
            <jvm-arg>-Xmx1500m</jvm-arg>
            <jvm-arg>-Xss4m</jvm-arg>
            <jvm-arg>-Xdebug</jvm-arg>
            <jvm-arg>-Dcom.sun.management.jmxremote</jvm-arg>

            <!--
                     - Uncomment to enable admin heap dumps
                     - <jvm-arg>-agentlib:resin</jvm-arg>
                    -->

            <watchdog-arg>-Dcom.sun.management.jmxremote</watchdog-arg>

            <!--
               - Configures the minimum free memory allowed before Resin
               - will force a restart.
            -->
            <memory-free-min>24M</memory-free-min>

            <!-- Maximum number of threads. -->
            <thread-max>256</thread-max>

            <!-- Configures the socket timeout -->
            <socket-timeout>65s</socket-timeout>

            <!-- Configures the keepalive -->
            <keepalive-max>128</keepalive-max>
            <keepalive-timeout>15s</keepalive-timeout>


On Thu, Apr 3, 2008 at 11:27 AM, Scott Ferguson <[EMAIL PROTECTED]> wrote:

On Apr 2, 2008, at 8:21 AM, Andrew Fritz wrote:

> Our production servers have their maximum memory set to 2048m.
> Everything is fine for a while. Eventually the java process ends up
> with
> all 2048m allocated. At this point server load starts going up and
> response time gets bad. Eventually requests start timing out.
>
> Restarting the server fixes the problem instantly and everything is
> good
> again. Occasionally one of the servers will do this on its own,
> presumably because it reaches the 1m free threshold. That appears to
> be too small a margin, and a restart is needed well before there is
> only 1m left, so I adjusted the minimum free memory from 1m to 24m.
>
> That seems like a bandage though. The heap dump returned a blank
> page so
> I'm not sure what was going on there. I'm just curious if anyone has
> any
> theories about what might be eating up memory over time.
>
> We are using Hibernate and PHP and of course java.

Does the heap dump page work for you in a normal situation, i.e.
before you start running out of memory?

That's really the first place to start looking.  The leaking memory
might be obvious from Resin's heap dump page.  If it's not enough
information, the next step would be to use a more sophisticated memory
profiler.
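If Resin's heap dump page keeps coming back blank, the JDK's own command-line tools can give a first look at the heap; a sketch assuming a Sun JDK 6, where jps and jmap ship in the JDK's bin directory (the PID 12345 is a placeholder for your actual Resin JVM):

```shell
# List running JVM PIDs with their main classes
jps -l

# Histogram of live objects by class -- a quick view of what is
# accumulating on the heap (replace 12345 with the Resin PID)
jmap -histo:live 12345 | head -30

# Full binary heap dump for offline analysis, e.g. with jhat
jmap -dump:live,format=b,file=resin-heap.hprof 12345
```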

-- Scott

>
>
> Andrew
>
>
>
> _______________________________________________
> resin-interest mailing list
> resin-interest@caucho.com
> http://maillist.caucho.com/mailman/listinfo/resin-interest


