On 02/20/2012 12:56 PM, Jens Dueholm Christensen (JEDC) wrote:

Lately we are seeing increased blocking behaviour from log4j when Resin 
performs the nightly rollover of logfiles.

We’re running Resin Pro 3.1 (license #1013826) in a pre-3.1.11 snapshot 
(resin-pro-3.1.s110225), due to bug #4349 and a bit of waiting before 3.1.11 
was released.
Updating to 3.1.12 is planned, but the changelog 
(http://www.caucho.com/resin-3.1/changes/changes.xtp is the best I could find?) 
does not mention any fix for our current problem, so I’m looking for any 
insights or advice.

I'd need to look at the log4j code to understand what it's doing.

I don't think that wait() should be related to Resin (other than timing) but I'd need to look at their code to be certain, and I can't think of any logging that would help. (Logging the logging code is tricky :)

3.1.13 should be out next week, by the way.

-- Scott

We’ve configured resin to perform rollovers of this logfile:

       <stdout-log path="/usr/local/www/xact/logs/resin-web-stdout.log"
         timestamp="[%Y-%m-%d %H:%M:%S.%s]" rollover-period="1D"/>

In our log4j configuration we log to stdout with:

    <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
       <param name="Target" value="System.out"/>
       <layout class="org.apache.log4j.PatternLayout">
          <param name="ConversionPattern" value="%d{ISO8601} [%-20.20t] %5p %c{1} - %m - %x%n"/>
       </layout>
    </appender>

    <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
       <appender-ref ref="CONSOLE"/>
    </appender>

    <root>
       <priority value="warn"/>
       <appender-ref ref="ASYNC"/>
    </root>


Some time after midnight when Resin performs the rollover, the JVM becomes 
unresponsive and a stacktrace shows multiple threads hanging, and we have to 
manually restart (it doesn’t get picked up by the watchdog, and can/will hang 
for hours).

A complete JVM stacktrace is attached to this mail.

All threads that are attempting to log are waiting for access to the object 
blocked by this thread:

"http-$1663241944" daemon prio=10 tid=0x00007f8804005800 
nid=0x2a40 in Object.wait() [0x00007f87a08ed000]
    java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(Native Method)
            at java.lang.Object.wait(Object.java:485)
            at org.apache.log4j.AsyncAppender.append(AsyncAppender.java:195)
            - locked <0x00000005ba882928>  (a java.util.ArrayList)
            - locked <0x00000005b8d6db00>  (a org.apache.log4j.AsyncAppender)
            at org.apache.log4j.Category.callAppenders(Category.java:206)
            - locked <0x00000005ba6f9e38>  (a org.apache.log4j.spi.RootLogger)
            at org.apache.log4j.Category.forcedLog(Category.java:391)
            at org.apache.log4j.Category.info(Category.java:666)

It seems like something never calls notifyAll() on the ArrayList, and as a 
result our world grinds to a halt.
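The hang matches the classic bounded-buffer wait()/notifyAll() pattern that 
AsyncAppender uses internally: append() waits on the buffer while it is full 
and depends entirely on the dispatcher thread to wake it after draining. As a 
rough illustration (a minimal sketch in our own words, not log4j's actual code; 
the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the producer/dispatcher handshake behind an async appender.
// If the dispatcher thread dies, or its write to stdout blocks (e.g. during
// a log rollover), producers stuck in append() are never notified -- which
// is exactly the WAITING state the stack trace shows.
class BoundedBuffer {
    private final List<String> buffer = new ArrayList<>();
    private final int capacity;

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    // Producer side: blocks while the buffer is full.
    void append(String event) throws InterruptedException {
        synchronized (buffer) {
            while (buffer.size() >= capacity) {
                // Relies on drain() running notifyAll(); if that never
                // happens, this wait() hangs forever.
                buffer.wait();
            }
            buffer.add(event);
            buffer.notifyAll(); // wake the dispatcher
        }
    }

    // Dispatcher side: drains the buffer and wakes blocked producers.
    List<String> drain() {
        synchronized (buffer) {
            List<String> batch = new ArrayList<>(buffer);
            buffer.clear();
            buffer.notifyAll(); // wake producers stuck in append()
            return batch;
        }
    }
}
```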

https://issues.apache.org/bugzilla/show_bug.cgi?id=38137#c16 (comment #16) has 
the same kind of stacktrace as we are seeing, and the comments in #17 do 
point towards Resin.

The last line in logfile resin-web-stdout.log.20120218 (before rotation) has 
the timestamp 2012-02-18 23:59:59,718.
The first line in the new logfile resin-web-stdout.log.20120219 has the 
timestamp 2012-02-19 01:30:48.878, which was when we did a restart (we 
actually waited to see if the problem would correct itself, but alas).
None of the other logfiles (watchdog, jvm, resin stdout and stderr) contain 
any indication of what is wrong.

If it’s not Resin causing the problem, I guess we’ll just have to switch our 
entire log4j setup to use org.apache.log4j.rolling.RollingFileAppender (part of 
log4j companions), but we’d prefer to stick with the existing setup.
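If we do end up switching, a time-based setup with the companions' 
RollingFileAppender might look something like this (a sketch only -- the 
FileNamePattern path is our own, and I haven't tested this exact config 
against our setup):

```xml
<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
   <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
      <param name="FileNamePattern"
             value="/usr/local/www/xact/logs/resin-web-stdout.%d{yyyy-MM-dd}.log"/>
   </rollingPolicy>
   <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d{ISO8601} [%-20.20t] %5p %c{1} - %m - %x%n"/>
   </layout>
</appender>
```

That would take Resin's stdout rollover out of the picture entirely, since 
log4j would then own the file and the rotation.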

Are there any interesting Resin loggers (and loglevels) that could show what 
is going on during the rollover?

Any thoughts or possible hints are appreciated.

Jens Dueholm Christensen
Rambøll Survey IT

resin-interest mailing list
