Re: Logging stopped, but Log4j2-AsyncLogger is RUNNABLE and maybe stuck in ThrowableProxy.toExtendedStackTrace

2016-07-06 Thread Ralph Goers
If you know what the cause is, please add that information to the Jira issue. 
I’d really like to understand what is causing this.

Ralph

> On Jul 6, 2016, at 7:06 AM, Leon Finker  wrote:
> 
> Based on Google searches, it turns out to be a classic class loading 
> deadlock. I found one thread that was inside a class's static initializer 
> (<clinit>), which then eventually ended up logging indirectly and getting 
> blocked on:
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
>   at 
> com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
>   at 
> com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
>   at com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:450)
>   at com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:315)
>   at 
> org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageInfo(AsyncLoggerDisruptor.java:203)
> ..
> 
> And the AsyncLogger thread is blocked trying to walk the above thread's stack 
> and getting deadlocked on the same class that was in <clinit>. It's a large 
> codebase, so it's hard to review all the libraries and their uses. But now we 
> know what to look out for if it happens again. This can happen with async 
> logging. 
> 
> Basically, one thread will be in RUNNABLE state but its dump header will say 
> "in Object.wait()", and another thread will be in <clinit>.
> 
> On 2016-07-05 20:10 (-0400), Remko Popma wrote: 
>> The StackOverflow link may still be relevant if the problem is caused by 
>> class loading (by different classloaders?) from different threads. 
>> 
>> Leon, is there any other thread in the thread dump that is loading a class?
>> 
>> Sent from my iPhone
>> 
>>> On 2016/07/06, at 6:31, Ralph Goers wrote:
>>> 
>>> The stack overflow reference is using Log4j 1, so that isn’t a match.
>>> 
>>> The fact that you are in ExtendedThrowablePatternConverter implies that you 
>>> are logging an exception. But I don’t know why you would be getting stuck 
>>> in there. While formatting the exception is slow, it shouldn’t be that slow.
>>> 
>>> Ralph
>>> 
 On Jul 5, 2016, at 2:03 PM, Leon Finker  wrote:
 
 This looks similar:
 http://stackoverflow.com/questions/15543521/mixed-usage-of-log4j-and-commons-logging-causes-a-class-loading-deadlock
 
 But we don't use any other logging framework besides slf4j, log4j2 and 
 log4j2 bridges.
 
> On 2016-07-05 15:14 (-0400), "Leon Finker" wrote: 
> Hi,
> 
> Using log4j2 runtime args with 2.6.1:
> -DAsyncLogger.RingBufferSize=512
> -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
> 
> On CentOS 6.7 and Java 1.8.0_60.
> 
> We noticed that at some point the process stopped logging to the log 
> file (during startup). Across 7 thread dumps taken over 8 minutes, the 
> AsyncLogger thread is RUNNABLE but always in the stack trace below, and 
> all the other threads are TIMED_WAITING to publish new log events in 
> RingBuffer.publishEvent. Has anyone seen this before? There were no log 
> entries for at least 25 minutes until we killed the process and restarted 
> it without problems. If AsyncLogger were progressing properly, something 
> would appear in the log file (RollingRandomAccessFile is configured with 
> immediateFlush=true). It's hard to know how deep the stack was in 
> ThrowableProxy.toExtendedStackTrace, but the thread is not BLOCKED, 
> it's RUNNABLE. Also, it doesn't look like there is a way to limit the 
> stack depth for toExtendedStackTrace?
> 
> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 
> tid=0x7ff870c7b000 nid=0x79f3 in Object.wait() [0x7ff839142000]
> java.lang.Thread.State: RUNNABLE
>  at java.lang.Class.forName0(Native Method)
>  at java.lang.Class.forName(Class.java:348)
>  at 
> org.apache.logging.log4j.core.util.Loader.initializeClass(Loader.java:241)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:487)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:135)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)
>  at 
> org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:316)
>  at 
> org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)
>  at 
> 

Re: Logging stopped, but Log4j2-AsyncLogger is RUNNABLE and maybe stuck in ThrowableProxy.toExtendedStackTrace

2016-07-06 Thread Remko Popma
Leon, I would like to see this myself. Is it possible to attach the full thread 
dump, or to create a Jira ticket with it?

Sent from my iPhone

> On 2016/07/06, at 23:06, Leon Finker  wrote:
> 
> Based on Google searches, it turns out to be a classic class loading 
> deadlock. I found one thread that was inside a class's static initializer 
> (<clinit>), which then eventually ended up logging indirectly and getting 
> blocked on:
>at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
>at 
> com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
>at 
> com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
>at com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:450)
>at com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:315)
>at 
> org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageInfo(AsyncLoggerDisruptor.java:203)
> ..
> 
> And the AsyncLogger thread is blocked trying to walk the above thread's stack 
> and getting deadlocked on the same class that was in <clinit>. It's a large 
> codebase, so it's hard to review all the libraries and their uses. But now we 
> know what to look out for if it happens again. This can happen with async 
> logging. 
> 
> Basically, one thread will be in RUNNABLE state but its dump header will say 
> "in Object.wait()", and another thread will be in <clinit>.
> 
>> On 2016-07-05 20:10 (-0400), Remko Popma  wrote: 
>> The StackOverflow link may still be relevant if the problem is caused by 
>> class loading (by different classloaders?) from different threads. 
>> 
>> Leon, is there any other thread in the thread dump that is loading a class?
>> 
>> Sent from my iPhone
>> 
>>> On 2016/07/06, at 6:31, Ralph Goers  wrote:
>>> 
>>> The stack overflow reference is using Log4j 1, so that isn’t a match.
>>> 
>>> The fact that you are in ExtendedThrowablePatternConverter implies that you 
>>> are logging an exception. But I don’t know why you would be getting stuck 
>>> in there. While formatting the exception is slow, it shouldn’t be that slow.
>>> 
>>> Ralph
>>> 
 On Jul 5, 2016, at 2:03 PM, Leon Finker  wrote:
 
 This looks similar:
 http://stackoverflow.com/questions/15543521/mixed-usage-of-log4j-and-commons-logging-causes-a-class-loading-deadlock
 
 But we don't use any other logging framework besides slf4j, log4j2 and 
 log4j2 bridges.
 
> On 2016-07-05 15:14 (-0400), "Leon Finker" wrote: 
> Hi,
> 
> Using log4j2 runtime args with 2.6.1:
> -DAsyncLogger.RingBufferSize=512
> -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
> 
> On CentOS 6.7 and Java 1.8.0_60.
> 
> We noticed that at some point the process stopped logging to the log 
> file (during startup). Across 7 thread dumps taken over 8 minutes, the 
> AsyncLogger thread is RUNNABLE but always in the stack trace below, and 
> all the other threads are TIMED_WAITING to publish new log events in 
> RingBuffer.publishEvent. Has anyone seen this before? There were no log 
> entries for at least 25 minutes until we killed the process and restarted 
> it without problems. If AsyncLogger were progressing properly, something 
> would appear in the log file (RollingRandomAccessFile is configured with 
> immediateFlush=true). It's hard to know how deep the stack was in 
> ThrowableProxy.toExtendedStackTrace, but the thread is not BLOCKED, 
> it's RUNNABLE. Also, it doesn't look like there is a way to limit the 
> stack depth for toExtendedStackTrace?
> 
> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 
> tid=0x7ff870c7b000 nid=0x79f3 in Object.wait() [0x7ff839142000]
> java.lang.Thread.State: RUNNABLE
>  at java.lang.Class.forName0(Native Method)
>  at java.lang.Class.forName(Class.java:348)
>  at 
> org.apache.logging.log4j.core.util.Loader.initializeClass(Loader.java:241)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:487)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:135)
>  at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)
>  at 
> org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:316)
>  at 
> org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)
>  at 
> org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)
>  at 
> 

Re: Logging stopped, but Log4j2-AsyncLogger is RUNNABLE and maybe stuck in ThrowableProxy.toExtendedStackTrace

2016-07-06 Thread Leon Finker
Based on Google searches, it turns out to be a classic class loading deadlock. 
I found one thread that was inside a class's static initializer (<clinit>), 
which then eventually ended up logging indirectly and getting blocked on:
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
at 
com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
at 
com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
at com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:450)
at com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:315)
at 
org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageInfo(AsyncLoggerDisruptor.java:203)
..

And the AsyncLogger thread is blocked trying to walk the above thread's stack 
and getting deadlocked on the same class that was in <clinit>. It's a large 
codebase, so it's hard to review all the libraries and their uses. But now we 
know what to look out for if it happens again. This can happen with async 
logging. 

Basically, one thread will be in RUNNABLE state but its dump header will say 
"in Object.wait()", and another thread will be in <clinit>.
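
For reference, here is a minimal sketch of the shape of the hazard (the class 
names are hypothetical, and it won't necessarily reproduce the deadlock, since 
it also needs the ring buffer to be full at exactly the wrong moment):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class ClinitDeadlockSketch {

    // Hypothetical culprit: its static initializer logs an exception.
    // While <clinit> runs, the initiating thread holds the JVM's
    // class-initialization lock for Culprit. If the disruptor ring buffer
    // is full, the log call parks in RingBuffer.publishEvent while still
    // holding that lock.
    static class Culprit {
        static final Logger LOG = LogManager.getLogger(Culprit.class);
        static {
            LOG.error("logging from <clinit>", new RuntimeException("boom"));
        }
    }

    public static void main(String[] args) {
        // Thread A: the first use of Culprit triggers its static initializer.
        new Thread(() -> Culprit.LOG.info("touch"), "thread-A").start();

        // Meanwhile the Log4j2-AsyncLogger consumer thread, formatting an
        // earlier event whose throwable references Culprit, calls
        // Class.forName for it inside ThrowableProxy and blocks on the same
        // class-initialization lock. Thread A waits on the ring buffer for
        // the consumer; the consumer waits on <clinit> for thread A. The
        // consumer still shows as RUNNABLE because forName0 is a native
        // frame, exactly as in the dump above.
    }
}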

On 2016-07-05 20:10 (-0400), Remko Popma  wrote: 
> The StackOverflow link may still be relevant if the problem is caused by 
> class loading (by different classloaders?) from different threads. 
> 
> Leon, is there any other thread in the thread dump that is loading a class?
> 
> Sent from my iPhone
> 
> > On 2016/07/06, at 6:31, Ralph Goers  wrote:
> > 
> > The stack overflow reference is using Log4j 1, so that isn’t a match.
> > 
> > The fact that you are in ExtendedThrowablePatternConverter implies that you 
> > are logging an exception. But I don’t know why you would be getting stuck 
> > in there. While formatting the exception is slow, it shouldn’t be that slow.
> > 
> > Ralph
> > 
> >> On Jul 5, 2016, at 2:03 PM, Leon Finker  wrote:
> >> 
> >> This looks similar:
> >> http://stackoverflow.com/questions/15543521/mixed-usage-of-log4j-and-commons-logging-causes-a-class-loading-deadlock
> >> 
> >> But we don't use any other logging framework besides slf4j, log4j2 and 
> >> log4j2 bridges.
> >> 
> >>> On 2016-07-05 15:14 (-0400), "Leon Finker" wrote: 
> >>> Hi,
> >>> 
> >>> Using log4j2 runtime args with 2.6.1:
> >>> -DAsyncLogger.RingBufferSize=512
> >>> -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
> >>> 
> >>> On CentOS 6.7 and Java 1.8.0_60.
> >>> 
> >>> We noticed that at some point the process stopped logging to the log 
> >>> file (during startup). Across 7 thread dumps taken over 8 minutes, the 
> >>> AsyncLogger thread is RUNNABLE but always in the stack trace below, and 
> >>> all the other threads are TIMED_WAITING to publish new log events in 
> >>> RingBuffer.publishEvent. Has anyone seen this before? There were no log 
> >>> entries for at least 25 minutes until we killed the process and restarted 
> >>> it without problems. If AsyncLogger were progressing properly, something 
> >>> would appear in the log file (RollingRandomAccessFile is configured with 
> >>> immediateFlush=true). It's hard to know how deep the stack was in 
> >>> ThrowableProxy.toExtendedStackTrace, but the thread is not BLOCKED, 
> >>> it's RUNNABLE. Also, it doesn't look like there is a way to limit the 
> >>> stack depth for toExtendedStackTrace?
> >>> 
> >>> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 
> >>> tid=0x7ff870c7b000 nid=0x79f3 in Object.wait() [0x7ff839142000]
> >>>  java.lang.Thread.State: RUNNABLE
> >>>   at java.lang.Class.forName0(Native Method)
> >>>   at java.lang.Class.forName(Class.java:348)
> >>>   at 
> >>> org.apache.logging.log4j.core.util.Loader.initializeClass(Loader.java:241)
> >>>   at 
> >>> org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:487)
> >>>   at 
> >>> org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)
> >>>   at 
> >>> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:135)
> >>>   at 
> >>> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)
> >>>   at 
> >>> org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:316)
> >>>   at 
> >>> org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)
> >>>   at 
> >>> org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)
> >>>   at 
> >>> org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:288)
> >>>   at 
> >>> org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:194)
> >>>   at 
> >>> 

Re: Logging stopped, but Log4j2-AsyncLogger is RUNNABLE and maybe stuck in ThrowableProxy.toExtendedStackTrace

2016-07-05 Thread Remko Popma
The StackOverflow link may still be relevant if the problem is caused by class 
loading (by different classloaders?) from different threads. 

Leon, is there any other thread in the thread dump that is loading a class?
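
As a rough illustration (the class name here is made up, and this is not from 
the original thread's tooling), one way to check from inside the JVM, using the 
standard java.lang.management API, is to flag any thread whose stack is sitting 
in Class.forName:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class FindClassLoadingThreads {
    public static void main(String[] args) {
        // Dump every live thread with its full stack (ThreadInfo.toString()
        // would only print the first few frames).
        for (ThreadInfo t : ManagementFactory.getThreadMXBean()
                .dumpAllThreads(false, false)) {
            boolean loading = false;
            for (StackTraceElement f : t.getStackTrace()) {
                if (f.getClassName().equals("java.lang.Class")
                        && f.getMethodName().startsWith("forName")) {
                    loading = true;
                }
            }
            if (loading) {
                System.out.printf("%s [%s] appears to be loading a class:%n",
                        t.getThreadName(), t.getThreadState());
                for (StackTraceElement f : t.getStackTrace()) {
                    System.out.println("\tat " + f);
                }
            }
        }
    }
}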

Sent from my iPhone

> On 2016/07/06, at 6:31, Ralph Goers  wrote:
> 
> The stack overflow reference is using Log4j 1, so that isn’t a match.
> 
> The fact that you are in ExtendedThrowablePatternConverter implies that you 
> are logging an exception. But I don’t know why you would be getting stuck in 
> there. While formatting the exception is slow, it shouldn’t be that slow.
> 
> Ralph
> 
>> On Jul 5, 2016, at 2:03 PM, Leon Finker  wrote:
>> 
>> This looks similar:
>> http://stackoverflow.com/questions/15543521/mixed-usage-of-log4j-and-commons-logging-causes-a-class-loading-deadlock
>> 
>> But we don't use any other logging framework besides slf4j, log4j2 and 
>> log4j2 bridges.
>> 
>>> On 2016-07-05 15:14 (-0400), "Leon Finker" wrote: 
>>> Hi,
>>> 
>>> Using log4j2 runtime args with 2.6.1:
>>> -DAsyncLogger.RingBufferSize=512
>>> -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
>>> 
>>> On CentOS 6.7 and Java 1.8.0_60.
>>> 
>>> We noticed that at some point the process stopped logging to the log 
>>> file (during startup). Across 7 thread dumps taken over 8 minutes, the 
>>> AsyncLogger thread is RUNNABLE but always in the stack trace below, and 
>>> all the other threads are TIMED_WAITING to publish new log events in 
>>> RingBuffer.publishEvent. Has anyone seen this before? There were no log 
>>> entries for at least 25 minutes until we killed the process and restarted it 
>>> without problems. If AsyncLogger were progressing properly, something would 
>>> appear in the log file (RollingRandomAccessFile is configured with 
>>> immediateFlush=true). It's hard to know how deep the stack was in 
>>> ThrowableProxy.toExtendedStackTrace, but the thread is not BLOCKED, it's 
>>> RUNNABLE. Also, it doesn't look like there is a way to limit the stack depth 
>>> for toExtendedStackTrace?
>>> 
>>> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 
>>> tid=0x7ff870c7b000 nid=0x79f3 in Object.wait() [0x7ff839142000]
>>>  java.lang.Thread.State: RUNNABLE
>>>   at java.lang.Class.forName0(Native Method)
>>>   at java.lang.Class.forName(Class.java:348)
>>>   at 
>>> org.apache.logging.log4j.core.util.Loader.initializeClass(Loader.java:241)
>>>   at 
>>> org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:487)
>>>   at 
>>> org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)
>>>   at 
>>> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:135)
>>>   at 
>>> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)
>>>   at 
>>> org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:316)
>>>   at 
>>> org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)
>>>   at 
>>> org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)
>>>   at 
>>> org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:288)
>>>   at 
>>> org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:194)
>>>   at 
>>> org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:180)
>>>   at 
>>> org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)
>>>   at 
>>> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:120)
>>>   at 
>>> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:113)
>>>   at 
>>> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)
>>>   at 
>>> org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:99)
>>>   at 
>>> org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)
>>>   at 
>>> org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)
>>>   at 
>>> org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)
>>>   at 
>>> org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
>>>   at 
>>> org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)
>>>   at 
>>> org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)
>>>   at 
>>> 

Re: Logging stopped, but Log4j2-AsyncLogger is RUNNABLE and maybe stuck in ThrowableProxy.toExtendedStackTrace

2016-07-05 Thread Ralph Goers
The stack overflow reference is using Log4j 1, so that isn’t a match.

The fact that you are in ExtendedThrowablePatternConverter implies that you are 
logging an exception. But I don’t know why you would be getting stuck in there. 
While formatting the exception is slow, it shouldn’t be that slow.

Ralph

> On Jul 5, 2016, at 2:03 PM, Leon Finker  wrote:
> 
> This looks similar:
> http://stackoverflow.com/questions/15543521/mixed-usage-of-log4j-and-commons-logging-causes-a-class-loading-deadlock
> 
> But we don't use any other logging framework besides slf4j, log4j2 and log4j2 
> bridges.
> 
> On 2016-07-05 15:14 (-0400), "Leon Finker" wrote: 
>> Hi,
>> 
>> Using log4j2 runtime args with 2.6.1:
>> -DAsyncLogger.RingBufferSize=512
>> -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
>> 
>> On CentOS 6.7 and Java 1.8.0_60.
>> 
>> We noticed that at some point the process stopped logging to the log 
>> file (during startup). Across 7 thread dumps taken over 8 minutes, the 
>> AsyncLogger thread is RUNNABLE but always in the stack trace below, and all 
>> the other threads are TIMED_WAITING to publish new log events in 
>> RingBuffer.publishEvent. Has anyone seen this before? There were no log 
>> entries for at least 25 minutes until we killed the process and restarted it 
>> without problems. If AsyncLogger were progressing properly, something would 
>> appear in the log file (RollingRandomAccessFile is configured with 
>> immediateFlush=true). It's hard to know how deep the stack was in 
>> ThrowableProxy.toExtendedStackTrace, but the thread is not BLOCKED, it's 
>> RUNNABLE. Also, it doesn't look like there is a way to limit the stack depth 
>> for toExtendedStackTrace?
>> 
>> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 
>> tid=0x7ff870c7b000 nid=0x79f3 in Object.wait() [0x7ff839142000]
>>   java.lang.Thread.State: RUNNABLE
>>at java.lang.Class.forName0(Native Method)
>>at java.lang.Class.forName(Class.java:348)
>>at 
>> org.apache.logging.log4j.core.util.Loader.initializeClass(Loader.java:241)
>>at 
>> org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:487)
>>at 
>> org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)
>>at 
>> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:135)
>>at 
>> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)
>>at 
>> org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:316)
>>at 
>> org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)
>>at 
>> org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)
>>at 
>> org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:288)
>>at 
>> org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:194)
>>at 
>> org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:180)
>>at 
>> org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)
>>at 
>> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:120)
>>at 
>> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:113)
>>at 
>> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)
>>at 
>> org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:99)
>>at 
>> org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)
>>at 
>> org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)
>>at 
>> org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)
>>at 
>> org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
>>at 
>> org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)
>>at 
>> org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)
>>at 
>> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)
>>at 
>> org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:381)
>>at 
>> org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:376)
>>at 
>> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)
>>at 
>> 

Re: Logging stopped, but Log4j2-AsyncLogger is RUNNABLE and maybe stuck in ThrowableProxy.toExtendedStackTrace

2016-07-05 Thread Leon Finker
This looks similar:
http://stackoverflow.com/questions/15543521/mixed-usage-of-log4j-and-commons-logging-causes-a-class-loading-deadlock

But we don't use any other logging framework besides slf4j, log4j2 and log4j2 
bridges.

On 2016-07-05 15:14 (-0400), "Leon Finker" wrote: 
> Hi,
> 
> Using log4j2 runtime args with 2.6.1:
> -DAsyncLogger.RingBufferSize=512
> -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
> 
> On CentOS 6.7 and Java 1.8.0_60.
> 
> We noticed that at some point the process stopped logging to the log file 
> (during startup). Across 7 thread dumps taken over 8 minutes, the AsyncLogger 
> thread is RUNNABLE but always in the stack trace below, and all the other 
> threads are TIMED_WAITING to publish new log events in RingBuffer.publishEvent. 
> Has anyone seen this before? There were no log entries for at least 25 minutes 
> until we killed the process and restarted it without problems. If AsyncLogger 
> were progressing properly, something would appear in the log file 
> (RollingRandomAccessFile is configured with immediateFlush=true). It's hard 
> to know how deep the stack was in 
> ThrowableProxy.toExtendedStackTrace, but the thread is not BLOCKED, it's 
> RUNNABLE. Also, it doesn't look like there is a way to limit the stack depth 
> for toExtendedStackTrace?
> 
> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 
> tid=0x7ff870c7b000 nid=0x79f3 in Object.wait() [0x7ff839142000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.logging.log4j.core.util.Loader.initializeClass(Loader.java:241)
> at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:487)
> at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617)
> at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:135)
> at 
> org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117)
> at 
> org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:316)
> at 
> org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:61)
> at 
> org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)
> at 
> org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:288)
> at 
> org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:194)
> at 
> org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:180)
> at 
> org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57)
> at 
> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:120)
> at 
> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:113)
> at 
> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)
> at 
> org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:99)
> at 
> org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)
> at 
> org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)
> at 
> org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)
> at 
> org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
> at 
> org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)
> at 
> org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)
> at 
> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)
> at 
> org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:381)
> at 
> org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:376)
> at 
> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)
> at 
> org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:79)
> at 
> org.apache.logging.log4j.core.async.AsyncLogger.actualAsyncLog(AsyncLogger.java:310)
> at 
> org.apache.logging.log4j.core.async.RingBufferLogEvent.execute(RingBufferLogEvent.java:149)
> at 
> org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(RingBufferLogEventHandler.java:45)
> at 
>