[ https://issues.apache.org/jira/browse/LOG4J2-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364613#comment-15364613 ]
Leon Finker commented on LOG4J2-1457:
-------------------------------------

Yes, it looks like many of those threads are blocked on the class initialization lock in GemFirexMapEntryFactory.createValue: they all report java.lang.Thread.State: RUNNABLE, yet are inside Object.wait(). The code in that method instantiates the class that has the <clinit>:

at x.core.services.x.GemFirexMapEntryFactory.createValue(GemFirexMapEntryFactory.java:22)

{noformat}
@Override
public MapValue createValue(byte[] value) {
    if (value == null) {
        throw new IllegalArgumentException("value is null");
    }
    return new xMessageWrapper(value);
}
{noformat}

And xMessageWrapper has the static initializer that is deadlocked waiting for an available disruptor slot:

at tradingscreen.core.services.tube.xMessageWrapper.<clinit>(xMessageWrapper.java:31)

This thread caused the exception to be logged at WARN level, but judging from the call stack, the log event had not yet made it into the disruptor buffer:

{noformat}
java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    at com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    at com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    at com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:450)
    at com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:315)
    at org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageInfo(AsyncLoggerDisruptor.java:203)
    at org.apache.logging.log4j.core.async.AsyncLogger.handleRingBufferFull(AsyncLogger.java:170)
    at org.apache.logging.log4j.core.async.AsyncLogger.publish(AsyncLogger.java:162)
    at org.apache.logging.log4j.core.async.AsyncLogger.logWithThreadLocalTranslator(AsyncLogger.java:157)
    at org.apache.logging.log4j.core.async.AsyncLogger.logMessage(AsyncLogger.java:127)
    at org.apache.logging.log4j.spi.ExtendedLoggerWrapper.logMessage(ExtendedLoggerWrapper.java:217)
    at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1827)
    at org.apache.logging.log4j.spi.AbstractLogger.warn(AbstractLogger.java:2505)
{noformat}

So I'm still not sure why it deadlocks.

> Class loader deadlock when using async logging
> ----------------------------------------------
>
>                 Key: LOG4J2-1457
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-1457
>             Project: Log4j 2
>          Issue Type: Bug
>    Affects Versions: 2.6.1
>        Environment: On CentOS 6.7 and Java 1.8.0_60.
>            Reporter: Leon Finker
>            Priority: Critical
>        Attachments: threaddump.txt
>
> We've encountered a class loading deadlock. Please review the attached thread dump. Is it possible to have an option of pre-initializing the exception's thread stack on the caller's thread? It's hard to predict what libraries are doing in their classes' static initializers; they may eventually end up logging and cause a deadlock.
> In the attached thread dump, these are the threads of interest:
> "Log4j2-AsyncLogger[AsyncContext@18b4aac2]1" #16 daemon prio=5 os_prio=0 tid=0x00007ff870c7b000 nid=0x79f3 in Object.wait() [0x00007ff839142000]
>    java.lang.Thread.State: RUNNABLE
>         at java.lang.Class.forName0(Native Method)
>         ...
> and
> "1A03340:Company:japan" #568 prio=5 os_prio=0 tid=0x00007ff871677000 nid=0x725 runnable [0x00007ff74bd27000]
> ...<clinit>...

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
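[Editor's note] The "RUNNABLE, but in Object.wait()" observation is consistent with how the JVM serializes class initialization: a thread waiting for another thread to finish a `<clinit>` parks on the class-initialization lock inside a native frame, so thread dumps often report it as RUNNABLE. The following is a minimal, self-contained sketch of just that blocking behavior (all names are hypothetical and unrelated to the x/GemFire classes above); it does not reproduce the full deadlock, which additionally requires the static initializer itself to block on a full async-logger ring buffer:

```java
// Hypothetical sketch: t2 blocks until t1 completes Slow.<clinit>,
// because the JVM allows only one thread to initialize a class.
// In the reported deadlock, the initializer never completes (it is
// parked waiting for a disruptor slot), so waiters block forever.
public class ClinitBlockDemo {
    static class Slow {
        static final int VALUE;
        static {
            try {
                Thread.sleep(1000); // stand-in for a <clinit> that logs and blocks
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
            VALUE = 42;
        }
    }

    public static void main(String[] args) throws Exception {
        // t1 triggers class initialization and holds the init lock.
        Thread t1 = new Thread(() -> System.out.println("t1 sees " + Slow.VALUE));
        t1.start();
        Thread.sleep(100); // give t1 time to enter Slow.<clinit>

        // t2 now waits on the class-initialization lock until t1 finishes.
        Thread t2 = new Thread(() -> System.out.println("t2 sees " + Slow.VALUE));
        t2.start();

        t1.join();
        t2.join();
        System.out.println("done");
    }
}
```

If the `<clinit>` above were to publish a log event to a full ring buffer drained by a background thread that is itself waiting on this class's initialization, neither side could make progress, matching the thread dump.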