[ https://issues.apache.org/jira/browse/LOG4J2-3687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17877231#comment-17877231 ]
Yanming Zhou commented on LOG4J2-3687:
--------------------------------------
Superseded by https://github.com/apache/logging-log4j2/issues/2893
> Appender may cause deadlock on AsyncLoggerDisruptor
> ----------------------------------------------------
>
> Key: LOG4J2-3687
> URL: https://issues.apache.org/jira/browse/LOG4J2-3687
> Project: Log4j 2
> Issue Type: Bug
> Components: Appenders
> Affects Versions: 2.20.0
> Reporter: Yanming Zhou
> Priority: Blocker
>
> A logging call blocks when the RingBuffer is full, waiting for the appender to
> consume LogEvents from the RingBuffer; but the appender itself may log before
> consuming anything, which triggers a deadlock.
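The cycle described above can be reproduced in miniature. The following is a hypothetical sketch, not Log4j's actual code: `ringBuffer`, `queueFullGuard`, and `enqueueWhenFull` are illustrative stand-ins for the Disruptor ring buffer and the monitor taken in `AsyncLoggerDisruptor.enqueueLogMessageWhenQueueFull`. The real slow path waits without a timeout; the sketch times out so it terminates instead of hanging.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ReentrantLoggingSketch {
    // Hypothetical stand-ins (names are illustrative, not Log4j's API):
    // a full ring buffer and the monitor guarding the queue-full slow path.
    static final BlockingQueue<String> ringBuffer = new ArrayBlockingQueue<>(1);
    static final Object queueFullGuard = new Object();

    // Slow path taken when the buffer is full: block under the monitor until
    // space frees up. In production this wait is unbounded; here it times out.
    static boolean enqueueWhenFull(String event, long timeoutMs) throws InterruptedException {
        synchronized (queueFullGuard) {
            return ringBuffer.offer(event, timeoutMs, TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        ringBuffer.put("pending-event"); // ring buffer is now full

        // The consumer-side thread (here, the Kafka producer network thread)
        // logs before it has drained anything, so it enters the same slow path.
        Thread appenderSide = new Thread(() -> {
            try {
                boolean ok = enqueueWhenFull("log line from the appender's own code", 200);
                System.out.println("appender-side enqueue succeeded: " + ok);
            } catch (InterruptedException ignored) {
            }
        });
        appenderSide.start();
        appenderSide.join();
        // Nothing drains the buffer while the appender is stuck here, so with an
        // unbounded wait both sides wait on each other forever: the deadlock.
        System.out.println("events still queued: " + ringBuffer.size());
    }
}
```

With the timeout removed, the appender thread blocks forever exactly as in the thread dumps below.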
>
> Here is my actual case in production: I'm using the Kafka appender, and logging
> from the Kafka client caused the deadlock, see
> [TransactionManager.java|https://github.com/apache/kafka/blob/f9e30289d96b6c096c2bad44d714200e4c34cc04/clients/src/main/java/org/apache/kafka/clients/producer/internals/TransactionManager.java#L503]
> {code:java}
> "kafka-producer-network-thread | producer-1" #44 daemon prio=5 os_prio=0 tid=0x00007f468310c800 nid=0x256255 waiting for monitor entry [0x00007f467ad6c000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
> 	at org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageWhenQueueFull(AsyncLoggerDisruptor.java:246)
> 	- waiting to lock <0x0000000701070490> (a java.lang.Object)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.handleRingBufferFull(AsyncLogger.java:246)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.publish(AsyncLogger.java:230)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.logWithThreadLocalTranslator(AsyncLogger.java:202)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.access$100(AsyncLogger.java:67)
> 	at org.apache.logging.log4j.core.async.AsyncLogger$1.log(AsyncLogger.java:157)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.logMessage(AsyncLogger.java:130)
> 	at org.apache.logging.slf4j.Log4jLogger.log(Log4jLogger.java:378)
> 	at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.writeLog(LogContext.java:434)
> 	at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.info(LogContext.java:387)
> 	at org.apache.kafka.clients.producer.internals.TransactionManager.setProducerIdAndEpoch(TransactionManager.java:505)
> 	at org.apache.kafka.clients.producer.internals.TransactionManager.bumpIdempotentProducerEpoch(TransactionManager.java:542)
> 	at org.apache.kafka.clients.producer.internals.TransactionManager.bumpIdempotentEpochAndResetIdIfNeeded(TransactionManager.java:559)
> 	- locked <0x00000007016c0898> (a org.apache.kafka.clients.producer.internals.TransactionManager)
> 	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:330)
> 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:247)
> 	at java.lang.Thread.run(Thread.java:748)
> {code}
> {code:java}
> "http-nio-8080-exec-273" #3682 daemon prio=5 os_prio=0 tid=0x00007f465b422800 nid=0x2908d3 runnable [0x00007f4648b48000]
>    java.lang.Thread.State: TIMED_WAITING (parking)
> 	at sun.misc.Unsafe.park(Native Method)
> 	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
> 	at com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
> 	at com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
> 	at com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:465)
> 	at com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:331)
> 	at org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageWhenQueueFull(AsyncLoggerDisruptor.java:247)
> 	- locked <0x0000000701070490> (a java.lang.Object)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.handleRingBufferFull(AsyncLogger.java:246)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.publish(AsyncLogger.java:230)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.logWithThreadLocalTranslator(AsyncLogger.java:225)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.access$000(AsyncLogger.java:67)
> 	at org.apache.logging.log4j.core.async.AsyncLogger$1.log(AsyncLogger.java:152)
> 	at org.apache.logging.log4j.core.async.AsyncLogger.log(AsyncLogger.java:136)
> 	at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2168)
> 	at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2122)
> 	at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2105)
> 	at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1997)
> 	at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1862)
> 	at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:185)
> {code}
> My proposal is to discard LogEvents originating from appenders when the RingBuffer is full.
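One way the proposed discard could look, as a minimal hypothetical sketch (class and method names are mine, not Log4j's API): record which thread the appender runs on, and when the buffer is full, drop any event that originates from that thread instead of letting it enter the blocking slow path. Application threads keep the existing blocking behavior.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DiscardFromAppenderSketch {
    static final BlockingQueue<String> ringBuffer = new ArrayBlockingQueue<>(1);
    static volatile Thread appenderThread; // recorded when the consumer thread starts

    // Hypothetical enqueue that never blocks the appender thread on a full buffer:
    // a log call made from the appender itself is dropped rather than queued.
    static boolean enqueue(String event) throws InterruptedException {
        if (ringBuffer.offer(event)) {
            return true;                                  // fast path: space available
        }
        if (Thread.currentThread() == appenderThread) {
            return false;                                 // appender logging while full: discard
        }
        ringBuffer.put(event);                            // application thread: block as before
        return true;
    }

    public static void main(String[] args) throws Exception {
        ringBuffer.put("pending-event");                  // buffer is now full
        appenderThread = Thread.currentThread();          // pretend we are the appender
        System.out.println("discarded: " + !enqueue("log line from appender"));
    }
}
```

The trade-off is losing appender-originated log lines under backpressure, which seems preferable to the whole pipeline deadlocking.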
--
This message was sent by Atlassian Jira
(v8.20.10#820010)