[
https://issues.apache.org/jira/browse/LOG4J2-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15054316#comment-15054316
]
Remko Popma commented on LOG4J2-1221:
-------------------------------------
FYI, the default ring buffer size is 262144 (256*1024). I recommend you don't
specify a size at all, so you get this default.
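For reference, here is a minimal sketch of what that looks like in an all-async setup (my own example, assuming the documented Log4jContextSelector and AsyncLogger.RingBufferSize system properties; the class name is hypothetical). Simply not setting AsyncLogger.RingBufferSize gives you the 262144-slot default.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AsyncLoggingSetup {
    public static void main(String[] args) {
        // Must be set before Log4j initializes (i.e. before the first LogManager call).
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
        // Leave AsyncLogger.RingBufferSize unset to get the 262144-slot default.
        // An explicit (small) size like the 128 reported in this issue would be:
        // System.setProperty("AsyncLogger.RingBufferSize", "128");

        Logger logger = LogManager.getLogger(AsyncLoggingSetup.class);
        logger.info("all-async logging initialized with the default ring buffer size");
    }
}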
The problem I have with this issue is that a full ring buffer usually does not
indicate a problem. It simply means that the producers are logging faster than
the appender can keep up. The Disruptor uses a fixed buffer size by design:
when the buffer is full, producers wait for a free slot to become available so
that the appender can catch up.
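To make that concrete, here is a small, self-contained sketch (my own, not Log4j code; assuming Disruptor 3.3.x and Java 8, with hypothetical class and variable names) of that back-pressure: with a deliberately tiny buffer and a slow handler, the producer parks inside RingBuffer.next() / MultiProducerSequencer.next(), which is exactly the producer frame in the thread dump quoted below.

import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BackPressureDemo {
    static class LogEvent {
        String message;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newCachedThreadPool();
        // Tiny 8-slot buffer so the back-pressure shows up immediately.
        Disruptor<LogEvent> disruptor = new Disruptor<>(
                LogEvent::new, 8, executor, ProducerType.MULTI, new BlockingWaitStrategy());

        // Deliberately slow consumer, standing in for an appender that cannot keep up.
        EventHandler<LogEvent> slowAppender =
                (event, sequence, endOfBatch) -> Thread.sleep(100);
        disruptor.handleEventsWith(slowAppender);
        RingBuffer<LogEvent> ringBuffer = disruptor.start();

        for (int i = 0; i < 32; i++) {
            long seq = ringBuffer.next(); // parks here once all 8 slots are in use
            try {
                ringBuffer.get(seq).message = "msg " + i;
            } finally {
                ringBuffer.publish(seq);
            }
            System.out.println("published " + i);
        }
        disruptor.shutdown();
        executor.shutdown();
    }
}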
I understand that the deadlock is a serious issue, but since the cause seems to
be with the Disruptor (on Solaris), I am still not 100% convinced that we
should have a special fix for it in log4j. At the very least I want to keep the
normal behaviour as I described above. I need to think about this more.
Have you asked Mike Barker (who maintains the Disruptor) if he can add a timed
wait in the BlockingWaitStrategy?
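For what it's worth, if I remember correctly Disruptor 3.x already ships a TimeoutBlockingWaitStrategy. The sketch below is a hypothetical, hand-rolled variant of BlockingWaitStrategy (the class name and wake-up parameter are my own, not part of Log4j or the Disruptor) whose await is timed, so a lost notification can only stall the consumer for one wake-up interval instead of parking it indefinitely as in the "AsyncLogger-1" trace below.

import com.lmax.disruptor.AlertException;
import com.lmax.disruptor.Sequence;
import com.lmax.disruptor.SequenceBarrier;
import com.lmax.disruptor.WaitStrategy;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical variant of BlockingWaitStrategy whose await wakes up periodically,
// so a missed signal cannot park the consuming thread forever.
public final class PeriodicWakeupBlockingWaitStrategy implements WaitStrategy {
    private final Lock lock = new ReentrantLock();
    private final Condition processorNotifyCondition = lock.newCondition();
    private final long wakeupNanos;

    public PeriodicWakeupBlockingWaitStrategy(long wakeupNanos) {
        this.wakeupNanos = wakeupNanos;
    }

    @Override
    public long waitFor(long sequence, Sequence cursor, Sequence dependentSequence,
                        SequenceBarrier barrier)
            throws AlertException, InterruptedException {
        if (cursor.get() < sequence) {
            lock.lock();
            try {
                while (cursor.get() < sequence) {
                    barrier.checkAlert();
                    // Unlike the untimed await() in BlockingWaitStrategy, this re-checks
                    // the cursor every wakeupNanos even if no signal ever arrives.
                    processorNotifyCondition.awaitNanos(wakeupNanos);
                }
            } finally {
                lock.unlock();
            }
        }
        long availableSequence;
        while ((availableSequence = dependentSequence.get()) < sequence) {
            barrier.checkAlert();
        }
        return availableSequence;
    }

    @Override
    public void signalAllWhenBlocking() {
        lock.lock();
        try {
            processorNotifyCondition.signalAll();
        } finally {
            lock.unlock();
        }
    }
}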
> Dead lock observed in BlockingWaitStrategy in Log 4J
> ----------------------------------------------------
>
> Key: LOG4J2-1221
> URL: https://issues.apache.org/jira/browse/LOG4J2-1221
> Project: Log4j 2
> Issue Type: Bug
> Components: Core
> Affects Versions: 2.2
> Environment: log4J Version : 2.2 Disruptor Version : 3.3.2
> Ring Buffer Size : 128
> OS Version :
> cat /etc/release
> Oracle Solaris 11.2 X86
> Java Version
> java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) Server VM (build 24.45-b08, mixed mode)
> Reporter: Sampath Kumar
> Priority: Critical
> Labels: patch
>
> We have seen this behavior during high load, where logging stopped and the
> application went into a non-responsive state.
> log4J Version : 2.2 Disruptor Version : 3.3.2
> Ring Buffer Size : 128
> The producers (multiple threads) and the consumer thread (a single thread, as
> per the Log4j configuration) started waiting on each other.
> Here is the one of the Trace from Thread Dump:
> Producer :
> "[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default
> (self-tuning)'" TIMED_WAITING
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:349)
> com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
> com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
> com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:444)
> com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:256)
> org.apache.logging.log4j.core.async.AsyncLogger.logMessage(AsyncLogger.java:285)
> org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:722)
> org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:693)
> org.apache.logging.log4j.jcl.Log4jLog.debug(Log4jLog.java:81)
> Consumer Thread :
> "AsyncLogger-1" waiting for lock
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5d972983
> WAITING
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
> com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:55)
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:123)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:744)
> Is this a known issue that has already been fixed in a recent build?