[
https://issues.apache.org/jira/browse/LOG4J2-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Remko Popma resolved LOG4J2-1080.
---------------------------------
Resolution: Fixed
Fixed in master.
I introduced a new interface {{AsyncEventRouter}}. Implementations decide
whether to discard events, enqueue them for async logging, or log them
synchronously. Async Loggers and AsyncAppender obtain an AsyncEventRouter
implementation from {{AsyncEventRouterFactory}} and delegate to it the
decision of where to route each log event.
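As a rough sketch of the contract described above (the method name, its
parameters and the enum constants are assumptions made for illustration, not
necessarily the committed API):
{code:java}
import org.apache.logging.log4j.Level;

// Sketch only: the real interface may differ in method name and parameters.
public interface AsyncEventRouter {

    /**
     * Decides what to do with an event that is about to be logged asynchronously.
     *
     * @param backgroundThreadId id of the thread that drains the queue
     * @param level level of the event being logged
     * @return whether to enqueue the event, log it synchronously in the
     *         calling thread, or discard it
     */
    EventRoute getRoute(long backgroundThreadId, Level level);
}

// The three possible outcomes; the constant names are part of the sketch.
enum EventRoute { ENQUEUE, SYNCHRONOUS, DISCARD }
{code}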
The default implementation, {{DefaultAsyncEventRouter}}, encapsulates the logic
for LOG4J2-471: preventing deadlock when the queue is full and the object
being logged itself performs logging from its toString() method.
{{DiscardingAsyncEventRouter}} implements the requirements of this ticket
(LOG4J2-1080) and drops events at level INFO or below when the queue is 80%
full or more. Both the threshold level and the queue usage threshold are
configurable. This implementation can be enabled by setting system property
{{log4j2.AsyncEventRouter}} to the value {{Discard}}.
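For example, a minimal sketch of enabling it programmatically before Log4j 2
initializes (equivalent to passing {{-Dlog4j2.AsyncEventRouter=Discard}} on
the command line; the two threshold property keys shown below are assumptions,
only {{log4j2.AsyncEventRouter}} and the {{Discard}} value are taken from this
comment):
{code:java}
public final class EnableDiscardingRouter {
    public static void main(String[] args) {
        // Property name and value taken from the description above.
        System.setProperty("log4j2.AsyncEventRouter", "Discard");

        // Hypothetical keys for the configurable thresholds; check the
        // documentation for the actual property names.
        System.setProperty("log4j2.DiscardThreshold", "INFO"); // drop events at INFO or below
        System.setProperty("log4j2.DiscardQueueRatio", "0.8"); // when the queue is 80% full or more

        // ... obtain loggers and run the application as usual ...
    }
}
{code}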
Users may also plug in a custom {{AsyncEventRouter}} by setting system
property {{log4j2.AsyncEventRouter}} to the fully qualified name of a class
that implements the {{org.apache.logging.log4j.core.async.AsyncEventRouter}}
interface.
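As a sketch of what such a custom implementation might look like, building on
the hypothetical interface shape above (class name and method signature are
illustrative assumptions):
{code:java}
import org.apache.logging.log4j.Level;

// Illustrative custom router: never enqueue, and therefore never block.
// Events at INFO or below are dropped; everything else is logged
// synchronously in the calling thread.
public class NeverBlockRouter implements AsyncEventRouter {

    @Override
    public EventRoute getRoute(final long backgroundThreadId, final Level level) {
        if (level.isLessSpecificThan(Level.INFO)) {
            // INFO, DEBUG and TRACE events are discarded.
            return EventRoute.DISCARD;
        }
        // WARN, ERROR and FATAL events are written synchronously.
        return EventRoute.SYNCHRONOUS;
    }
}
{code}
With a class like this on the classpath, registration would amount to setting
{{log4j2.AsyncEventRouter}} to its fully qualified class name, as described
above.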
[~tezra] Please verify and close.
> Drop events when the RingBuffer is full
> ---------------------------------------
>
> Key: LOG4J2-1080
> URL: https://issues.apache.org/jira/browse/LOG4J2-1080
> Project: Log4j 2
> Issue Type: New Feature
> Reporter: tzachi
> Assignee: Remko Popma
> Fix For: 2.5.1
>
> Attachments: AsyncLogger.dropEvents.patch
>
>
> I am running into a performance issue with an appender, in a certain scenario
> (described at the bottom), that causes the RingBuffer to reach its full
> capacity. When that happens I can see that my app's throughput drops
> significantly.
> I think it would be really useful to be able to configure the RingBuffer
> handler to drop events whenever the buffer reaches its capacity, instead of
> the current behaviour, which appears to be blocking, as I don't want the
> logging to affect the main application.
> ---------------------------------------------------------------------
> Here is the scenario that led me to this request:
> I am currently testing the log4j-flume-ng appender and running into some
> issues. It seems that whenever the log4j appender fails to log an event, the
> disruptor ring buffer fills up, which slows down the whole system.
> My setup looks more or less like that:
> process 1: Java app which uses log4j2 (with flume-ng’s Avro appender)
> process 2: a local flume-ng agent which receives the logs via an Avro source
> and processes them
> Here are my findings:
> When Flume (process 2) is up and running, everything actually looks really
> good. The ring buffer's remaining capacity is almost always at its maximum
> and there are no performance issues. The problem starts when I shut down
> process 2 - I am trying to simulate a case in which this process crashes, as
> I do not want it to affect process 1. As soon as I shut down Flume I start
> getting exceptions produced by log4j telling me it cannot append the log -
> so far that makes sense. The thing is that, at the same time, I can see the
> ring buffer start to fill up. As long as it's not totally full, process 1's
> throughput stays the same. The problem gets serious as soon as the buffer
> reaches full capacity. When that happens the throughput drops by 80% and it
> does not seem to recover from this state. But as soon as I restart process 2,
> things get back to normal pretty quickly - the buffer gets emptied, and the
> throughput climbs back to what it was before. I assume that for some reason a
> failure to append makes the RingBuffer consumer thread significantly slower.
> Besides checking why the Flume appender performs slower when an exception is
> thrown, I wish I could just discard the log events when the buffer gets full.