[ 
https://issues.apache.org/jira/browse/LOG4J2-1441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15986327#comment-15986327
 ] 

Nikolai commented on LOG4J2-1441:
---------------------------------

I can confirm that this problem still occurs in 2.8.2.
When I use async logging (with 
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector)
together with immediateFlush="false", some of the last messages are never 
written to a rarely-updated file. When I removed immediateFlush="false", the 
problem was gone.
Please either fix this or correct the misleading documentation at 
https://logging.apache.org/log4j/2.x/manual/async.html, which says:
<!-- Async Loggers will auto-flush in batches, so switch off immediateFlush. -->
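For reference, here is a minimal sketch of the kind of configuration that triggers this (logger and file names are hypothetical, not taken from the attached reproducer): two loggers routed to separate RandomAccessFile appenders, both with immediateFlush disabled as the async manual suggests, run with the AsyncLoggerContextSelector system property set.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical log4j2.xml sketch: two appenders behind the same async
     mechanism, both relying on end-of-batch auto-flush. -->
<Configuration status="warn">
  <Appenders>
    <RandomAccessFile name="FileA" fileName="logs/a.log" immediateFlush="false">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </RandomAccessFile>
    <RandomAccessFile name="FileB" fileName="logs/b.log" immediateFlush="false">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </RandomAccessFile>
  </Appenders>
  <Loggers>
    <Logger name="com.example.a" level="info" additivity="false">
      <AppenderRef ref="FileA"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="FileB"/>
    </Root>
  </Loggers>
</Configuration>
```

If the last event of a batch goes only to com.example.a, only FileA gets flushed; whatever is buffered in FileB stays in memory until the next batch (or never, for a rarely-updated file).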

> Asynchronous logging + immediateFlush=false appenders : potential data 
> retention on end of batch event
> ------------------------------------------------------------------------------------------------------
>
>                 Key: LOG4J2-1441
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-1441
>             Project: Log4j 2
>          Issue Type: Bug
>    Affects Versions: 2.6
>            Reporter: Anthony Maire
>         Attachments: example_LOG4J2-1441.zip, flush_fail_1441.zip
>
>
> When using asynchronous logging, the suggestion is to use 
> immediateFlush=false for file-based appenders and rely on end-of-batch 
> detection to perform the flush, which is supposed to happen in a timely 
> manner.
> As per the AsyncLogger JavaDoc:
> {quote}
> For best performance, use AsyncLogger with the RandomAccessFileAppender or 
> RollingRandomAccessFileAppender, with immediateFlush=false
> {quote}
> As per the File/RandomAccessFile appender documentation on the site:
> {quote}
> Flushing after every write is only useful when using this appender with 
> synchronous loggers. Asynchronous loggers and appenders will automatically 
> flush at the end of a batch of events, even if immediateFlush is set to 
> false. This also guarantees the data is written to disk but is more efficient.
> {quote}
> However, if multiple appenders are served by the same asynchronous 
> mechanism (either multiple appenderRefs on an AsyncAppender, or multiple 
> AsyncLoggers, since they share the same disruptor instance), the last event 
> of the batch will only flush the appenders that actually log it; data 
> buffered in the other appenders won't be flushed.
> I made a small example using 2 asynchronous loggers; the same kind of issue 
> should occur with several appenders linked to the same AsyncAppender when 
> the last event is filtered out because of its level or a custom filter.
> - 10 seconds after the start, all events are processed, but the main thread 
> does not exit because of an infinite loop
> - 2 events were processed by the root logger and correctly logged to the 
> console appender
> - only one event was flushed to the file (the file appender is attached to 
> the root logger too)
> - a heap dump of the JVM shows that the RandomAccessFileManager's internal 
> byte buffer has pending data (byteBuffer.position=61)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
