[
https://issues.apache.org/jira/browse/LOG4J2-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741924#comment-13741924
]
Remko Popma edited comment on LOG4J2-354 at 8/16/13 5:15 AM:
-------------------------------------------------------------
(quick heads-up: FastFileAppender will be renamed to RandomAccessFileAppender
in beta-9; the XML configuration elements are renamed accordingly)
I am guessing that this has to do with the nature of the RandomAccessFile that
underlies the FastFileAppender. A RandomAccessFile acts like a large array of
bytes stored in the file system, with a file pointer marking the current index
at which one can read or write. My guess is that the appender keeps writing at
the same file pointer even while logrotate moves or truncates the file.
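The suspected mechanism can be reproduced outside Log4j with a plain java.io.RandomAccessFile (a minimal standalone sketch; the class and file names are made up for illustration):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.RandomAccessFile;

public class FilePointerDemo {
    public static void main(String[] args) throws Exception {
        File f = new File("demo.log");
        RandomAccessFile appender = new RandomAccessFile(f, "rw");
        // 22 bytes written; the file pointer now sits at offset 22.
        appender.write("first line of logging\n".getBytes());

        // Another process truncates the file to zero length,
        // as logrotate's copytruncate would.
        new FileOutputStream(f).close();

        // The appender's file pointer is unchanged, so this write lands at
        // offset 22, re-extending the file with a hole of NUL bytes in front.
        appender.write("next line\n".getBytes());
        appender.close();

        System.out.println("length after rotate: " + f.length()); // 32, not 10
    }
}
```

On a typical filesystem the result is a 32-byte file whose first 22 bytes are NUL, which mirrors the "binary data at the start of the file" symptom reported below.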
You may be able to work around this by using the *copytruncate* option for
logrotate (http://linux.die.net/man/8/logrotate).
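For reference, a minimal logrotate stanza using copytruncate might look like this (the log path is a placeholder, not taken from the report; the compression directives are the ones the reporter listed):

{code}
# path is a placeholder
/var/log/tomcat/requests.log {
    copytruncate
    compress
    compresscmd /bin/lzop
    compressoptions "-U"
    compressext .lzo
}
{code}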
However, the man page warns that with the copytruncate option "there is a very
small time slice between copying the file and truncating it, so some logging
data might be lost".
For this reason I am inclined to say that logrotate should not be used with
FastFileAppender/RandomAccessFileAppender. Instead, I would recommend you use
FastRollingFileAppender/RollingRandomAccessFileAppender. The Rolling appenders
accomplish the same thing as logrotate, but since the files are moved by the
logging subsystem itself, log events are never lost.
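As a sketch of what that could look like, a rolling configuration along these lines (using the beta-9 element names; the file pattern and rollover policy here are illustrative, not a drop-in replacement):

{code}
<RollingRandomAccessFile name="RequestLog" fileName="requests.log"
                         filePattern="requests-%d{yyyy-MM-dd-HH-mm}.log.gz"
                         immediateFlush="false" append="true">
    <PatternLayout>
        <pattern>%m%n</pattern>
    </PatternLayout>
    <Policies>
        <TimeBasedTriggeringPolicy/>
    </Policies>
</RollingRandomAccessFile>
{code}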
By the way, I worry that you may also lose log events when combining logrotate
with the FileAppender, so there too I would recommend using RollingFileAppender
rather than logrotate.
> log4j2 + FastFileAppender + Tomcat logrotate problem
> ----------------------------------------------------
>
> Key: LOG4J2-354
> URL: https://issues.apache.org/jira/browse/LOG4J2-354
> Project: Log4j 2
> Issue Type: Bug
> Affects Versions: 2.0-beta8
> Environment: log4j2-beta8
> disruptor 3.1.1
> tomcat 7 with Java 1.6
> We use the SLF4J library as well.
> Reporter: Remko Popma
>
> (from log4j-user mailing list)
> Kamil Mroczek wrote:
> Hello,
> We decided to try out log4j2-beta8 to see if we could improve our logging
> performance. We were testing with the disruptor 3.1.1. library to make all
> our appenders async.
> {{-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector}}
> We are running tomcat 7 with Java 1.6. We use the SLF4J library as well.
> The appender that we were using in this case was the Fast File Appender with
> a definition like:
> {code}
> <FastFile name="RequestLog" fileName="requests.log"
>           immediateFlush="false" append="true">
>     <PatternLayout>
>         <pattern>%m%n</pattern>
>     </PatternLayout>
> </FastFile>
> {code}
> And the logger was:
> {code}
> <logger name="a.namespace" level="info" additivity="false">
>     <appender-ref ref="RequestLog"/>
> </logger>
> {code}
> So the system was designed to allow log4j to do the logging and then have
> logrotate rotate the log files from the host to an external destination.
> We rotate the logs every 5 minutes with these params (with LZO compression).
> {code}
> compress
> compresscmd /bin/lzop
> compressoptions "-U"
> compressext .lzo
> {code}
> What we were seeing was that after a log rotation happened the new file would
> start with a massive chunk of binary data at the start. Many times on the
> order of 100-200MB. This would turn the logs from being on the order of
> 50-100MB to 200-350MB.
> My guess was that it had something to do with the byte buffer flushing
> mid-rotate since these chunks always come at the start of the file. But I
> also saw LOG4J2-295 (Fast(Rolling)FileAppender now correctly handles messages
> exceeding the buffer size) which was fixed in beta8, which may discredit that
> idea.
> We were able to fix the issue by using the regular FileAppender like this:
> {code}
> <File name="RequestLog" fileName="requests.log" immediateFlush="true"
>       append="true" bufferedIO="false">
>     <PatternLayout>
>         <pattern>%m%n</pattern>
>     </PatternLayout>
> </File>
> {code}
> I can't remember for certain, but I am pretty sure that even if we had
> bufferedIO="true" on the FileAppender everything worked okay as well.
> We could reproduce it pretty consistently. I wanted to post to the group to
> see if anyone has seen anything like this before. Any ideas on what the
> issue could be?