Robert Veitch created LOG4J2-1183:
-------------------------------------

             Summary: Shutdown hook does not wait for GzCompressAction to 
complete with async logger
                 Key: LOG4J2-1183
                 URL: https://issues.apache.org/jira/browse/LOG4J2-1183
             Project: Log4j 2
          Issue Type: Bug
    Affects Versions: 2.4.1, 2.2
         Environment: OS X, Ubuntu
            Reporter: Robert Veitch


For my project, I have an appender configured with a fairly large rollover size 
(500MB). I originally had a smaller rollover size (50MB, then 100MB) and 
noticed that after increasing the rollover size, I was seeing partially 
compressed files after my application exited. Below is an example class to 
reproduce the behavior:

package com.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestLargeRollover {
  private static final Logger _logger = LoggerFactory.getLogger(TestLargeRollover.class);

  public static void main(final String[] args) {
    final String msg = new String(new byte[1048576]);
    // log enough junk data to cause a rollover event
    for (int i = 0; i < 750; i++) {
      _logger.info(msg);
    }
  }
}

Using the following log4j2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <RollingRandomAccessFile name="TestLog" immediateFlush="false" append="true"
        fileName="test.log" filePattern="test.log-%i.gz">
      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p [%t] %l - %m%n" />
      <SizeBasedTriggeringPolicy size="500MB" />
      <DefaultRolloverStrategy max="20" />
    </RollingRandomAccessFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="TestLog" />
    </Root>
  </Loggers>
</Configuration>

And I'm running the test class with the following system property set to ensure
the use of async loggers:
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
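(For reference, the same selector can also be set programmatically — a minimal
sketch, assuming the property is set before the first LogManager/LoggerFactory
call, otherwise it has no effect; the class name here is hypothetical:)

```java
public class AsyncSelectorSetup {
  public static void main(final String[] args) {
    // Equivalent to passing -DLog4jContextSelector=... on the command line;
    // must run before log4j initializes its first LoggerContext.
    System.setProperty("Log4jContextSelector",
        "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
    System.out.println(System.getProperty("Log4jContextSelector"));
  }
}
```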

If I run the above test class, I see the following output after the process
exits (note the fully rolled test.log-1 next to the truncated test.log-1.gz):

$ ls -al
-rw-r--r--  1 rveitch  rveitch  262154250 Oct 28 13:50 test.log
-rw-r--r--  1 rveitch  rveitch  524308500 Oct 28 13:50 test.log-1
-rw-r--r--  1 rveitch  rveitch     122120 Oct 28 13:50 test.log-1.gz

As a workaround, I implemented a custom shutdown hook that polls the file
system until the log is completely compressed, but it's ugly and I'd rather not
keep it. Something like:

import java.io.File;
import java.io.FilenameFilter;
import java.util.concurrent.TimeUnit;
import java.util.regex.Pattern;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender;

import com.google.common.util.concurrent.Uninterruptibles;

  public static void waitForGzCompressActionCompletion() {
    final Logger loggerImpl = (Logger) LogManager.getLogger("TestLogger");
    final RollingRandomAccessFileAppender appender =
        (RollingRandomAccessFileAppender) loggerImpl.getAppenders().get("TestLog");

    final File logDirectory = new File(appender.getFileName()).getParentFile();
    final FilenameFilter incompleteGzFilenameFilter = new IncompleteGzCompressAction();

    // An uncompressed rolled file (e.g. test.log-1) means GzCompressAction has
    // not yet finished writing the corresponding .gz file.
    while (logDirectory.list(incompleteGzFilenameFilter).length > 0) {
      _logger.debug("Polling found incomplete gz files, will wait");
      Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS);
    }
  }

  public static class IncompleteGzCompressAction implements FilenameFilter {
    private final Pattern incompletePattern;

    public IncompleteGzCompressAction() {
      // Matches rolled-but-not-yet-compressed files (test.log-1, test.log-2, ...);
      // the dot is escaped so it matches literally.
      this.incompletePattern = Pattern.compile("test\\.log-\\d+");
    }

    @Override
    public boolean accept(final File dir, final String name) {
      return this.incompletePattern.matcher(name).matches();
    }
  }
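The wait method above is wired in via a JVM shutdown hook. A minimal,
self-contained sketch of the registration pattern (the placeholder println
stands in for waitForGzCompressActionCompletion(), which needs log4j and Guava
on the classpath):

```java
public class ShutdownHookRegistration {
  public static void main(final String[] args) {
    // Register a hook that runs on normal JVM exit; in the actual workaround
    // the hook body would call waitForGzCompressActionCompletion() instead.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      System.out.println("waiting for incomplete gz files"); // placeholder body
    }));
    System.out.println("main exiting");
  }
}
```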



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
