[
https://issues.apache.org/jira/browse/LOG4J2-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Volkan Yazici updated LOG4J2-3188:
----------------------------------
Description:
When multiple {{LoggerContext}}s get initialized concurrently, they affect
each other, causing severe {{LogEvent}} drops and incorrect delivery. Consider
the following test:
{code:java}
@Execution(ExecutionMode.SAME_THREAD)
class ParallelRunTest {

    @ParameterizedTest
    @ValueSource(chars = {'A', 'B', 'C', 'D'})
    void test(final char letter) {
        // Create the configuration builder.
        final ConfigurationBuilder<?> configBuilder =
                ConfigurationBuilderFactory
                        .newConfigurationBuilder()
                        .setStatusLevel(Level.ERROR)
                        .setConfigurationName("Letter-" + letter);
        // Create the configuration.
        final String appenderName = "File";
        final String appenderFilepath = Paths
                .get(
                        System.getProperty("java.io.tmpdir"),
                        "ParallelRunTest-" + letter + ".log")
                .toAbsolutePath()
                .toString();
        final Configuration config = configBuilder
                .add(configBuilder
                        .newAppender(appenderName, "File")
                        .addAttribute("fileName", appenderFilepath)
                        .addAttribute("append", false)
                        .addAttribute("immediateFlush", false)
                        .addAttribute("ignoreExceptions", false)
                        .add(configBuilder
                                .newLayout("PatternLayout")
                                .addAttribute("pattern", "%m\n")))
                .add(configBuilder
                        .newRootLogger(Level.ALL)
                        .add(configBuilder.newAppenderRef(appenderName)))
                .build(false);
        // Initialize the configuration.
        try (final LoggerContext loggerContext =
                     Configurator.initialize(config)) {
            final Logger logger =
                    loggerContext.getLogger(ParallelRunTest.class);
            // Write logs.
            final String message = Strings.repeat(String.valueOf(letter), 999);
            for (int i = 0; i < 1_000; i++) {
                logger.info(message);
            }
        }
        // Verify the file content.
        final long appenderFileLength = new File(appenderFilepath).length();
        assertThat(appenderFileLength).isEqualTo(1_000_000L);
    }

}
{code}
The above test, given a {{letter}}, creates a {{ParallelRunTest-<letter>.log}}
file and writes 1,000 lines, each consisting of that letter repeated 999 times
and terminated by a line-feed character. Hence the resulting file is expected
to be of size {{lineLength * lineCount = 1,000 * 1,000 = 1e6}} bytes. These
expectations indeed hold after a run:
{code}
wc -l /tmp/ParallelRunTest-*
1000 /tmp/ParallelRunTest-A.log
1000 /tmp/ParallelRunTest-B.log
1000 /tmp/ParallelRunTest-C.log
1000 /tmp/ParallelRunTest-D.log
4000 total
for log_file in /tmp/ParallelRunTest-*; do
echo "file: $log_file"
echo "line lengths:"
awk "{print length}" $log_file | sort | uniq
echo "distinct lines:"
sort $log_file | uniq
echo
done
file: /tmp/ParallelRunTest-A.log
line lengths:
999
distinct lines:
AAA...A
file: /tmp/ParallelRunTest-B.log
line lengths:
999
distinct lines:
BBB...B
file: /tmp/ParallelRunTest-C.log
line lengths:
999
distinct lines:
CCC...C
file: /tmp/ParallelRunTest-D.log
line lengths:
999
distinct lines:
DDD...D
{code}
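For completeness, the byte sizes (not just the line counts) can also be
re-checked with a small standalone sketch along the following lines; the class
name and output format are illustrative only, and it merely verifies the
1,000,000-byte expectation against the files left in the default temporary
directory:
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ParallelRunTestSizeCheck {

    public static void main(final String[] args) throws Exception {
        final Path tmpDir = Paths.get(System.getProperty("java.io.tmpdir"));
        for (final char letter : new char[]{'A', 'B', 'C', 'D'}) {
            // Each line is 999 letters plus a line feed, written 1,000 times.
            final long expectedLength = (999 + 1) * 1_000L;
            final Path logFile = tmpDir.resolve("ParallelRunTest-" + letter + ".log");
            final long actualLength = Files.size(logFile);
            System.out.printf(
                    "%s: %d bytes (expected: %d)%n",
                    logFile, actualLength, expectedLength);
        }
    }

}
{code}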
However, when {{ExecutionMode.SAME_THREAD}} is replaced with
{{ExecutionMode.CONCURRENT}}, the tests fail due to unexpected
{{appenderFileLength}} values:
{code}
wc -l /tmp/ParallelRunTest-*
96 /tmp/ParallelRunTest-A.log
1 /tmp/ParallelRunTest-B.log
96 /tmp/ParallelRunTest-C.log
3577 /tmp/ParallelRunTest-D.log
3770 total
for log_file in /tmp/ParallelRunTest-*; do
echo "file: $log_file"
echo "line lengths:"
awk "{print length}" $log_file | sort | uniq
echo "distinct lines:"
sort $log_file | uniq
echo
done
file: /tmp/ParallelRunTest-A.log
line lengths:
999
distinct lines:
AAA...A
BBB...B
CCC...C
file: /tmp/ParallelRunTest-B.log
line lengths:
999
distinct lines:
BBB...B
file: /tmp/ParallelRunTest-C.log
line lengths:
999
distinct lines:
BBB...B
CCC...C
file: /tmp/ParallelRunTest-D.log
line lengths:
999
distinct lines:
AAA...A
BBB...B
CCC...C
DDD...D
{code}
Note that the {{AAA...A}}, {{BBB...B}}, etc. snippets shared above are truncated
for display purposes; in the actual files each line consists of 999 _unscrambled_
characters. Given that 1) the total number of logged lines is less than 4,000,
2) letters are scattered across files, and 3) lines are not scrambled, I suspect
that the produced {{LogEvent}}s either got lost or were delivered to the wrong
appenders.
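As a reproduction note, JUnit 5 honors {{ExecutionMode.CONCURRENT}} only when
parallel execution is enabled, e.g., via a {{junit-platform.properties}} along
the following lines:
{code}
# Enables JUnit 5 parallel test execution; @Execution(ExecutionMode.CONCURRENT)
# has no effect unless this flag is set.
junit.jupiter.execution.parallel.enabled=true
{code}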
> Concurrent individual LoggerContext initializations cause severe LogEvent drop or incorrect delivery
> ----------------------------------------------------------------------------------------------------
>
> Key: LOG4J2-3188
> URL: https://issues.apache.org/jira/browse/LOG4J2-3188
> Project: Log4j 2
> Issue Type: Bug
> Components: Core
> Reporter: Volkan Yazici
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)