[
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696538#comment-17696538
]
ASF GitHub Bot commented on HADOOP-18631:
-----------------------------------------
virajjasani commented on PR #5451:
URL: https://github.com/apache/hadoop/pull/5451#issuecomment-1455025728
> Do you mean to say we are reading via different mechanisms but reading
from the same place?
Basically, each test will read from its own WriterAppender, so there is no sync
issue while reading.
The beauty of using `LogCapturer` is that every test that uses it gets its own
appender. Hence, every test that creates a new instance of LogCapturer will
capture the logs of the given logger (the audit log in our case) in its own
WriterAppender.
> The Appender logic will be catching output from other tests as well when
run in Parallel?
Every test has its own new appender instance, so no two tests using different
instances of LogCapturer have to worry about reading from a common place (like
the console or a file); each has its own private appender to read the logs
from. This part does that nice logic:
```
private LogCapturer(Logger logger) {
  this.logger = logger;
  Appender defaultAppender = Logger.getRootLogger().getAppender("stdout");
  if (defaultAppender == null) {
    defaultAppender = Logger.getRootLogger().getAppender("console");
  }
  final Layout layout = (defaultAppender == null) ? new PatternLayout() :
      defaultAppender.getLayout();
  this.appender = new WriterAppender(layout, sw);
  logger.addAppender(this.appender);
}
```
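As a quick sketch of how a test ends up with its own capturer (illustrative only: the class and logger names are made up, and it assumes `GenericTestUtils.LogCapturer`'s `captureLogs(org.slf4j.Logger)`, `getOutput()` and `stopCapturing()` helpers):
```
import org.apache.hadoop.test.GenericTestUtils.LogCapturer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogCapturerIsolationSketch {
  // Hypothetical logger standing in for the namenode audit logger.
  private static final Logger AUDIT_LOG =
      LoggerFactory.getLogger("hypothetical.audit.logger");

  public static void main(String[] args) {
    // Each capturer attaches its own WriterAppender to the same logger.
    LogCapturer capturerA = LogCapturer.captureLogs(AUDIT_LOG);
    LogCapturer capturerB = LogCapturer.captureLogs(AUDIT_LOG);

    AUDIT_LOG.info("cmd=getfileinfo src=/tmp/foo");

    // Both capture the event independently; neither reads a shared console
    // or file, so parallel tests do not step on each other's output.
    check(capturerA.getOutput().contains("getfileinfo"));
    check(capturerB.getOutput().contains("getfileinfo"));

    // Detaching one capturer's appender does not affect the other one.
    capturerA.stopCapturing();
    AUDIT_LOG.info("cmd=delete src=/tmp/foo");
    check(!capturerA.getOutput().contains("cmd=delete"));
    check(capturerB.getOutput().contains("cmd=delete"));
  }

  private static void check(boolean condition) {
    if (!condition) {
      throw new AssertionError("unexpected capture state");
    }
  }
}
```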
So, let's say tomorrow we write a new test that also needs to read namenode
audit logs: the test should just create a new LogCapturer object and extract
the logs from it, that's it. It doesn't interfere with any other tests if they
run simultaneously. TestFsck has its own writer appender. TestAuditLogs, on the
other hand, reads directly from the file that the log4j properties configure as
the primary RFA appender, which I think is good so that at least one test
validates the output of the primary appender directly and verifies the regex.
Now, even if TestAuditLogs deletes the file so that every log produced by
TestFsck is gone, that is still not a concern for TestFsck, because TestFsck
has its own writer appender instance as a secondary appender; when reading
logs it uses this secondary appender and no longer relies on the primary
appender (the file).
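For contrast, a TestAuditLogs-style check against the primary appender roughly looks like the sketch below (the file path and regex here are illustrative placeholders, not the ones the actual test uses):
```
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;

public class AuditFileRegexSketch {
  // Illustrative pattern for one audit line; the real test keeps its own regex.
  private static final Pattern AUDIT_PATTERN = Pattern.compile(
      ".*allowed=.*\\s+ugi=.*\\s+ip=.*\\s+cmd=.*\\s+src=.*");

  public static void main(String[] args) throws IOException {
    // Hypothetical location of the file written by the primary RFA appender.
    List<String> lines = Files.readAllLines(
        Paths.get("target/hdfs-audit.log"), StandardCharsets.UTF_8);

    // Every line emitted by the primary appender should match the audit format.
    for (String line : lines) {
      if (!AUDIT_PATTERN.matcher(line).matches()) {
        throw new AssertionError("Unexpected audit line: " + line);
      }
    }
  }
}
```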
FWIW, I believe LogCapturer is a really nice utility.
> Migrate Async appenders to log4j properties
> -------------------------------------------
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Before we can upgrade to log4j2, we need to migrate the async appenders that we
> add "dynamically in the code" to the log4j.properties file. Instead of using
> core/hdfs site configs, log4j properties or system properties should be used
> to determine whether the given logger should use an async appender.
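For context, the "async appenders added dynamically in the code" that the issue wants to migrate amount to wrapping a logger's configured appenders in a log4j 1.x AsyncAppender at startup. A simplified sketch of that kind of wiring (not the actual Hadoop code; the method and logger names are illustrative):
```
import java.util.Collections;
import java.util.List;
import org.apache.log4j.Appender;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Logger;

public class AsyncAppenderWiringSketch {

  // Re-homes all appenders of the given logger under a single AsyncAppender,
  // which is the kind of runtime wiring the issue moves into log4j.properties.
  static void enableAsyncLogging(String loggerName) {
    Logger logger = Logger.getLogger(loggerName);
    @SuppressWarnings("unchecked")
    List<Appender> appenders = Collections.list(logger.getAllAppenders());
    // Skip if there is nothing to wrap or it is already async
    // (guards against double wrapping).
    if (appenders.isEmpty() || appenders.get(0) instanceof AsyncAppender) {
      return;
    }
    AsyncAppender asyncAppender = new AsyncAppender();
    for (Appender appender : appenders) {
      logger.removeAppender(appender);
      asyncAppender.addAppender(appender);
    }
    logger.addAppender(asyncAppender);
  }

  public static void main(String[] args) {
    // Illustrative logger name, not the real audit logger.
    enableAsyncLogging("hypothetical.namenode.audit");
  }
}
```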