[ https://issues.apache.org/jira/browse/YARN-1670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13942507#comment-13942507 ]
Hadoop QA commented on YARN-1670:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12635891/YARN-1670-v2.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-YARN-Build/3414//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3414//console
This message is automatically generated.
> aggregated log writer can write more log data than it says is the log length
> ----------------------------------------------------------------------------
>
> Key: YARN-1670
> URL: https://issues.apache.org/jira/browse/YARN-1670
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 3.0.0, 0.23.10, 2.2.0
> Reporter: Thomas Graves
> Assignee: Mit Desai
> Priority: Critical
> Attachments: YARN-1670-b23.patch, YARN-1670-v2-b23.patch,
> YARN-1670-v2.patch, YARN-1670.patch, YARN-1670.patch
>
>
> We have seen exceptions when using 'yarn logs' to read log files.
> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Long.parseLong(Long.java:441)
> at java.lang.Long.parseLong(Long.java:483)
> at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.readAContainerLogsForALogType(AggregatedLogFormat.java:518)
> at org.apache.hadoop.yarn.logaggregation.LogDumper.dumpAContainerLogs(LogDumper.java:178)
> at org.apache.hadoop.yarn.logaggregation.LogDumper.run(LogDumper.java:130)
> at org.apache.hadoop.yarn.logaggregation.LogDumper.main(LogDumper.java:246)
> We traced it down to the reader trying to read the file type of the next
> file, but the position it reads from still contains log data from the
> previous file. What happened was that the Log Length was written as a
> certain size, but the log data was actually longer than that.
> Inside the write() routine in LogValue, it first writes out the log file
> length, but when it then writes the log itself it simply copies to the end
> of the file. There is a race condition here: if someone is still writing to
> the file when it is aggregated, the length that was written could be too
> small.
> The write() routine should stop once it has written whatever it declared as
> the length. It would be nice if we could somehow tell the user the log
> might be truncated, but I'm not sure of a good way to do this.
> We also noticed a bug in readAContainerLogsForALogType where it uses an int
> for curRead when it should be using a long:
> while (len != -1 && curRead < fileLength) {
> This isn't actually a problem right now, since the underlying decoder
> appears to do the right thing and the len condition exits the loop.
--
This message was sent by Atlassian JIRA
(v6.2#6252)