[ https://issues.apache.org/jira/browse/HDFS-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501276#comment-14501276 ]
Hadoop QA commented on HDFS-8179:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12726347/HDFS-8179.00.patch
against trunk revision f47a576.
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.TestFileCreation
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/10311//testReport/
Console output:
https://builds.apache.org/job/PreCommit-HDFS-Build/10311//console
This message is automatically generated.
> DFSClient#getServerDefaults returns null within 1 hour of system start
> ----------------------------------------------------------------------
>
> Key: HDFS-8179
> URL: https://issues.apache.org/jira/browse/HDFS-8179
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Priority: Blocker
> Attachments: HDFS-8179.00.patch
>
>
> We recently hit an NPE during the Ambari Oozie service check. The failed hdfs
> command is below. It repros sometimes and then goes away after the cluster has
> been running for a while.
> {code}
> [ambari-qa@c6401 ~]$ hadoop --config /etc/hadoop/conf fs -rm -r /user/ambari-qa/mapredsmokeoutput
> rm: Failed to get server trash configuration: null. Consider using -skipTrash option
> {code}
> With additional tracing, we narrowed the failure down to the following stack trace.
> {code}
> 15/04/17 20:57:12 DEBUG fs.Trash: Failed to get server trash configuration
> java.lang.NullPointerException
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:86)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:117)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:104)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:321)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:293)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:275)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:259)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:205)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:166)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> rm: Failed to get server trash configuration: null. Consider using -skipTrash option
> {code}
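> For context, the symptom in the title is consistent with a validity-window cache of the server defaults whose last-update timestamp starts at zero: while the clock value (time since system start) is still below the one-hour validity period, the refresh is skipped and the initial null is returned, which Trash#moveToAppropriateTrash then dereferences (Trash.java:86 in the trace above). The sketch below only illustrates that failure mode under those assumptions; it is not the actual DFSClient code or the attached patch, and all names in it are made up.
> {code}
> // Illustrative Java sketch (hypothetical names, not the real DFSClient):
> // a validity-window cache whose guard never checks for a missing value.
> public class ServerDefaultsCacheSketch {
>
>   /** Stand-in for FsServerDefaults; only the trash interval matters here. */
>   static class ServerDefaults {
>     final long trashIntervalMinutes;
>     ServerDefaults(long trashIntervalMinutes) {
>       this.trashIntervalMinutes = trashIntervalMinutes;
>     }
>   }
>
>   private static final long VALIDITY_PERIOD_MS = 60L * 60 * 1000; // 1 hour
>
>   private ServerDefaults cached;   // starts out null
>   private long lastUpdateMs;       // starts out 0
>
>   /** Hypothetical stand-in for the NameNode RPC; always returns defaults. */
>   private ServerDefaults fetchFromNameNode() {
>     return new ServerDefaults(360);
>   }
>
>   /**
>    * Buggy variant: only the age of the cache entry is checked. If the
>    * clock counts time since system start, then within the first hour
>    * (nowMs - 0) <= VALIDITY_PERIOD_MS, the refresh is skipped, and the
>    * caller receives null.
>    */
>   ServerDefaults getServerDefaultsBuggy(long nowMs) {
>     if (nowMs - lastUpdateMs > VALIDITY_PERIOD_MS) {
>       cached = fetchFromNameNode();
>       lastUpdateMs = nowMs;
>     }
>     return cached; // may still be null
>   }
>
>   /** Fixed variant: also refresh when nothing has been cached yet. */
>   ServerDefaults getServerDefaultsFixed(long nowMs) {
>     if (cached == null || nowMs - lastUpdateMs > VALIDITY_PERIOD_MS) {
>       cached = fetchFromNameNode();
>       lastUpdateMs = nowMs;
>     }
>     return cached;
>   }
>
>   public static void main(String[] args) {
>     ServerDefaultsCacheSketch c = new ServerDefaultsCacheSketch();
>     long tenMinutesAfterStart = 10L * 60 * 1000;
>     System.out.println(c.getServerDefaultsBuggy(tenMinutesAfterStart)); // null
>     System.out.println(c.getServerDefaultsFixed(tenMinutesAfterStart) != null); // true
>   }
> }
> {code}
> With a guard like the fixed variant (or an equivalent null check before returning), the trash-interval lookup in the rm path would no longer see a null result.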
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)