[ https://issues.apache.org/jira/browse/HDFS-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969339#comment-14969339 ]
Hadoop QA commented on HDFS-9284:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch | 24m 36s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| {color:green}+1{color} | javac | 10m 58s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 15m 13s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 31s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 1m 58s | The applied patch generated 1 new checkstyle issues (total was 40, now 41). |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 2m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 58s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 3m 45s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 5m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 66m 6s | Tests failed in hadoop-hdfs. |
| | | 132m 1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.TestRecoverStripedFile |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestStripedINodeFile |
| | org.apache.hadoop.hdfs.server.namenode.TestQuotaByStorageType |
| | org.apache.hadoop.hdfs.server.namenode.TestFsck |
| | org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12768035/HDFS-9284_00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 381610d |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13124/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/13124/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13124/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13124/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13124/console |
This message was automatically generated.
> fsck command should not print exception trace when file not found
> ------------------------------------------------------------------
>
> Key: HDFS-9284
> URL: https://issues.apache.org/jira/browse/HDFS-9284
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jagadesh Kiran N
> Assignee: Jagadesh Kiran N
> Attachments: HDFS-9284_00.patch
>
>
> When a file doesn't exist, fsck prints a full exception trace. For example, running
> {code}
> ./hdfs fsck /kiran
> {code}
> produces the following output:
> {code}
> WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
> platform... using builtin-java classes where applicable
> FileSystem is inaccessible due to:
> java.io.FileNotFoundException: File does not exist: /kiran
> at
> org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1273)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1265)
> at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1265)
> at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:755)
> at org.apache.hadoop.hdfs.tools.DFSck.getResolvedPath(DFSck.java:236)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:316)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:73)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:155)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:152)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:151)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:383)
> {code}
> but only the error message {code}File does not exist: /kiran{code} should be
> printed. The current code in DFSck is:
> {code}
> } catch (IOException ioe) {
>   System.err.println("FileSystem is inaccessible due to:\n"
>       + StringUtils.stringifyException(ioe));
> }
> {code}
> I think it should use the ioe.getMessage() method instead:
> {code}
> } catch (IOException ioe) {
>   System.err.println("FileSystem is inaccessible due to:\n"
>       + ioe.getMessage());
> }
> {code}
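> A minimal, illustration-only sketch of this kind of change (the class and
> method names below are made up and are not from the attached patch): print
> just the message for the expected file-not-found case, and keep the full
> trace for other, unexpected IOExceptions.
> {code}
> import java.io.FileNotFoundException;
> import java.io.IOException;
>
> import org.apache.hadoop.util.StringUtils;
>
> // Hypothetical sketch; FsckErrorReporting and report() are illustrative names.
> public class FsckErrorReporting {
>
>   // Print only the message for an expected "file not found" error,
>   // but keep the full stack trace for unexpected IOExceptions.
>   static void report(IOException ioe) {
>     if (ioe instanceof FileNotFoundException) {
>       System.err.println("FileSystem is inaccessible due to:\n"
>           + ioe.getMessage());
>     } else {
>       System.err.println("FileSystem is inaccessible due to:\n"
>           + StringUtils.stringifyException(ioe));
>     }
>   }
>
>   public static void main(String[] args) {
>     // Mimics the failure shown in the report above.
>     report(new FileNotFoundException("File does not exist: /kiran"));
>   }
> }
> {code}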