[
https://issues.apache.org/jira/browse/HDFS-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907734#comment-13907734
]
Jing Zhao commented on HDFS-5989:
---------------------------------
I've hit the same test failure. The cause on my machine was that ACLs were
enabled on my MacBook; after disabling them, the test passed.
But in the meantime, do we want to loosen the check a little for the unit
test?
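For reference, the check that fails is FsPermission.valueOf's requirement that
the symbolic permission string be exactly 10 characters, while `ls` on systems
with POSIX ACLs or SELinux enabled appends an 11th marker character ('+' or
'.', as in "drwxrwxr-x." above). A minimal sketch of what a loosened check
could look like (a hypothetical standalone helper for illustration, not the
actual Hadoop fix):

```java
// Hypothetical sketch: tolerate the trailing ACL/SELinux marker that
// `ls -ld` appends on some systems, instead of rejecting any string
// whose length is not exactly 10.
public class LenientPermissionParser {

    /**
     * Returns the 10-character symbolic permission string, stripping an
     * optional trailing '+' (ACLs present) or '.' (SELinux context) marker.
     */
    static String normalize(String unixSymbolicPermission) {
        if (unixSymbolicPermission.length() == 11) {
            char last = unixSymbolicPermission.charAt(10);
            if (last == '+' || last == '.') {
                return unixSymbolicPermission.substring(0, 10);
            }
        }
        return unixSymbolicPermission;
    }

    public static void main(String[] args) {
        // The string from the test failure above:
        System.out.println(normalize("drwxrwxr-x."));  // prints drwxrwxr-x
        // A plain mode string passes through unchanged:
        System.out.println(normalize("drwxr-xr-x"));   // prints drwxr-xr-x
    }
}
```

With a normalization step like this in front of the length check, the unit
test would pass regardless of whether the build machine has ACLs or SELinux
enabled.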
> merge of HDFS-4685 to trunk introduced trunk test failure
> ---------------------------------------------------------
>
> Key: HDFS-5989
> URL: https://issues.apache.org/jira/browse/HDFS-5989
> Project: Hadoop HDFS
> Issue Type: Bug
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
> Reporter: Yongjun Zhang
>
> Hi,
> I'm seeing a trunk branch test failure locally (CentOS 6) today, and I
> identified this commit as the cause:
> Author: Chris Nauroth <[email protected]> 2014-02-19 10:34:52
> Committer: Chris Nauroth <[email protected]> 2014-02-19 10:34:52
> Parent: 7215d12fdce727e1f4bce21a156b0505bd9ba72a (YARN-1666. Modified RM HA
> handling of include/exclude node-lists to be available across RM failover by
> making using of a remote configuration-provider. Contributed by Xuan Gong.)
> Parent: 603ebb82b31e9300cfbf81ed5dd6110f1cb31b27 (HDFS-4685. Correct minor
> whitespace difference in FSImageSerialization.java in preparation for trunk
> merge.)
> Child: ef8a5bceb7f3ce34d08a5968777effd40e0b1d0f (YARN-1171. Add default
> queue properties to Fair Scheduler documentation (Naren Koneru via Sandy
> Ryza))
> Branches: remotes/apache/HDFS-5535, remotes/apache/trunk, testv10, testv3,
> testv4, testv7
> Follows: testv5
> Precedes:
> Merge HDFS-4685 to trunk.
>
> git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1569870
> 13f79535-47bb-0310-9956-ffa450edef68
> I'm not sure whether other folks are seeing the same thing, or whether it's
> related to my environment. But prior to this change, I didn't see this
> problem.
> The failures are in TestWebHDFS:
> Running org.apache.hadoop.hdfs.web.TestWebHDFS
> Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 3.687 sec <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHDFS
> testLargeDirectory(org.apache.hadoop.hdfs.web.TestWebHDFS) Time elapsed: 2.478 sec <<< ERROR!
> java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
> at org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
> at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
> at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
> at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
> at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
> at org.apache.hadoop.hdfs.web.TestWebHDFS.testLargeDirectory(TestWebHDFS.java:229)
> testNamenodeRestart(org.apache.hadoop.hdfs.web.TestWebHDFS) Time elapsed: 0.342 sec <<< ERROR!
> java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
> at org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
> at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
> at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
> at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
> at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
> at org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:886)
> at org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart(TestWebHDFS.java:216)
> ......
> Thanks.
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)