[
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth updated HADOOP-10354:
-----------------------------------
Description: After merging HDFS-4685 to trunk, some dev environments are
experiencing a failure to parse a permission string in TestWebHDFS. The
problem appears to occur only in environments with security extensions enabled
on the local file system, such as SELinux or ACLs. (was: Hi,
I'm seeing a trunk branch test failure locally (CentOS 6) today, and I have
identified the commit that caused the failure:
Author: Chris Nauroth <[email protected]> 2014-02-19 10:34:52
Committer: Chris Nauroth <[email protected]> 2014-02-19 10:34:52
Parent: 7215d12fdce727e1f4bce21a156b0505bd9ba72a (YARN-1666. Modified RM HA
handling of include/exclude node-lists to be available across RM failover by
making use of a remote configuration-provider. Contributed by Xuan Gong.)
Parent: 603ebb82b31e9300cfbf81ed5dd6110f1cb31b27 (HDFS-4685. Correct minor
whitespace difference in FSImageSerialization.java in preparation for trunk
merge.)
Child: ef8a5bceb7f3ce34d08a5968777effd40e0b1d0f (YARN-1171. Add default queue
properties to Fair Scheduler documentation (Naren Koneru via Sandy Ryza))
Branches: remotes/apache/HDFS-5535, remotes/apache/trunk, testv10, testv3,
testv4, testv7
Follows: testv5
Precedes:
Merge HDFS-4685 to trunk.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1569870
13f79535-47bb-0310-9956-ffa450edef68
I'm not sure whether other folks are seeing the same thing, or whether it is
related to my environment, but prior to this change I did not see this problem.
The failures are in TestWebHDFS:
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 3.687 sec <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHDFS
testLargeDirectory(org.apache.hadoop.hdfs.web.TestWebHDFS) Time elapsed: 2.478 sec <<< ERROR!
java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
        at org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
        at org.apache.hadoop.hdfs.web.TestWebHDFS.testLargeDirectory(TestWebHDFS.java:229)
testNamenodeRestart(org.apache.hadoop.hdfs.web.TestWebHDFS) Time elapsed: 0.342 sec <<< ERROR!
java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
        at org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
        at org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:886)
        at org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart(TestWebHDFS.java:216)
......
Thanks.
)
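The parse failure above can be illustrated with a minimal sketch. On a system with SELinux enabled (or POSIX ACLs set), ls appends a marker character ('.' or '+') to the 10-character mode string, producing the 11-character value "drwxrwxr-x." that a strict length check rejects. The class and helper below are hypothetical simplifications for illustration, not Hadoop's actual code:

```java
// Minimal sketch (NOT Hadoop's actual implementation) of why an
// 11-character mode string such as "drwxrwxr-x." fails a strict
// parser that expects exactly 10 characters.
public class PermissionSketch {

    // Mimics the strict length check that throws in FsPermission.valueOf
    // (hypothetical simplification of the real method).
    static void parse(String unixSymbolicPermission) {
        if (unixSymbolicPermission.length() != 10) {
            throw new IllegalArgumentException("length != 10(unixSymbolicPermission="
                + unixSymbolicPermission + ")");
        }
        // ... a real parser would decode the type char and rwx triplets here ...
    }

    // One possible fix (hypothetical helper): strip a trailing '.'
    // (SELinux security-context marker) or '+' (POSIX ACL marker)
    // before applying the length check.
    static String stripSecurityMarker(String s) {
        if (s.length() == 11 && (s.endsWith(".") || s.endsWith("+"))) {
            return s.substring(0, 10);
        }
        return s;
    }

    public static void main(String[] args) {
        String fromSelinuxLs = "drwxrwxr-x.";  // as produced by ls on CentOS 6 with SELinux
        try {
            parse(fromSelinuxLs);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        // After stripping the marker, the 10-character string parses cleanly.
        parse(stripSecurityMarker(fromSelinuxLs));
        System.out.println("parsed: " + stripSecurityMarker(fromSelinuxLs));
    }
}
```

This also explains why the failure is environment-dependent: on file systems without SELinux labels or ACLs, ls emits exactly 10 mode characters and the check passes.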
Summary: TestWebHDFS fails after merge of HDFS-4685 to trunk (was:
merge of HDFS-4685 to trunk introduced trunk test failure)
Moving the stack trace provided by Yongjun to the comment section; it is the
same trace quoted verbatim in the former description above.
> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---------------------------------------------------
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
> Reporter: Yongjun Zhang
> Assignee: Chris Nauroth
> Attachments: HADOOP-10354.1.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a
> failure to parse a permission string in TestWebHDFS. The problem appears to
> occur only in environments with security extensions enabled on the local file
> system, such as SELinux or ACLs.
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)