[ https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HADOOP-10354:
-----------------------------------

    Attachment: HADOOP-10354.1.patch

I'm attaching a patch with what I suspect is the fix.  [~yzhangal] or 
[~jingzhao], would one of you please try the patch in your environment to see 
if it works?  I'm not set up for a repro at the moment.

Here is a bit of background.  HADOOP-10220 enhanced {{FsPermission}} on the 
feature branch to add an ACL bit indicating whether the file has an ACL.  We 
then realized that the ACL bit had some problematic effects on the rest of the 
design, so we reverted it in HDFS-5923 and took a different approach.  However, 
we forgot to revert a change I had made in {{RawLocalFileSystem}}.  Previously, 
there was a special case that ignored the extra character appended to the 
permission string on local file systems using SELinux or ACLs.  Now that the 
ACL bit is gone, we can simply restore the old version of that code.
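
To make the intended change concrete, here is a minimal sketch (not the patch itself; the class and helper names are made up) of the kind of special case being restored: truncate the mode string reported by {{ls -l}} back to 10 characters before handing it to {{FsPermission.valueOf}}, so the trailing '.' (SELinux) or '+' (ACLs) no longer trips the length check.

{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionParseSketch {
  /** Hypothetical helper, not the actual Hadoop method name. */
  static FsPermission parseLsMode(String mode) {
    // SELinux appends '.' and POSIX ACLs append '+' as an 11th character,
    // e.g. "drwxrwxr-x.", but FsPermission.valueOf only accepts 10 characters.
    if (mode.length() > 10) {
      mode = mode.substring(0, 10);  // "drwxrwxr-x." -> "drwxrwxr-x"
    }
    return FsPermission.valueOf(mode);
  }

  public static void main(String[] args) {
    // Parses cleanly instead of throwing IllegalArgumentException.
    System.out.println(parseLsMode("drwxrwxr-x."));
  }
}
{code}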

Jenkins will -1 this patch for no new tests, but there isn't any way to write a 
reliable test for this, since it depends on the semantics of the dev 
environment's underlying local file system.
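
For anyone without an SELinux box handy: the reported failure reduces to the 11-character mode string reaching {{FsPermission.valueOf}} unmodified, so something like the following (a sketch, not a proposed test) reproduces the exception on any platform.

{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public class Repro {
  public static void main(String[] args) {
    // The 11-character string that `ls -l` produces for an SELinux-labeled
    // directory; valueOf throws:
    //   java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
    FsPermission.valueOf("drwxrwxr-x.");
  }
}
{code}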

> merge of HDFS-4685 to trunk introduced trunk test failure
> ---------------------------------------------------------
>
>                 Key: HADOOP-10354
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10354
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 3.0.0
>         Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>            Reporter: Yongjun Zhang
>            Assignee: Chris Nauroth
>         Attachments: HADOOP-10354.1.patch
>
>
> Hi,
> I'm seeing trunk branch test failures locally (CentOS 6) today, and I have 
> identified the following commit as the cause:
> Author: Chris Nauroth <[email protected]>  2014-02-19 10:34:52
> Committer: Chris Nauroth <[email protected]>  2014-02-19 10:34:52
> Parent: 7215d12fdce727e1f4bce21a156b0505bd9ba72a (YARN-1666. Modified RM HA handling of include/exclude node-lists to be available across RM failover by making using of a remote configuration-provider. Contributed by Xuan Gong.)
> Parent: 603ebb82b31e9300cfbf81ed5dd6110f1cb31b27 (HDFS-4685. Correct minor whitespace difference in FSImageSerialization.java in preparation for trunk merge.)
> Child:  ef8a5bceb7f3ce34d08a5968777effd40e0b1d0f (YARN-1171. Add default queue properties to Fair Scheduler documentation (Naren Koneru via Sandy Ryza))
> Branches: remotes/apache/HDFS-5535, remotes/apache/trunk, testv10, testv3, testv4, testv7
> Follows: testv5
> Precedes: 
>     Merge HDFS-4685 to trunk.
>     
>     git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1569870 13f79535-47bb-0310-9956-ffa450edef68
> I'm not sure whether other folks are seeing the same thing, or whether it is 
> specific to my environment, but prior to this change I did not see this 
> problem.
> The failures are in TestWebHDFS:
> Running org.apache.hadoop.hdfs.web.TestWebHDFS
> Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 3.687 sec <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHDFS
> testLargeDirectory(org.apache.hadoop.hdfs.web.TestWebHDFS)  Time elapsed: 2.478 sec  <<< ERROR!
> java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
>         at org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
>         at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
>         at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
>         at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
>         at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
>         at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
>         at org.apache.hadoop.hdfs.web.TestWebHDFS.testLargeDirectory(TestWebHDFS.java:229)
> testNamenodeRestart(org.apache.hadoop.hdfs.web.TestWebHDFS)  Time elapsed: 0.342 sec  <<< ERROR!
> java.lang.IllegalArgumentException: length != 10(unixSymbolicPermission=drwxrwxr-x.)
>         at org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
>         at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
>         at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
>         at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
>         at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
>         at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
>         at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
>         at org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:886)
>         at org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart(TestWebHDFS.java:216)
> ......
> Thanks.



