[ https://issues.apache.org/jira/browse/HDFS-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057293#comment-14057293 ]

J.Andreina commented on HDFS-6654:
----------------------------------

I was confused by Test-Plan-for-Extended-Acls-2.pdf attached to HDFS-4685. The 
first scenario mentioned in the issue works fine once executable permission is 
given to User1.

It would be helpful if the following scenario were updated in the test plan.


Scenario No : 18
Summary     :
        Set an extended ACL granting Dan and Carla read access.

                hdfs dfs -chmod -R 640 /user/bruce/ParentDir
                hdfs dfs -setfacl -R -m user:Dan:r--,user:Carla:r-- /user/bruce/ParentDir
                hdfs dfs -getfacl -R /user/bruce/ParentDir
Expected Result:
                Extended ACLs should be applied to all files/dirs inside 
ParentDir

In the above summary, instead of granting only read permission, execute 
permission should also be granted, as below:

        hdfs dfs -setfacl -R -m user:Dan:r-x,user:Carla:r-x /user/bruce/ParentDir
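The reason execute permission matters here can be sketched with a minimal local model (this is plain Python, not HDFS code; the names `can_create_file`, `effective`, and the `dir1` dict are illustrative assumptions). In POSIX-style ACLs, a named-user entry is filtered through the mask, and writing a file into a directory requires both WRITE and EXECUTE (traverse) permission on that directory:

```python
# Minimal sketch of POSIX-style ACL permission checks (not HDFS source code).
# The helper names and the acl dict layout are illustrative assumptions.

R, W, X = 4, 2, 1  # read / write / execute permission bits


def effective(perms: int, mask: int) -> int:
    """Named-user and named-group ACL entries are filtered through the mask."""
    return perms & mask


def can_create_file(dir_acl: dict, user: str) -> bool:
    """Creating a file inside a directory needs WRITE *and* EXECUTE on it."""
    entry = dir_acl["named_users"].get(user, 0)
    eff = effective(entry, dir_acl["mask"])
    return bool(eff & W) and bool(eff & X)


# Matches the getfacl output in the report: user:User2:rw-, mask::rw-
dir1 = {"named_users": {"User2": R | W}, "mask": R | W}
print(can_create_file(dir1, "User2"))   # False: no EXECUTE bit on the directory

# After granting execute as well (e.g. hdfs dfs -setfacl -R -m user:User2:rwx /Dir1)
dir1 = {"named_users": {"User2": R | W | X}, "mask": R | W | X}
print(can_create_file(dir1, "User2"))   # True: traversal is now allowed
```

This is why the `put` in the quoted report fails with `access=EXECUTE`: the ACL grants User2 `rw-`, but traversing `/Dir1` to create the file requires the `x` bit.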

> Setting Extended ACLs recursively for another user belonging to the same 
> group is not working
> -----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6654
>                 URL: https://issues.apache.org/jira/browse/HDFS-6654
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.4.1
>            Reporter: J.Andreina
>
> {noformat}
> 1. Setting an Extended ACL recursively for a user belonging to the same group 
> is not working
> {noformat}
> Step 1: Created a Dir1 with User1
>                 ./hdfs dfs -mkdir /Dir1
> Step 2: Changed the permission (600) for Dir1 recursively
>                ./hdfs dfs -chmod -R 600 /Dir1
> Step 3: setfacls is executed to give read and write permissions to User2 
> which belongs to the same group as User1
>                ./hdfs dfs -setfacl -R -m user:User2:rw- /Dir1
>                ./hdfs dfs -getfacl -R /Dir1
>                          No GC_PROFILE is given. Defaults to medium.
>                        # file: /Dir1
>                        # owner: User1
>                        # group: supergroup
>                        user::rw-
>                        user:User2:rw-
>                        group::---
>                        mask::rw-
>                        other::---
> Step 4: Now unable to write a File to Dir1 from User2
>            ./hdfs dfs -put hadoop /Dir1/1
> No GC_PROFILE is given. Defaults to medium.
> put: Permission denied: user=User2, access=EXECUTE, 
> inode="/Dir1":User1:supergroup:drw-------
> {noformat}
>    2. Fetching filesystem name , when one of the disk configured for NN dir 
> becomes full returns a value "null".
> {noformat}
> 2014-07-08 09:23:43,020 WARN 
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space 
> available on volume 'null' is 101060608, which is below the configured 
> reserved amount 104857600
> 2014-07-08 09:23:43,020 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2014-07-08 09:23:43,166 WARN 
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space 
> available on volume 'null' is 101060608, which is below the configured 
> reserved amount 104857600
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)
