Hello Shashi,

It appears that you have applied a default ACL to /user, then attempted to put 
a file into /user, and you are expecting the default ACL to grant authorization 
for user shashi to do that.  A default ACL does not influence the actual 
permission checks performed by HDFS, so if user shashi does not have the 
necessary access through the regular HDFS permissions or an access ACL, then 
the default ACL won't grant it.

If your goal is to allow user shashi to put a file into /user, then perhaps 
what you want to do is add an access ACL entry instead of a default ACL entry.  
To do that, remove the "default:" prefix from the ACL entry in your setfacl 
command:

    hdfs dfs -setfacl -m user:shashi:rwx /user
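
If you prefer to do the same thing through the Java API, here is a minimal 
sketch (untested, and the path is just for illustration) using 
FileSystem#modifyAclEntries, which merges new entries into the existing ACL 
the same way "-setfacl -m" does:

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.AclEntry;
    import org.apache.hadoop.fs.permission.AclEntryScope;
    import org.apache.hadoop.fs.permission.AclEntryType;
    import org.apache.hadoop.fs.permission.FsAction;

    public class AddAccessAclEntry {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Equivalent of: hdfs dfs -setfacl -m user:shashi:rwx /user
        AclEntry entry = new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)  // ACCESS, not DEFAULT
            .setType(AclEntryType.USER)
            .setName("shashi")
            .setPermission(FsAction.ALL)
            .build();
        fs.modifyAclEntries(new Path("/user"),
            Collections.singletonList(entry));
      }
    }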

A default ACL only defines the ACL entries that are automatically applied to 
new directories and files created under that directory.  Note that applying a 
default ACL does not alter anything for sub-directories that already exist.  
The default ACL is copied from parent to child at the time the file or 
sub-directory is created.  In your example, if /user/test1, /user/test2 and 
/user/test3 already existed before you ran the setfacl command, then nothing 
would have changed for those directories.  However, if after the setfacl 
command you ran something like "hdfs dfs -mkdir /user/test4", then the default 
ACL of /user would be copied down to /user/test4 as both its default ACL and 
its access ACL.
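
One way to see that copy-down behavior from Java is a small sketch like the 
following (untested, with an assumed path), which creates a new child 
directory and prints its ACL; the printed entries would include those copied 
from the parent's default ACL:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.AclStatus;

    public class ShowCopiedAcl {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path child = new Path("/user/test4");
        // The default ACL of /user is copied to the child at creation time.
        fs.mkdirs(child);
        AclStatus status = fs.getAclStatus(child);
        // Prints the ACL entries, including those copied from the parent's
        // default ACL.
        System.out.println(status);
      }
    }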

For more details on the differences between an access ACL and a default ACL, 
please refer to the HDFS Permissions Guide documentation.

http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists

--Chris Nauroth

From: Shashi Vishwakarma <shashi.vish...@gmail.com>
Date: Monday, September 19, 2016 at 12:16 AM
To: Rakesh Radhakrishnan <rake...@apache.org>
Cc: "user.hadoop" <user@hadoop.apache.org>
Subject: Re: HDFS ACL | Unable to define ACL automatically for child folders

Thanks a lot Rakesh. The above information is very helpful.

Thanks
Shashi

On Mon, Sep 19, 2016 at 12:39 PM, Rakesh Radhakrishnan 
<rake...@apache.org> wrote:
AFAIK, there is no Java API available for that directly. Perhaps you could do a 
recursive directory listing for a path and invoke the #setAcl Java API for each 
entry.
https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/FileSystem.html#setAcl(org.apache.hadoop.fs.Path,%20java.util.List)
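
A minimal sketch of that approach (untested; it uses 
FileSystem#modifyAclEntries rather than #setAcl so that existing entries are 
merged instead of replaced, which matches what "-setfacl -m" does):

    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.AclEntry;

    public class RecursiveSetAcl {
      // Apply the given ACL entries to path and to everything beneath it.
      static void applyRecursively(FileSystem fs, Path path,
          List<AclEntry> entries) throws Exception {
        fs.modifyAclEntries(path, entries);
        if (fs.getFileStatus(path).isDirectory()) {
          for (FileStatus child : fs.listStatus(path)) {
            applyRecursively(fs, child.getPath(), entries);
          }
        }
      }

      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // "default:" entries are only valid on directories, so an access
        // entry is used here so the call also succeeds on plain files.
        List<AclEntry> entries =
            AclEntry.parseAclSpec("user:shashi:rwx", true);
        applyRecursively(fs, new Path("/user"), entries);
      }
    }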

Rakesh

On Mon, Sep 19, 2016 at 11:22 AM, Shashi Vishwakarma 
<shashi.vish...@gmail.com> wrote:

Thanks Rakesh.

Just one last question: is there any Java API available for recursively 
applying an ACL, or do I need to iterate over all the folders in the directory 
and apply the ACL to each one?

Thanks
Shashi

On 19 Sep 2016 9:56 am, "Rakesh Radhakrishnan" 
<rake...@apache.org> wrote:
It looks like '/user/test3' has owner "hdfs" and is denying access when 
operations are performed as the "shashi" user. One idea is to recursively set 
the ACL on sub-directories and files as follows:

    hdfs dfs -setfacl -R -m default:user:shashi:rwx /user

The -R option applies the operation to all files and directories recursively.

Regards,
Rakesh

On Sun, Sep 18, 2016 at 8:53 PM, Shashi Vishwakarma 
<shashi.vish...@gmail.com> wrote:
I have the following scenario. There is a parent folder /user in HDFS with five 
child folders, such as test1, test2 and test3:

    /user/test1
    /user/test2
    /user/test3

I applied an ACL on the parent folder to make sure the user automatically has 
access to the child folders.

     hdfs dfs -setfacl -m default:user:shashi:rwx /user


But when I try to put a file, it gives a permission denied exception:

    hadoop fs -put test.txt  /user/test3
    put: Permission denied: user=shashi, access=WRITE, 
inode="/user/test3":hdfs:supergroup:drwxr-xr-x

getfacl output:

    hadoop fs -getfacl /user/test3
    # file: /user/test3
    # owner: hdfs
    # group: supergroup
    user::rwx
    group::r-x
    other::r-x

Any pointers on this?

Thanks
Shashi


