[ https://issues.apache.org/jira/browse/HDFS-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319384#comment-15319384 ]

Chris Nauroth commented on HDFS-10488:
--------------------------------------

bq. So, Chris Nauroth, summarizing, fs.permissions.umask-mode should not be 
applied to directories/files created via WebHDFS.

I think a slight refinement of this is to say that it should not be applied by 
the WebHDFS server side (the NameNode).  It may be applied by the WebHDFS 
client side.  For example, the {{WebHdfsFileSystem}} class that ships in Hadoop 
does apply {{fs.permissions.umask-mode}} from the client side before calling 
the WebHDFS server side.
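
To illustrate, here is a minimal sketch (not the actual {{WebHdfsFileSystem}} 
code, just the public {{FsPermission}} API it builds on) of what that 
client-side umask application looks like; the 022 value below is only an 
example setting:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class ClientSideUmaskSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.permissions.umask-mode", "022");  // example client umask

    // The mode the client intends to request, e.g. 777 for a new directory.
    FsPermission requested = new FsPermission((short) 0777);

    // Apply the configured umask on the client side, before the HTTP
    // request ever reaches the NameNode.
    FsPermission umask = FsPermission.getUMask(conf);
    FsPermission applied = requested.applyUMask(umask);

    // Prints: rwxrwxrwx masked by ----w--w- -> rwxr-xr-x
    System.out.println(requested + " masked by " + umask + " -> " + applied);
  }
}
{noformat}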

bq. While working on this, I found out that the default permission (if no 
permission is specified when calling the method) for both directories and 
files created by WebHDFS is currently 755. However, defining "execution" 
permissions for HDFS files doesn't have any value. Should this be changed to 
give different default permissions for files and directories?

This part is admittedly odd, and there is a long-standing open JIRA requesting 
a change to 644 as the default for files.  That is HDFS-6434.  This change is 
potentially backwards-incompatible, such as if someone has an existing workflow 
that round-trips a file through HDFS and expects it to be executable after 
getting it back out, though that's likely a remote edge case.  If you'd like to 
proceed with HDFS-6434, then I'd suggest targeting trunk/Hadoop 3.x, where we 
currently can make backwards-incompatible changes.
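
For context, here is a small sketch (again using the public {{FsPermission}} 
API, not WebHDFS server code) of how 644 relates to the existing client-side 
base defaults; the 022 umask is just the common example value:

{noformat}
import org.apache.hadoop.fs.permission.FsPermission;

public class FileDefaultSketch {
  public static void main(String[] args) {
    // The client-side base defaults already distinguish the two cases:
    // 777 for directories and 666 for files.
    FsPermission dirBase = FsPermission.getDirDefault();    // rwxrwxrwx
    FsPermission fileBase = FsPermission.getFileDefault();  // rw-rw-rw-

    // With the usual 022 umask, the file base lands on exactly the 644
    // that HDFS-6434 proposes as the default for files.
    FsPermission umask = new FsPermission((short) 0022);
    System.out.println(dirBase.applyUMask(umask));   // rwxr-xr-x (755)
    System.out.println(fileBase.applyUMask(umask));  // rw-r--r-- (644)
  }
}
{noformat}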

bq. Still on the default values, setting 755 as the default can lead to 
confusion about the umask being used. Since the default umask is 022, users 
may conclude that the umask is being applied when they see that newly created 
directories get 755. Should this be changed to a more permissive permission 
such as 777?

I do think 777 makes sense from one perspective, but there is also a trade-off 
with providing behavior that is secure by default.  In HDFS-2427, the project 
chose 755 as the default, favoring secure-by-default behavior over the 
possibly more intuitive 777.

bq. When working on tests for WebHDFS CREATESYMLINK as suggested by Wei-Chiu 
Chuang, I realized this method is no longer supported. Should we simply remove 
it from WebHDFS, or only document that it is no longer supported and leave it 
returning the current error?

HDFS symlinks are currently in a state where the code is partially completed 
but dormant due to unresolved problems with backwards-compatibility and 
security.  We might get past those hurdles someday, so I suggest leaving that 
code as is.  We still run tests against the symlink code paths.  This works by 
having the tests call the private {{FileSystem#enableSymlinks}} method to 
toggle on the dormant symlink code.
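
As a rough sketch of what that looks like (the exact visibility and 
annotations of {{FileSystem#enableSymlinks}} vary across versions, so treat 
this as an assumption rather than a recipe), a symlink test flips that switch 
once before exercising the dormant code paths:

{noformat}
import org.apache.hadoop.fs.FileSystem;
import org.junit.BeforeClass;

public class SymlinkSmokeTest {
  @BeforeClass
  public static void enableDormantSymlinkSupport() {
    // Test-only switch described above; without it, symlink operations
    // are rejected as unsupported.
    FileSystem.enableSymlinks();
  }

  // ... tests exercising createSymlink/getFileLinkStatus would go here ...
}
{noformat}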

> WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating 
> files/directories without specifying permissions
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10488
>                 URL: https://issues.apache.org/jira/browse/HDFS-10488
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 2.6.0
>            Reporter: Wellington Chevreuil
>            Priority: Minor
>         Attachments: HDFS-10488.002.patch, HDFS-10488.003.patch, 
> HDFS-10488.patch
>
>
> WebHDFS methods for creating files/directories always create them with 755 
> permissions by default, ignoring any configured 
> *fs.permissions.umask-mode* in the case of directories.
> The DFS CLI, however, applies the configured umask to the 777 base 
> permission for directories, or to the 666 base permission for files.
> The example below shows the different behaviour when creating a directory 
> via the CLI and via WebHDFS:
> {noformat}
> 1) Creating a directory under '/test/' as 'test-user'. Configured 
> fs.permissions.umask-mode is 000: 
> $ sudo -u test-user hdfs dfs -mkdir /test/test-user1 
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user1 
> # file: /test/test-user1
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::rwx 
> other::rwx 
> 4) Doing the same via WebHDFS does not get the proper ACLs: 
> $ curl -i -X PUT "http://namenode-host:50070/webhdfs/v1/test/test-user2?user.name=test-user&op=MKDIRS"
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user2 
> # file: /test/test-user2 
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::r-x 
> other::r-x
> {noformat}


