[ https://issues.apache.org/jira/browse/HADOOP-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343802#comment-15343802 ]
Chris Nauroth commented on HADOOP-13310:
----------------------------------------
Here is an example that demonstrates the problem. An awk script performs
positional parsing on {{hadoop fs -ls}} output to determine the length of a
file. The script works fine against HDFS, but fails when retargeted to an S3A
URI, because S3A's ls output contains no group column, so every field after the
owner shifts left by one position.
If {{S3AFileStatus#getGroup}} returned a stubbed string, such as "nobody" or
"dr.who", then scripts wouldn't have this problem.
{code}
> hdfs dfs -ls /dir1/file1
-rw-rw---- 3 chris supergroup 6 2016-06-21 23:33 /dir1/file1
> hadoop fs -ls s3a://cnauroth-test-aws-s3a/dir1/file1
-rw-rw-rw- 1 chris 6 2016-06-21 23:32 s3a://cnauroth-test-aws-s3a/dir1/file1
> hdfs dfs -ls /dir1/file1 | awk '{ print $5 }'
6
> hadoop fs -ls s3a://cnauroth-test-aws-s3a/dir1/file1 | awk '{ print $5 }'
2016-06-21
{code}
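As an aside, one way a script could tolerate the missing group column today is to index fields from the right rather than the left. This is a sketch of such a workaround (not part of the original report): {{$(NF-3)}} lands on the size field whether or not a group column is present, since the three fields after the size (date, time, path) are stable in both layouts.

```shell
#!/bin/sh
# Sample ls output lines, copied from the examples above.
hdfs_line='-rw-rw----   3 chris supergroup          6 2016-06-21 23:33 /dir1/file1'
s3a_line='-rw-rw-rw-   1 chris          6 2016-06-21 23:32 s3a://cnauroth-test-aws-s3a/dir1/file1'

# Count from the right: size is always the 4th field from the end,
# so the missing group column on S3A does not shift it.
echo "$hdfs_line" | awk '{ print $(NF-3) }'   # prints 6
echo "$s3a_line"  | awk '{ print $(NF-3) }'   # prints 6
```

This only sidesteps the symptom for this particular script; stubbing the group value in {{S3AFileStatus}} would fix all positional-parsing consumers at once.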
> S3A reporting of file group as null is harmful to compatibility for the shell.
> ------------------------------------------------------------------------------
>
> Key: HADOOP-13310
> URL: https://issues.apache.org/jira/browse/HADOOP-13310
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Chris Nauroth
> Priority: Minor
>
> S3A does not persist group information in file metadata. Instead, it stubs
> the value of the group to an empty string. Although the JavaDocs for
> {{FileStatus#getGroup}} indicate that empty string is a possible return
> value, this is likely to cause compatibility problems. Most notably, shell
> scripts that expect to be able to perform positional parsing on the output of
> things like {{hadoop fs -ls}} will stop working if retargeted from HDFS to
> S3A.