[
https://issues.apache.org/jira/browse/FALCON-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044536#comment-15044536
]
Pallavi Rao commented on FALCON-1647:
-------------------------------------
[~bvellanki], I know of companies where 777 permissions are treated as a
security violation, so in the longer run it may not be good to enforce this
requirement. Another approach could be for Falcon to check whether it can
"write" to the appropriate staging dir, and leave it up to the admin to decide
how to grant that permission. It could either be that:
1. They provide 777 permission.
2. They add "falcon", "user1", "user2", etc. to a group (let's say,
"falcon_users") and give the dir 775 permission.
Having said that, I don't want to block this JIRA, because the 777 permission
is not introduced here; it has been an existing limitation/requirement, and
this JIRA only extends it to sub-directories. We can file another JIRA to get
rid of this requirement and move forward with this patch. Sounds good?
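The write-access check suggested above could be sketched as follows. This is a
local-filesystem analogue using java.nio (on HDFS one would use the Hadoop
FileSystem API instead); the class and method names are hypothetical, not
Falcon's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class StagingDirCheck {
    // Hypothetical check: fail fast at submission time if the staging dir
    // is not writable by the current user, instead of requiring a blanket
    // 777. The admin is free to satisfy the check with 777, or with 775
    // plus a shared group such as "falcon_users".
    static void ensureWritable(Path stagingDir) throws IOException {
        if (!Files.isDirectory(stagingDir)) {
            throw new IOException("Staging dir missing: " + stagingDir);
        }
        if (!Files.isWritable(stagingDir)) {
            throw new IOException("Cannot write to staging dir: " + stagingDir
                    + " (grant group write, e.g. 775 with a shared group, or 777)");
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo: a 775-style dir owned by the current user passes the check.
        Path dir = Files.createTempDirectory("falcon-staging-");
        Files.setPosixFilePermissions(dir,
                PosixFilePermissions.fromString("rwxrwxr-x"));
        ensureWritable(dir);  // current user owns the dir, so this succeeds
        System.out.println("OK");
    }
}
```

Either permission scheme passes this check, so the decision stays with the admin.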
> Unable to create feed : FilePermission error under cluster staging directory
> ----------------------------------------------------------------------------
>
> Key: FALCON-1647
> URL: https://issues.apache.org/jira/browse/FALCON-1647
> Project: Falcon
> Issue Type: Bug
> Components: feed
> Affects Versions: 0.8
> Reporter: Balu Vellanki
> Assignee: Balu Vellanki
> Fix For: 0.9
>
> Attachments: FALCON-1647.patch
>
>
> Submit a cluster entity as user "user1" and schedule a feed entity as "user1".
> Now submit and schedule a feed entity as "user2"; the feed submission can fail
> with the following error:
> {code}
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=user2, access=WRITE,
> inode="/apps/falcon-user1/staging/falcon/workflows/feed":user1:falcon:drwxr-xr-x
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
> {code}
> This happens because Falcon creates <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process only when a feed/process entity is
> scheduled. The owner of these dirs is the user scheduling the entity, and the
> permissions are based on the default umask of the FS. If a new feed/process
> entity is then scheduled by a different user, things can fail.
> The solution is to make <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process owned by "falcon" with permissions 777.
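The proposed fix amounts to creating the shared workflow subdirectories up
front with an explicit 777 mode, rather than letting the first scheduling user
create them with umask-derived permissions. A minimal local-filesystem sketch
using java.nio (the real patch operates on HDFS; class and method names here
are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class StagingLayout {
    // Sketch of the fix: pre-create the shared subdirectories so that the
    // first feed/process scheduled by any user already finds a writable dir.
    static void createSharedDirs(Path stagingRoot) throws IOException {
        for (String sub : new String[]{"falcon/workflows/feed",
                                       "falcon/workflows/process"}) {
            Path dir = stagingRoot.resolve(sub);
            Files.createDirectories(dir);
            // Set permissions explicitly after creation, so the process
            // umask cannot narrow the mode the way it did in the bug report.
            Files.setPosixFilePermissions(dir,
                    PosixFilePermissions.fromString("rwxrwxrwx"));
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("staging-");
        createSharedDirs(root);
        System.out.println(
                Files.isDirectory(root.resolve("falcon/workflows/feed")));
    }
}
```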
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)