[
https://issues.apache.org/jira/browse/FALCON-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15041447#comment-15041447
]
Pallavi Rao commented on FALCON-1647:
-------------------------------------
I have never been in favor of having 777 permissions even for the staging dir.
Anyone can accidentally delete all the dirs under it, and I believe that is
[~peeyushb]'s concern too.
But I understand and appreciate the problem [~bvellanki] is trying to solve.
If the staging dir is in user space, Falcon won't have permission to write to
it. If staging is in Falcon space (as it is now), all users will need write
permission to it.
Tricky... Allow me to propose a sub-optimal solution that is still better than
a permanent 777 :-).
Let "falcon" be the owner of <staging_dir>/falcon/workflows/{feed,process} with
755 permissions. When "user1" schedules "entity1", we do the following:
1. As "falcon", change the permissions of
<staging_dir>/falcon/workflows/{feed,process} to 777.
2. As "user1", create the "entity1" directory under it with default
permissions.
3. Change the permissions of <staging_dir>/falcon/workflows/{feed,process}
back to 755.
4. As "user1", create artifacts under
<staging_dir>/falcon/workflows/{feed,process}/entity1.
When two users try to schedule two entities at exactly the same time, there may
be a small window (while the permissions are being changed back to 755) in
which this will fail. In such a case, the user will have to retry; we just have
to return the right error message.
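The four steps above can be sketched as follows. This is a local-filesystem
sketch using POSIX permissions as a stand-in for HDFS, and it ignores the
user-switching between "falcon" and "user1" (in real Falcon code this would go
through the Hadoop FileSystem API with proxy-user doAs calls); the function and
directory names are illustrative, not actual Falcon APIs.

```python
import os
import tempfile


def schedule_entity(staging_dir, entity_name):
    """Sketch of the proposed permission dance (steps match the comment)."""
    # Step 1: as "falcon", temporarily open the parent dir up to 777 so
    # any scheduling user can create a subdirectory in it.
    os.chmod(staging_dir, 0o777)
    try:
        # Step 2: as "user1", create the entity dir with default permissions.
        entity_dir = os.path.join(staging_dir, entity_name)
        os.mkdir(entity_dir)
    finally:
        # Step 3: as "falcon", restore the parent dir to 755. Another user's
        # mkdir landing in this window may fail and would need a retry.
        os.chmod(staging_dir, 0o755)
    # Step 4: as "user1", create artifacts under entity_dir (elided here).
    return entity_dir


staging = tempfile.mkdtemp()
os.chmod(staging, 0o755)
path = schedule_entity(staging, "entity1")
print(oct(os.stat(staging).st_mode & 0o777))  # parent is back to 755
```

Note the try/finally: the parent must be restored to 755 even if the entity
directory creation fails, otherwise the dir is left world-writable.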
> Unable to create feed : FilePermission error under cluster staging directory
> ----------------------------------------------------------------------------
>
> Key: FALCON-1647
> URL: https://issues.apache.org/jira/browse/FALCON-1647
> Project: Falcon
> Issue Type: Bug
> Components: feed
> Affects Versions: 0.8
> Reporter: Balu Vellanki
> Assignee: Balu Vellanki
> Fix For: 0.9
>
> Attachments: FALCON-1647.patch
>
>
> Submit a cluster entity as user "user1" and schedule a feed entity as
> "user1". Now submit and schedule a feed entity as "user2"; the feed
> submission can fail with the following error:
> {code}
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=user2, access=WRITE,
> inode="/apps/falcon-user1/staging/falcon/workflows/feed":user1:falcon:drwxr-xr-x
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
> {code}
> This is caused because Falcon creates <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process only when a feed/process entity is
> scheduled. The owner of these dirs is the user scheduling the entity, and the
> permissions are based on the default umask of the FS. If a new feed/process
> entity is then scheduled by a different user, the operation can fail.
> The solution is to make <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process owned by Falcon with permissions 777.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)