[
https://issues.apache.org/jira/browse/FALCON-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15041671#comment-15041671
]
Balu Vellanki edited comment on FALCON-1647 at 12/4/15 5:59 PM:
----------------------------------------------------------------
[~pallavi.rao] and [~peeyushb] :
While Pallavi's solution is good, it can introduce failures during concurrent
entity schedule requests. I think we should not introduce potential failures,
and should instead use 777, because here:
- Multiple users belonging to different groups need rw permissions to the same
dir.
- If the data under this dir is lost, it can be recovered.
- The data written under this dir does not have to be secure.
The staging dir is only used when scheduling an entity. Accidental deletion
(which is an extremely rare scenario) of data under this dir does not break any
scheduled feed/process. When a feed/process is re-scheduled, the staging dirs
will be created again.
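To make the permission argument concrete, here is a minimal, self-contained sketch (illustrative only, not Falcon code) of how the HDFS default umask turns a requested directory mode into the drwxr-xr-x seen in the error, and why an explicit chmod to 777 is what lets a second user write. The umask value 022 is an assumption matching the common fs.permissions.umask-mode default.

```java
// Sketch: HDFS mkdirs applies the configured umask (fs.permissions.umask-mode)
// to the requested mode, like POSIX mkdir: created = requested & ~umask.
public class UmaskDemo {
    // Apply a umask to a requested permission mode.
    static int applyUmask(int requested, int umask) {
        return requested & ~umask;
    }

    // "Other" write bit is 0002; user2 (not owner, not in group) needs it.
    static boolean otherCanWrite(int mode) {
        return (mode & 0002) != 0;
    }

    public static void main(String[] args) {
        int umask = 0022; // assumed default fs.permissions.umask-mode
        int created = applyUmask(0777, umask); // 0755 -> drwxr-xr-x
        System.out.printf("created mode: %o, other-writable: %b%n",
                created, otherCanWrite(created));
        // Explicitly setting 777 after create is what allows user2 to write.
        System.out.printf("after chmod 777, other-writable: %b%n",
                otherCanWrite(0777));
    }
}
```

This is why relying on the create-time mode is user-dependent: whoever schedules first owns the dir with umask-derived 755, and every other user is denied WRITE.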
> Unable to create feed : FilePermission error under cluster staging directory
> ----------------------------------------------------------------------------
>
> Key: FALCON-1647
> URL: https://issues.apache.org/jira/browse/FALCON-1647
> Project: Falcon
> Issue Type: Bug
> Components: feed
> Affects Versions: 0.8
> Reporter: Balu Vellanki
> Assignee: Balu Vellanki
> Fix For: 0.9
>
> Attachments: FALCON-1647.patch
>
>
> Submit a cluster entity as user "user1", schedule a feed entity as "user1".
> Now submit and schedule a feed entity as "user2" and feed submission can fail
> with the following error
> {code}
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=user2, access=WRITE,
> inode="/apps/falcon-user1/staging/falcon/workflows/feed":user1:falcon:drwxr-xr-x
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
> at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
> {code}
> This happens because Falcon creates <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process only when a feed/process entity is
> scheduled. The owner of these dirs is the user scheduling the entity, and the
> permissions are based on the default umask of the FS. If a new feed/process
> entity is then scheduled by a different user, scheduling can fail.
> The solution is to make <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process owned by Falcon with permissions 777.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)