[ 
https://issues.apache.org/jira/browse/FALCON-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046478#comment-15046478
 ] 

sandeep samudrala commented on FALCON-1647:
-------------------------------------------

Patch looks good.
But taking a step back: why should the directories be owned by the user 
who submitted the entity, and why shouldn't Falcon own them completely? In the 
long run, Falcon should accept the changes (updates, schedules, submits, 
changes in lib paths) and mirror them in the staging directory, which should 
not be modifiable by the end user. 

That way, Falcon can show the various versions of how mutations have happened 
over time, and avoid accidental deletions and other issues. Falcon should own 
the directories. 

Raised the jira below for the same; please add comments/suggestions there:
https://issues.apache.org/jira/browse/FALCON-1649

Will replicate the conversations over there.

> Unable to create feed : FilePermission error under cluster staging directory
> ----------------------------------------------------------------------------
>
>                 Key: FALCON-1647
>                 URL: https://issues.apache.org/jira/browse/FALCON-1647
>             Project: Falcon
>          Issue Type: Bug
>          Components: feed
>    Affects Versions: 0.8
>            Reporter: Balu Vellanki
>            Assignee: Balu Vellanki
>             Fix For: 0.9
>
>         Attachments: FALCON-1647.patch
>
>
> Submit a cluster entity as user "user1", then schedule a feed entity as 
> "user1". Now submit and schedule a feed entity as "user2"; the feed 
> submission can fail with the following error:
> {code}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=user2, access=WRITE, 
> inode="/apps/falcon-user1/staging/falcon/workflows/feed":user1:falcon:drwxr-xr-x
>    at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
>    at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
>    at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
> {code}
> This happens because Falcon creates <staging_dir>/falcon/workflows/feed and 
> <staging_dir>/falcon/workflows/process only when a feed/process entity is 
> scheduled. The owner of these dirs is the user scheduling the entity, and 
> the permissions are based on the default umask of the FS. If a feed/process 
> entity is later scheduled by a different user, scheduling can fail.
> The solution is to make <staging_dir>/falcon/workflows/feed and 
> <staging_dir>/falcon/workflows/process owned by Falcon with permissions 777. 
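
The fix described above can be sketched with the Hadoop FileSystem API. This is a minimal illustration of the approach, not the actual FALCON-1647.patch; the staging root and class name are hypothetical, and running it requires a configured Hadoop client:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class StagingDirSetup {
    // Hypothetical staging root; the real value comes from the cluster entity.
    private static final String STAGING = "/apps/falcon/staging/falcon/workflows";

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FsPermission allRwx = new FsPermission((short) 0777);
        for (String sub : new String[] {"feed", "process"}) {
            Path dir = new Path(STAGING, sub);
            fs.mkdirs(dir, allRwx);
            // mkdirs applies the client umask to the requested permission,
            // so set it again explicitly to guarantee drwxrwxrwx regardless
            // of fs.permissions.umask-mode.
            fs.setPermission(dir, allRwx);
        }
    }
}
```

The key detail is the second `setPermission` call: creating the directories once at cluster-submission time (as the Falcon service user) means later feed/process scheduling by any user finds writable directories already in place.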



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
