-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/40894/#review108928
-----------------------------------------------------------
Ship it!

Nitpick: In createStagingSubdirs, you try to create two subdirectories, and the two pieces of code overlap. To simplify the code, it would be better to define a single function, e.g. createStagingSubdir, that creates one directory at a time, and call it twice to create the two subdirectories, 'feed' and 'process' in your case. A rough sketch of this refactoring is included below, after the quoted request.

- Ying Zheng


On Dec. 3, 2015, 4:56 a.m., Balu Vellanki wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/40894/
> -----------------------------------------------------------
> 
> (Updated Dec. 3, 2015, 4:56 a.m.)
> 
> 
> Review request for Falcon and Venkat Ranganathan.
> 
> 
> Bugs: falcon-1647
>     https://issues.apache.org/jira/browse/falcon-1647
> 
> 
> Repository: falcon-git
> 
> 
> Description
> -------
> 
> Submit a cluster entity as user "user1" and schedule a feed entity as "user1".
> Now submit and schedule another feed entity as "user2"; the feed submission can
> fail with the following error:
> 
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=user2, access=WRITE,
> inode="/apps/falcon-user1/staging/falcon/workflows/feed":user1:falcon:drwxr-xr-x
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
> 
> This happens because Falcon creates <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process only when a feed/process entity is
> scheduled. The owner of these dirs is the user scheduling the entity, and the
> permissions are based on the default umask of the FS. If a new feed/process
> entity is later scheduled by a different user, scheduling can fail.
> The solution is to make <staging_dir>/falcon/workflows/feed and
> <staging_dir>/falcon/workflows/process owned by Falcon with permissions 777.
> 
> 
> Diffs
> -----
> 
>   common/src/main/java/org/apache/falcon/entity/parser/ClusterEntityParser.java b4f61d7 
>   common/src/test/java/org/apache/falcon/entity/parser/ClusterEntityParserTest.java cd61a8c 
> 
> Diff: https://reviews.apache.org/r/40894/diff/
> 
> 
> Testing
> -------
> 
> End-to-end testing done: submitted a cluster as ambari-qa, then scheduled one
> feed as user ambari-qa and another feed as user root.
> 
> 
> Thanks,
> 
> Balu Vellanki
> 
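
A minimal sketch of the refactoring suggested above, assuming the parser already has a FileSystem handle and the cluster's staging Path available; the method names, Path arguments, and error handling are illustrative only, not the actual patch in ClusterEntityParser:

    // Requires: org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path,
    //           org.apache.hadoop.fs.permission.FsPermission, java.io.IOException

    private void createStagingSubdirs(FileSystem fs, Path stagingPath) throws IOException {
        // Call the single-directory helper once per subdirectory.
        createStagingSubdir(fs, new Path(stagingPath, "falcon/workflows/feed"));
        createStagingSubdir(fs, new Path(stagingPath, "falcon/workflows/process"));
    }

    private void createStagingSubdir(FileSystem fs, Path subdirPath) throws IOException {
        // World-writable (777) so any user scheduling a feed/process can write under it.
        FsPermission allRWX = new FsPermission((short) 0777);
        if (!fs.mkdirs(subdirPath, allRWX)) {
            throw new IOException("Unable to create staging directory: " + subdirPath);
        }
        // mkdirs() is subject to the FS umask, so set the permission explicitly as well.
        fs.setPermission(subdirPath, allRWX);
    }

Splitting it this way keeps the directory creation, permission setting, and error handling in one place, which was the point of the nitpick.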

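For the end-to-end testing mentioned in the quoted request, a quick programmatic check of the resulting permissions could look like the sketch below; the staging path and class name are placeholders and this is not part of ClusterEntityParserTest:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    // Hypothetical standalone check: after cluster submission, both staging
    // subdirectories should exist and be 777 so any entity owner can use them.
    public class StagingPermissionCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path staging = new Path("/apps/falcon/staging");  // placeholder staging location
            for (String subdir : new String[]{"falcon/workflows/feed", "falcon/workflows/process"}) {
                FsPermission perm = fs.getFileStatus(new Path(staging, subdir)).getPermission();
                if (perm.toShort() != 0777) {
                    throw new IllegalStateException(subdir + " is not 777, found " + perm);
                }
            }
            System.out.println("Staging subdirectories have the expected 777 permissions.");
        }
    }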