[ 
https://issues.apache.org/jira/browse/HIVE-26887?focusedWorklogId=837251&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-837251
 ]

ASF GitHub Bot logged work on HIVE-26887:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Jan/23 14:49
            Start Date: 05/Jan/23 14:49
    Worklog Time Spent: 10m 
      Work Description: skysiders commented on PR #3894:
URL: https://github.com/apache/hive/pull/3894#issuecomment-1372314783

   Hi @cnauroth, thanks for your review!
   What I want to describe here is a programming pattern for special 
permissions: after setting a special permission, we should verify that it was 
actually applied. You are right that the 700 permission here is unlikely to 
cause a defect, but I think good programming style means confirming that file 
permissions are granted correctly. That is why I first check whether the 
desired permission is already in effect, and only set it explicitly if it is 
not. Moreover, if this permission were later changed to 770 or another value 
that the umask can affect, a real defect would appear. So, from a rigorous 
point of view, I think the extra permission check is worthwhile.
   
   I have reported similar flaws in 
[STORM-3862](https://github.com/apache/storm/pull/3478), 
[TEZ-4412](https://github.com/apache/tez/pull/209), and 
[HBASE-26994](https://github.com/apache/hbase/pull/4391).
   
   Finally, regarding the FileSystem.mkdirs(fs, path, perm) you mentioned, I 
agree with your point of view. What I would still emphasize is that the idea 
behind this kind of fix is not to blindly set permissions on the directory, 
but to check them first: as the code shows, if the permissions are already 
correct, we do not need to set them again.
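
   The check-then-set pattern I am describing can be sketched in plain Java, 
using java.nio.file as a stand-in for the Hadoop FileSystem API (the class and 
method names below are illustrative, not Hive code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class EnsureDirPermissions {

    /**
     * Create the directory if needed, then verify its POSIX permissions and
     * re-apply them explicitly if the process umask (or a pre-existing
     * directory) left them different from what was requested.
     */
    static Set<PosixFilePermission> ensureDir(Path dir, String perms) throws IOException {
        Set<PosixFilePermission> desired = PosixFilePermissions.fromString(perms);
        Files.createDirectories(dir);                    // result may be masked by umask
        Set<PosixFilePermission> actual = Files.getPosixFilePermissions(dir);
        if (!actual.equals(desired)) {
            Files.setPosixFilePermissions(dir, desired); // explicit chmod; umask does not apply
        }
        return Files.getPosixFilePermissions(dir);
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("perm-demo");
        Set<PosixFilePermission> result = ensureDir(base.resolve("cache"), "rwx------");
        System.out.println(PosixFilePermissions.toString(result)); // prints rwx------
    }
}
```

   The point is the conditional: the explicit permission call happens only 
when the observed mode differs from the requested one.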




Issue Time Tracking
-------------------

    Worklog Id:     (was: 837251)
    Time Spent: 0.5h  (was: 20m)

> Make sure dirPath has the correct permissions
> ---------------------------------------------
>
>                 Key: HIVE-26887
>                 URL: https://issues.apache.org/jira/browse/HIVE-26887
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Zhang Dongsheng
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the constructor of class QueryResultsCache, there is the following code 
> segment:
> {code:java}
>   private QueryResultsCache(HiveConf configuration) throws IOException {
>     ......
>     FileSystem fs = cacheDirPath.getFileSystem(conf);
>     FsPermission fsPermission = new FsPermission("700");
>     fs.mkdirs(cacheDirPath, fsPermission);
>     ......
>   }
> {code}
> As can be seen, the constructor uses mkdirs to create cacheDirPath, passing 
> in the path variable cacheDirPath and a permission of 700. However, we never 
> confirm that this permission was actually applied to the directory.
> The question arises because Hadoop provides two mkdirs functions,
> {code:java}
> mkdirs(Path f, FsPermission permission)
> {code}
>  and 
> {code:java}
> mkdirs(FileSystem fs, Path dir, FsPermission permission)
> {code}
> and the first one is used here. Permissions set by this function are subject 
> to the underlying umask. Although 700 is hardly ever affected by the umask, 
> I think that, from a rigorous point of view, we should add an explicit 
> permission check and permission grant here.
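> The second variant is a static helper on FileSystem that creates the 
> directory and then applies the permission explicitly via setPermission, so 
> the result is not masked by the umask. A minimal sketch of its use (assuming 
> Hadoop's org.apache.hadoop.fs API, with fs and dirPath as in the snippets 
> below):
> {code:java}
> // The static helper re-applies the permission after creating the directory,
> // so the effective mode is 700 regardless of the process umask.
> FileSystem.mkdirs(fs, dirPath, new FsPermission((short) 0700));
> {code}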
> I found the same issue in three other methods.
> In class Context
> {code:java}
> private Path getScratchDir(String scheme, String authority,
>       boolean mkdir, String scratchDir) {
>           ......
>           FileSystem fs = dirPath.getFileSystem(conf);
>           dirPath = new Path(fs.makeQualified(dirPath).toString());
>           FsPermission fsPermission = new FsPermission(scratchDirPermission);
>           if (!fs.mkdirs(dirPath, fsPermission)) {
>             throw new RuntimeException("Cannot make directory: "
>                 + dirPath.toString());
>           ......
>   }
> {code}
> In class SessionState
> {code:java}
>   static void createPath(HiveConf conf, Path path, String permission,
>       boolean isLocal, boolean isCleanUp) throws IOException {
>     FsPermission fsPermission = new FsPermission(permission);
>     FileSystem fs;
>     ......
>     if (!fs.mkdirs(path, fsPermission)) {
>       throw new IOException("Failed to create directory " + path + " on fs "
>           + fs.getUri());
>     }
>     ......
>   }
> {code}
> and in class TezSessionState
> {code:java}
>   private Path createTezDir(String sessionId, String suffix) throws IOException {
>     ......
>     Path tezDir = new Path(hdfsScratchDir, TEZ_DIR);
>     FileSystem fs = tezDir.getFileSystem(conf);
>     FsPermission fsPermission = new FsPermission(
>         HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIRPERMISSION));
>     fs.mkdirs(tezDir, fsPermission);
>     ......
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
