[ 
https://issues.apache.org/jira/browse/HIVE-26887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhang Dongsheng updated HIVE-26887:
-----------------------------------
    Description: 
In the constructor of the QueryResultsCache class, there is the following code segment:
{code:java}
  private QueryResultsCache(HiveConf configuration) throws IOException {
    ......
    FileSystem fs = cacheDirPath.getFileSystem(conf);
    FsPermission fsPermission = new FsPermission("700");
    fs.mkdirs(cacheDirPath, fsPermission);
    ......
}
{code}
The constructor calls mkdirs to create cacheDirPath, passing the path variable cacheDirPath and a permission of 700. However, the code never confirms that this permission was actually applied to the created directory.

This question arises because Hadoop's FileSystem offers two mkdirs variants:
{code:java}
mkdirs(Path f, FsPermission permission)
{code}
 and 
{code:java}
mkdirs(FileSystem fs, Path dir, FsPermission permission)
{code}
The first variant is used here. With it, the permission actually applied is masked by the filesystem's umask, whereas the static helper creates the directory and then explicitly sets the requested permission. Although 700 is unlikely to be narrowed by a typical umask, to be rigorous we should add an explicit permission check here and, if needed, an explicit permission grant.
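To make the umask interaction concrete, here is a minimal model in plain Java (no Hadoop classes; UmaskDemo and apply are illustrative names) of how a POSIX umask masks a requested directory mode, applied = requested & ~umask:

```java
// Minimal model of POSIX umask masking: applied = requested & ~umask.
public class UmaskDemo {
    static int apply(int requested, int umask) {
        return requested & ~umask;
    }

    public static void main(String[] args) {
        // 0700 survives the common umask 022 unchanged (no overlapping bits)...
        System.out.println(Integer.toOctalString(apply(0700, 0022))); // 700
        // ...but a broader request like 0777 is narrowed to 0755.
        System.out.println(Integer.toOctalString(apply(0777, 0022))); // 755
        // Even a restrictive umask such as 077 leaves 0700 intact, which is
        // why 700 "will hardly be affected" by the umask in practice.
        System.out.println(Integer.toOctalString(apply(0700, 0077))); // 700
    }
}
```

This shows why the current code usually gets the intended 700, but only by coincidence of the common umask values rather than by guarantee.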

The same issue appears in three other methods.
In class Context:
{code:java}
private Path getScratchDir(String scheme, String authority,
      boolean mkdir, String scratchDir) {
          ......
          FileSystem fs = dirPath.getFileSystem(conf);
          dirPath = new Path(fs.makeQualified(dirPath).toString());
          FsPermission fsPermission = new FsPermission(scratchDirPermission);

          if (!fs.mkdirs(dirPath, fsPermission)) {
            throw new RuntimeException("Cannot make directory: "
                + dirPath.toString());
          ......
  }
{code}

In class SessionState
{code:java}
  static void createPath(HiveConf conf, Path path, String permission, boolean 
isLocal,
      boolean isCleanUp) throws IOException {
    FsPermission fsPermission = new FsPermission(permission);
    FileSystem fs;
    ......
    if (!fs.mkdirs(path, fsPermission)) {
      throw new IOException("Failed to create directory " + path + " on fs " + 
fs.getUri());
    }
    ......
  }
{code}
and in class TezSessionState
{code:java}
private Path createTezDir(String sessionId, String suffix) throws IOException {
    ......
    Path tezDir = new Path(hdfsScratchDir, TEZ_DIR);
    FileSystem fs = tezDir.getFileSystem(conf);
    FsPermission fsPermission = new FsPermission(HiveConf.getVar(conf, 
HiveConf.ConfVars.SCRATCHDIRPERMISSION));
    fs.mkdirs(tezDir, fsPermission);
    ......
  }
{code}
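The proposed "check and grant" fix could look like the following sketch. It uses java.nio.file as a runnable stand-in for Hadoop's FileSystem API (StrictMkdir and mkdirStrict are hypothetical names); in Hive the equivalent would be comparing fs.getFileStatus(path).getPermission() against the requested FsPermission and calling fs.setPermission(path, permission) on a mismatch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StrictMkdir {
    // Create a directory, then verify the permission that actually landed on
    // disk and re-grant the exact requested mode if the umask narrowed it.
    static Path mkdirStrict(Path dir, String mode) throws IOException {
        Set<PosixFilePermission> wanted = PosixFilePermissions.fromString(mode);
        Files.createDirectories(dir);
        if (!Files.getPosixFilePermissions(dir).equals(wanted)) {
            Files.setPosixFilePermissions(dir, wanted);
        }
        return dir;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("perm-demo").resolve("cache");
        mkdirStrict(dir, "rwx------"); // the 700 from QueryResultsCache
        System.out.println(PosixFilePermissions.toString(
            Files.getPosixFilePermissions(dir))); // rwx------
    }
}
```

The check-before-grant step keeps the common case (permission already correct) to a single metadata read, while guaranteeing the exact mode regardless of the process umask.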

> Make sure dirPath has the correct permissions
> ---------------------------------------------
>
>                 Key: HIVE-26887
>                 URL: https://issues.apache.org/jira/browse/HIVE-26887
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Zhang Dongsheng
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
