[
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942091#comment-15942091
]
ASF GitHub Bot commented on DRILL-5365:
---------------------------------------
Github user paul-rogers commented on a diff in the pull request:
https://github.com/apache/drill/pull/796#discussion_r108049239
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java ---
@@ -89,22 +89,36 @@ public DrillFileSystem(Configuration fsConf) throws IOException {
  }

  public DrillFileSystem(Configuration fsConf, OperatorStats operatorStats) throws IOException {
-    this.underlyingFs = FileSystem.get(fsConf);
+    this(fsConf, URI.create(fsConf.getRaw(FS_DEFAULT_NAME_KEY)), operatorStats);
--- End diff ---
Actually, why do we need this step? The original code already seems to do the right thing:
```java
class FileSystem ...

  public static FileSystem get(Configuration conf) throws IOException {
    return get(getDefaultUri(conf), conf);
  }

  public static URI getDefaultUri(Configuration conf) {
    return URI.create(fixName(conf.get(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
  }
```
That is, the original code gets the file system using the URI stored in the
config. Standard practice is that the caller must have set the file system
property: that is how we tell a "file:///" system from an HDFS system, etc.
So, isn't the problem here with the caller?
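To illustrate the resolution logic under discussion, here is a minimal self-contained sketch. It stands in a plain `Map` for Hadoop's `Configuration` and mirrors the `conf.get(key, default)` fallback in `FileSystem.getDefaultUri`; the class name and constant values are assumptions for illustration, not Drill or Hadoop code:

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class DefaultUriSketch {
    // Mirror Hadoop's FS_DEFAULT_NAME_KEY / DEFAULT_FS; values assumed for illustration.
    static final String FS_DEFAULT_NAME_KEY = "fs.defaultFS";
    static final String DEFAULT_FS = "file:///";

    // Stand-in for getDefaultUri(conf): reads the configured default file
    // system URI, falling back to the local file system when unset.
    static URI getDefaultUri(Map<String, String> conf) {
        return URI.create(conf.getOrDefault(FS_DEFAULT_NAME_KEY, DEFAULT_FS));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Caller never set the property: scheme resolves to the local FS.
        System.out.println(getDefaultUri(conf).getScheme());   // prints "file"

        // Caller set the property, as a storage-plugin config would.
        conf.put(FS_DEFAULT_NAME_KEY, "hdfs://namenode:8020");
        System.out.println(getDefaultUri(conf).getScheme());   // prints "hdfs"
    }
}
```

Note that, unlike `conf.get(FS_DEFAULT_NAME_KEY, DEFAULT_FS)` in the original code, the patched `fsConf.getRaw(FS_DEFAULT_NAME_KEY)` shown in the diff has no fallback default when the key is unset.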
> FileNotFoundException when reading a parquet file
> -------------------------------------------------
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Hive
> Affects Versions: 1.10.0
> Reporter: Chun Chang
> Assignee: Chunhui Shi
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue:
> 1) set up a cluster of two or more nodes;
> 2) enable impersonation;
> 3) set "fs.default.name": "file:///" in the Hive storage plugin;
> 4) restart the drillbits;
> 5) as a regular user, on node A, drop the table/file;
> 6) CTAS from a large enough Hive table as source to recreate the table/file;
> 7) querying the table from node A should work;
> 8) the same query from node B, as the same user, should reproduce the issue.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)