[
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551171#comment-16551171
]
ASF GitHub Bot commented on DRILL-5365:
---------------------------------------
ilooner commented on a change in pull request #1296: DRILL-5365: Prevent plugin
config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r204148883
##########
File path:
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystemCache.java
##########
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.dfs;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * <h4>Motivation</h4>
+ * <p>
+ * This cache is intended to work around the bugs in the {@link org.apache.hadoop.fs.FileSystem}
+ * static cache (DRILL-5365). Specifically, as of Hadoop 2.7.x the
+ * {@link org.apache.hadoop.fs.FileSystem} cache has the following bad behavior:
+ * </p>
+ * <ul>
+ * <li>
+ *   The {@link org.apache.hadoop.conf.Configuration} object is not considered when
+ *   constructing keys for the {@link org.apache.hadoop.fs.FileSystem} cache of
+ *   {@link org.apache.hadoop.fs.FileSystem} objects.
+ * </li>
+ * <li>
+ *   The {@link org.apache.hadoop.fs.FileSystem} cache does not honor the
+ *   <b>fs.default.name</b> property when constructing keys; only <b>fs.defaultFS</b>
+ *   is used to construct keys in the cache.
+ * </li>
+ * </ul>
+ *
+ * <h4>Usage</h4>
+ *
+ * <ul>
+ * <li>
+ *   A prerequisite for usage is that all {@link org.apache.hadoop.conf.Configuration}
+ *   objects are normalized with
+ *   {@link org.apache.drill.exec.store.dfs.DrillFileSystem#normalize(Configuration)}.
+ * </li>
+ * <li>
+ *   This cache should only be used from
+ *   {@link org.apache.drill.exec.store.dfs.DrillFileSystem}.
+ * </li>
+ * </ul>
+ *
+ * <h4>TODO</h4>
+ *
+ * <ul>
+ * <li>
+ *   Drill currently keeps a {@link org.apache.drill.exec.store.dfs.DrillFileSystem}
+ *   open indefinitely. This will be corrected in DRILL-6608. As a result, this cache
+ *   currently has no methods to remove {@link org.apache.hadoop.fs.FileSystem} objects
+ *   after they are created.
+ * </li>
+ * </ul>
+ */
+class DrillFileSystemCache {
+ private Map<Map<String, String>, FileSystem> cache = new HashMap<>();
+
+  /**
+   * If a {@link org.apache.hadoop.fs.FileSystem} object corresponding to the given
+   * {@link org.apache.hadoop.conf.Configuration} exists in the cache, it is returned.
+   * If no corresponding {@link org.apache.hadoop.fs.FileSystem} exists, one is created,
+   * added to the cache, and returned.
+   * @param configuration The {@link org.apache.hadoop.conf.Configuration} corresponding
+   *                      to the desired {@link org.apache.hadoop.fs.FileSystem} object.
+   *                      It is expected that this configuration is first normalized with
+   *                      {@link org.apache.drill.exec.store.dfs.DrillFileSystem#normalize(Configuration)}.
+   * @return The {@link org.apache.hadoop.fs.FileSystem} object corresponding to the given
+   *         {@link org.apache.hadoop.conf.Configuration}.
+   * @throws IOException If an error occurs while creating the desired
+   *                     {@link org.apache.hadoop.fs.FileSystem} object.
+   */
+  protected synchronized FileSystem get(final Configuration configuration) throws IOException {
+ final Map<String, String> map = new HashMap<>(configToMap(configuration));
+
+ if (!cache.containsKey(map)) {
+ cache.put(map, FileSystem.newInstance(configuration));
+ }
+
+ return cache.get(map);
+ }
+
+ static Map<String, String> configToMap(final Configuration configuration) {
+ Preconditions.checkNotNull(configuration);
+ final Map<String, String> map = new HashMap<>();
+
+ for (Map.Entry<String, String> entry: configuration) {
+ map.put(entry.getKey().trim(), entry.getValue());
Review comment:
 - The code for CACHE.get calls the Key(uri, conf) constructor
   (https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2667).
   That constructor does not initialize the ugi object, so it is not used as part of the key
   (https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2801).
   The UGI object is only used when CACHE.getUnique is called; in that method the
   Key(uri, conf, unique) constructor is called
   (https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2673),
   and only in that case does the Key constructor call UGI.getCurrentUser(). So if not
   using the UGI object is a security issue, it is a pre-existing issue. Please
   double-check the code I referenced to make sure my analysis is correct.
 - You are right, I had the deprecated and non-deprecated property names mixed up.
   Thank you very much for catching this!
 - While there is a static conf field in the FileSystem class, no one seems to use it.
   For example, DistributedFileSystem calls the setConf method
   (https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L137),
   which sets a conf variable only for that instance of the class
   (https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configured.java);
   DistributedFileSystem extends FileSystem, which extends Configured. In fact, as far
   as I can tell from my IDE, the static conf variable is never used. Whenever the
   FileSystem class uses the configuration it calls getConf(), a method of Configured
   that returns the configuration stored on a per-instance basis. So I think the
   configuration should still be used in the key for caching.
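To make the keying concrete, the approach taken by DrillFileSystemCache (configToMap plus a HashMap keyed on the full property map) can be sketched without any Hadoop dependency. This is an illustrative stand-in, not Drill code: FsHandle is a hypothetical placeholder for a FileSystem instance, and the property values are made up.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for an org.apache.hadoop.fs.FileSystem instance.
class FsHandle {
    final String defaultFs;
    FsHandle(String defaultFs) { this.defaultFs = defaultFs; }
}

public class ConfigKeyedCache {
    // The full property map is the key, so *every* property participates in
    // cache lookup, unlike the Hadoop 2.7.x FileSystem cache described above.
    private final Map<Map<String, String>, FsHandle> cache = new HashMap<>();

    synchronized FsHandle get(Map<String, String> conf) {
        // Defensive copy: later mutation of the caller's map must not corrupt the key.
        Map<String, String> key = new HashMap<>(conf);
        return cache.computeIfAbsent(key, k -> new FsHandle(k.get("fs.defaultFS")));
    }

    public static void main(String[] args) {
        ConfigKeyedCache cache = new ConfigKeyedCache();
        Map<String, String> a = new HashMap<>();
        a.put("fs.defaultFS", "hdfs://nn:8020");
        Map<String, String> b = new HashMap<>(a);  // identical properties
        Map<String, String> c = new HashMap<>();
        c.put("fs.defaultFS", "file:///");         // different properties

        System.out.println(cache.get(a) == cache.get(b)); // same instance: true
        System.out.println(cache.get(a) == cache.get(c)); // distinct instance: false
    }
}
```

Copying the configuration into a fresh HashMap gives the key value-based equals()/hashCode() and isolates it from caller mutation, which is exactly what the real cache relies on when it does `new HashMap<>(configToMap(configuration))`.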
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> FileNotFoundException when reading a parquet file
> -------------------------------------------------
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Hive
> Affects Versions: 1.10.0
> Reporter: Chun Chang
> Assignee: Timothy Farkas
> Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue:
> 1) two or more nodes in the cluster;
> 2) enable impersonation;
> 3) set "fs.default.name": "file:///" in the hive storage plugin;
> 4) restart drillbits;
> 5) as a regular user, on node A, drop the table/file;
> 6) CTAS from a large enough hive table as source to recreate the table/file;
> 7) querying the table from node A should work;
> 8) querying from node B as the same user should reproduce the issue.
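For reference, the override in step 3 would look roughly like the fragment below in the hive storage plugin configuration. This is a sketch only: the metastore URI is a placeholder, and the exact shape of the surrounding plugin config may differ by Drill version; the relevant part is the `"fs.default.name": "file:///"` entry among the plugin's Hadoop properties.

```json
{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://<metastore-host>:9083",
    "fs.default.name": "file:///"
  }
}
```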
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)