[
https://issues.apache.org/jira/browse/DRILL-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17525454#comment-17525454
]
ASF GitHub Bot commented on DRILL-8155:
---------------------------------------
vvysotskyi commented on code in PR #2516:
URL: https://github.com/apache/drill/pull/2516#discussion_r854816913
##########
contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcConventionFactory.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.jdbc;
+
+import org.apache.calcite.sql.SqlDialect;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class JdbcConventionFactory {
+
+ private final Map<SqlDialect, DrillJdbcConvention> CACHE = new ConcurrentHashMap<>();
Review Comment:
Shouldn't this cache invalidate its entries after some period of time to
avoid memory leaks?
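For illustration only, a minimal sketch of what time-based eviction could look like, using the shaded Guava cache already on Drill's classpath (the class shape, the loader-based accessor and the one-hour expiry are assumptions, not what this PR implements):

package org.apache.drill.exec.store.jdbc;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import org.apache.calcite.sql.SqlDialect;
import org.apache.drill.shaded.guava.com.google.common.cache.Cache;
import org.apache.drill.shaded.guava.com.google.common.cache.CacheBuilder;

public class JdbcConventionFactory {

  // Entries unused for an hour are evicted so conventions do not accumulate
  // for the lifetime of the plugin (the expiry period is illustrative).
  private final Cache<SqlDialect, DrillJdbcConvention> cache = CacheBuilder.newBuilder()
      .expireAfterAccess(1, TimeUnit.HOURS)
      .build();

  public DrillJdbcConvention get(SqlDialect dialect, Callable<DrillJdbcConvention> loader)
      throws ExecutionException {
    return cache.get(dialect, loader);
  }
}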
##########
contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcWriterWithH2.java:
##########
@@ -437,6 +445,7 @@ public void testWithReallyLongFile() throws Exception {
}
@Test
+ @Ignore
Review Comment:
Is there any reason why it is ignored?
##########
contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcStoragePlugin.java:
##########
@@ -31,51 +34,89 @@
import org.apache.drill.common.AutoCloseables;
import org.apache.drill.common.exceptions.UserException;
import org.apache.drill.exec.ops.OptimizerRulesContext;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
import org.apache.drill.exec.server.DrillbitContext;
import org.apache.drill.exec.store.AbstractStoragePlugin;
import org.apache.drill.exec.store.SchemaConfig;
import org.apache.drill.exec.store.security.UsernamePasswordCredentials;
import org.apache.drill.shaded.guava.com.google.common.annotations.VisibleForTesting;
+import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableSet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.sql.DataSource;
public class JdbcStoragePlugin extends AbstractStoragePlugin {
- private static final Logger logger = LoggerFactory.getLogger(JdbcStoragePlugin.class);
+ static final Logger logger = LoggerFactory.getLogger(JdbcStoragePlugin.class);
- private final JdbcStorageConfig config;
- private final HikariDataSource dataSource;
- private final SqlDialect dialect;
- private final DrillJdbcConvention convention;
- private final JdbcDialect jdbcDialect;
+ private final JdbcStorageConfig jdbcStorageConfig;
+ private final JdbcDialectFactory dialectFactory;
+ private final JdbcConventionFactory conventionFactory;
+ // DataSources for this storage config keyed on JDBC username
+ private final Map<String, HikariDataSource> dataSources = new ConcurrentHashMap<>();
Review Comment:
The same regarding caching here.
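Purely as an illustration (not part of this PR), a cache with a removal listener could bound the number of per-user pools and close pools that have gone idle; the wrapper class below and the one-hour expiry are hypothetical:

package org.apache.drill.exec.store.jdbc;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.zaxxer.hikari.HikariDataSource;
import org.apache.drill.shaded.guava.com.google.common.cache.Cache;
import org.apache.drill.shaded.guava.com.google.common.cache.CacheBuilder;
import org.apache.drill.shaded.guava.com.google.common.cache.RemovalListener;

public class PerUserDataSourceCache {

  // DataSources keyed on JDBC username; pools idle for an hour are evicted
  // and closed so per-user connection pools do not accumulate indefinitely.
  private final Cache<String, HikariDataSource> dataSources = CacheBuilder.newBuilder()
      .expireAfterAccess(1, TimeUnit.HOURS)
      .removalListener((RemovalListener<String, HikariDataSource>) n -> n.getValue().close())
      .build();

  public HikariDataSource get(String jdbcUser, Callable<HikariDataSource> loader)
      throws ExecutionException {
    return dataSources.get(jdbcUser, loader);
  }
}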
##########
contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcScanBatchCreator.java:
##########
@@ -55,14 +57,22 @@ public CloseableRecordBatch getBatch(ExecutorFragmentContext context,
}
}
- private ScanFrameworkBuilder createBuilder(OptionManager options, JdbcSubScan subScan) {
- JdbcStorageConfig config = subScan.getConfig();
+ private ScanFrameworkBuilder createBuilder(ExecutorFragmentContext context, JdbcSubScan subScan) {
ScanFrameworkBuilder builder = new ScanFrameworkBuilder();
builder.projection(subScan.getColumns());
builder.setUserName(subScan.getUserName());
JdbcStoragePlugin plugin = subScan.getPlugin();
- List<ManagedReader<SchemaNegotiator>> readers =
- Collections.singletonList(new JdbcBatchReader(plugin.getDataSource(), subScan.getSql(), subScan.getColumns()));
+ UserCredentials userCreds = context.getContextInformation().getQueryUserCredentials();
+ DataSource ds = plugin.getDataSource(userCreds)
+ .orElseThrow(() -> UserException.permissionError().message(
+ "Query user %s could obtain a connection to %s, missing credentials?",
Review Comment:
could -> could not
> Introduce new plugin authentication modes
> -----------------------------------------
>
> Key: DRILL-8155
> URL: https://issues.apache.org/jira/browse/DRILL-8155
> Project: Apache Drill
> Issue Type: Improvement
> Components: Security
> Affects Versions: 1.20.0
> Reporter: Charles Givre
> Assignee: Charles Givre
> Priority: Major
> Fix For: Future
>
>
> At present, Drill storage plugins can use a shared set of credentials to
> access storage on behalf of Drill users or, in a subset of cases belonging to
> the broader Hadoop family, they can impersonate the Drill user when
> drill.exec.impersonation.enabled = true. An important but missing auth mode
> is what is termed "user translation" in
> [Trino|https://docs.starburst.io/latest/security/impersonation.html]. Under
> user translation, the active Drill user is translated to a user known to the
> external storage by means of a translation table that associates Drill users
> with their credentials for the external storage. No support for user
> impersonation in the external storage is required in this mode. This ticket
> proposes that we establish a design pattern that adds support for this
> auth mode to Drill storage plugins.
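> As a rough sketch of the idea (the class and method names here are
> hypothetical, not a committed API), the translation table is conceptually a
> per-user credential lookup:
>
> import java.util.Map;
> import java.util.Optional;
> import java.util.concurrent.ConcurrentHashMap;
> import org.apache.drill.exec.store.security.UsernamePasswordCredentials;
>
> public class UserTranslationTable {
>
>   private final Map<String, UsernamePasswordCredentials> table = new ConcurrentHashMap<>();
>
>   // The active Drill user is mapped to credentials known to the external
>   // storage; no impersonation support in that storage is required.
>   public Optional<UsernamePasswordCredentials> credentialsFor(String drillUser) {
>     return Optional.ofNullable(table.get(drillUser));
>   }
> }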
> Another present-day limitation is that impersonation, for the plugins that
> support it, is toggled by a global switch. We propose here that the auth
> mode chosen for a plugin should be independent of the auth modes chosen for
> other plugins, by moving this option into their respective storage configs.
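> A hypothetical illustration of that direction (the enum name and constants
> are assumptions, not a final design) is an auth mode carried in each storage
> config rather than a global flag:
>
> // Hypothetical per-plugin setting stored in each storage config, replacing
> // the global drill.exec.impersonation.enabled switch.
> public enum AuthMode {
>   SHARED_USER,        // one shared set of credentials for all query users
>   USER_IMPERSONATION, // the external storage impersonates the Drill user
>   USER_TRANSLATION    // the Drill user is translated to per-user credentials
> }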
> Finally, while a standardised means of choosing an authentication mode is
> desired, note that not every storage plugin needs to, or can, support every
> mode.
--
This message was sent by Atlassian Jira
(v8.20.7#820007)