[jira] [Commented] (DRILL-8057) INFORMATION_SCHEMA filter push down is inefficient
[ https://issues.apache.org/jira/browse/DRILL-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450906#comment-17450906 ]

ASF GitHub Bot commented on DRILL-8057:
---------------------------------------

dzamo commented on a change in pull request #2388:
URL: https://github.com/apache/drill/pull/2388#discussion_r758981355

## File path: exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestInfoSchema.java

## @@ -64,13 +64,13 @@ public static void setupFiles() {
   @Test
   public void selectFromAllTables() throws Exception {
-    test("select * from INFORMATION_SCHEMA.SCHEMATA");
-    test("select * from INFORMATION_SCHEMA.CATALOGS");
-    test("select * from INFORMATION_SCHEMA.VIEWS");
-    test("select * from INFORMATION_SCHEMA.`TABLES`");
-    test("select * from INFORMATION_SCHEMA.COLUMNS");
-    test("select * from INFORMATION_SCHEMA.`FILES`");
-    test("select * from INFORMATION_SCHEMA.`PARTITIONS`");
+    //test("select * from INFORMATION_SCHEMA.SCHEMATA");
+    //test("select * from INFORMATION_SCHEMA.CATALOGS");
+    //test("select * from INFORMATION_SCHEMA.VIEWS");
+    test("select * from INFORMATION_SCHEMA.`TABLES` where table_schema = 'cp.default'");
+    //test("select * from INFORMATION_SCHEMA.COLUMNS");
+    //test("select * from INFORMATION_SCHEMA.`FILES`");
+    //test("select * from INFORMATION_SCHEMA.`PARTITIONS`");

Review comment:
    @vvysotskyi thanks, this was definitely not supposed to be included in what I pushed. Reverted this test; another, new one is already present.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@drill.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> INFORMATION_SCHEMA filter push down is inefficient
> --------------------------------------------------
>
>                 Key: DRILL-8057
>                 URL: https://issues.apache.org/jira/browse/DRILL-8057
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Storage - Information Schema
>    Affects Versions: 1.19.0
>            Reporter: James Turton
>            Assignee: James Turton
>            Priority: Major
>             Fix For: 1.20.0
>
> WHERE clauses in queries against INFORMATION_SCHEMA do not stop Drill from
> fetching a schema hierarchy from all enabled storage configs. This results
> in abysmal performance when unresponsive data sources are enabled, as
> reported by users in the Apache Drill Slack channels.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Assigned] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi reassigned DRILL-8058:
-------------------------------------

    Assignee: Vova Vysotskyi

> NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
> --------------------------------------------------------------------------------------------
>
>                 Key: DRILL-8058
>                 URL: https://issues.apache.org/jira/browse/DRILL-8058
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Iceberg
>    Affects Versions: 1.19.0
>            Reporter: Vitalii Diravka
>            Assignee: Vova Vysotskyi
>            Priority: Major
>              Labels: iceberg, storage
>             Fix For: Future
>
> Checked in Drill embedded: the query from the
> _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test case:
> {code:java}
> SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM
> dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles`
> customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord)
> WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name;
> {code}
> But it gives the following error:
> {code:java}
> Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
>   at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691)
>   at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101)
>   at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64)
>   at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263)
>   at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247)
>   at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566)
>   at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840)
>   at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848)
>   at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864)
>   at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92)
>   at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329)
> {code}
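The trace shows the NPE escaping from IcebergPluginImplementor.canImplement when DrillRelOptUtil.getDrillTable is handed a plan subtree that contains no TableScan (here, the UNNEST branch of the LATERAL join). A minimal sketch of the defensive shape of a fix, using hypothetical simplified stand-ins rather than Drill's actual classes:

```java
// Hypothetical, simplified stand-ins for the Calcite/Drill types involved.
class RelNode {}

class TableScan extends RelNode {
  Object getTable() { return new Object(); }
}

public class NullScanGuard {
  // Mirrors DrillRelOptUtil.getDrillTable: may return null when the
  // subtree contains no scan at all (e.g. an UNNEST relational node).
  static TableScan findScan(RelNode node) {
    return node instanceof TableScan ? (TableScan) node : null;
  }

  // The rule declines to match instead of dereferencing a null scan.
  public static boolean canImplement(RelNode node) {
    TableScan scan = findScan(node);
    return scan != null && scan.getTable() != null;
  }
}
```

With a guard like this, a planner rule simply fails to match for scan-less subtrees rather than aborting the whole planning pass with an NPE.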
[jira] [Updated] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitalii Diravka updated DRILL-8058:
-----------------------------------

    Description:

Checked in Drill embedded: the query from the _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test case:

{code:java}
SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM
dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles`
customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord)
WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name;
{code}

But it gives the following error:

{code:java}
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
  at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691)
  at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101)
  at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64)
  at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263)
  at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92)
  at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329)
{code}

  was:
Checked in Drill embedded: the query from the test case:

SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM
dfs.`/\{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles`
customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord)
WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name;

But it gives the following error:

Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
  at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691)
  at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101)
  at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64)
  at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263)
  at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92)
  at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329)
[jira] [Commented] (DRILL-8057) INFORMATION_SCHEMA filter push down is inefficient
[ https://issues.apache.org/jira/browse/DRILL-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450715#comment-17450715 ]

ASF GitHub Bot commented on DRILL-8057:
---------------------------------------

vvysotskyi commented on a change in pull request #2388:
URL: https://github.com/apache/drill/pull/2388#discussion_r758726205

## File path: exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestInfoSchema.java

## @@ -64,13 +64,13 @@ public static void setupFiles() {
   @Test
   public void selectFromAllTables() throws Exception {
-    test("select * from INFORMATION_SCHEMA.SCHEMATA");
-    test("select * from INFORMATION_SCHEMA.CATALOGS");
-    test("select * from INFORMATION_SCHEMA.VIEWS");
-    test("select * from INFORMATION_SCHEMA.`TABLES`");
-    test("select * from INFORMATION_SCHEMA.COLUMNS");
-    test("select * from INFORMATION_SCHEMA.`FILES`");
-    test("select * from INFORMATION_SCHEMA.`PARTITIONS`");
+    //test("select * from INFORMATION_SCHEMA.SCHEMATA");
+    //test("select * from INFORMATION_SCHEMA.CATALOGS");
+    //test("select * from INFORMATION_SCHEMA.VIEWS");
+    test("select * from INFORMATION_SCHEMA.`TABLES` where table_schema = 'cp.default'");
+    //test("select * from INFORMATION_SCHEMA.COLUMNS");
+    //test("select * from INFORMATION_SCHEMA.`FILES`");
+    //test("select * from INFORMATION_SCHEMA.`PARTITIONS`");

Review comment:
    Please add a new test instead of changing the existing one.
[jira] [Created] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
Vitalii Diravka created DRILL-8058:
--------------------------------------

             Summary: NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
                 Key: DRILL-8058
                 URL: https://issues.apache.org/jira/browse/DRILL-8058
             Project: Apache Drill
          Issue Type: Bug
          Components: Storage - Iceberg
    Affects Versions: 1.19.0
            Reporter: Vitalii Diravka
             Fix For: Future


Checked in Drill embedded: the query from the test case:

SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM
dfs.`/\{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles`
customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord)
WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name;

But it gives the following error:

Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
  at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691)
  at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101)
  at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64)
  at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263)
  at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864)
  at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92)
  at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329)
[jira] [Updated] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitalii Diravka updated DRILL-8058:
-----------------------------------

    Labels: iceberg storage  (was: )
[jira] [Commented] (DRILL-8057) INFORMATION_SCHEMA filter push down is inefficient
[ https://issues.apache.org/jira/browse/DRILL-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450651#comment-17450651 ]

ASF GitHub Bot commented on DRILL-8057:
---------------------------------------

dzamo opened a new pull request #2388:
URL: https://github.com/apache/drill/pull/2388

# [DRILL-8057](https://issues.apache.org/jira/browse/DRILL-8057): INFORMATION_SCHEMA filter push down is inefficient

## Description

WHERE clauses in queries against INFORMATION_SCHEMA do not stop Drill from fetching a schema hierarchy from all enabled storage configs. This results in abysmal performance when unresponsive data sources are enabled, as reported by users in the Apache Drill Slack channels. This PR teaches the info schema to prune irrelevant schema subtrees from the search tree.

## Documentation

No user-visible change.

## Testing

Existing info schema unit tests, plus one addition for the IN operator.
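The pruning idea described in the PR can be sketched as follows. This is an illustrative sketch only, with hypothetical names (`SchemaPruner`, `prune`), not Drill's actual info schema code: predicates pushed down from the WHERE clause (equality or an IN list on `table_schema`) yield a set of wanted schema names, and only the storage-config roots that could contain one of them are traversed.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SchemaPruner {
  // Keep only the storage-plugin roots that can contain a schema named in
  // the pushed-down filter; with no filter, fall back to the full scan.
  public static List<String> prune(List<String> allRoots, Set<String> wantedSchemas) {
    if (wantedSchemas.isEmpty()) {
      return allRoots;
    }
    return allRoots.stream()
        .filter(root -> wantedSchemas.stream()
            .anyMatch(s -> s.equals(root) || s.startsWith(root + ".")))
        .collect(Collectors.toList());
  }
}
```

With a filter like `table_schema = 'cp.default'`, only the `cp` root is visited, so an unresponsive `mongo` or `dfs` source is never contacted at all.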
[jira] [Assigned] (DRILL-8057) INFORMATION_SCHEMA filter push down is inefficient
[ https://issues.apache.org/jira/browse/DRILL-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Turton reassigned DRILL-8057:
-----------------------------------

    Assignee: James Turton
[jira] [Commented] (DRILL-7863) Add Storage Plugin for Apache Phoenix
[ https://issues.apache.org/jira/browse/DRILL-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450562#comment-17450562 ]

ASF GitHub Bot commented on DRILL-7863:
---------------------------------------

luocooong commented on a change in pull request #2332:
URL: https://github.com/apache/drill/pull/2332#discussion_r758509422

## File path: contrib/storage-phoenix/src/main/resources/logback-test.xml.bak

## @@ -0,0 +1,49 @@

Review comment:
    The original plan was to delete it before merging the PR, but now I have deleted it.

> Add Storage Plugin for Apache Phoenix
> -------------------------------------
>
>                 Key: DRILL-7863
>                 URL: https://issues.apache.org/jira/browse/DRILL-7863
>             Project: Apache Drill
>          Issue Type: New Feature
>          Components: Storage - Other
>            Reporter: Cong Luo
>            Assignee: Cong Luo
>            Priority: Major
>
> There is a to-do list:
> # MVP on EVF.
> # Security Authentication.
> # Support both the thin (PQS) and fat (ZK) driver.
> # Compatibility with Phoenix 4.x and 5.x.
> # Shaded dependencies.
[jira] [Commented] (DRILL-7863) Add Storage Plugin for Apache Phoenix
[ https://issues.apache.org/jira/browse/DRILL-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450555#comment-17450555 ]

ASF GitHub Bot commented on DRILL-7863:
---------------------------------------

luocooong commented on a change in pull request #2332:
URL: https://github.com/apache/drill/pull/2332#discussion_r758505941

## File path: contrib/storage-phoenix/src/main/java/org/apache/drill/exec/store/phoenix/PhoenixStoragePluginConfig.java

## @@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.phoenix;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.Objects;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.drill.common.PlanStringBuilder;
+import org.apache.drill.common.logical.AbstractSecuredStoragePluginConfig;
+import org.apache.drill.common.logical.security.CredentialsProvider;
+import org.apache.drill.exec.store.security.CredentialProviderUtils;
+import org.apache.drill.exec.store.security.UsernamePasswordCredentials;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+
+@JsonTypeName(PhoenixStoragePluginConfig.NAME)
+public class PhoenixStoragePluginConfig extends AbstractSecuredStoragePluginConfig {
+
+  public static final String NAME = "phoenix";
+  public static final String THIN_DRIVER_CLASS = "org.apache.phoenix.queryserver.client.Driver";
+  public static final String FAT_DRIVER_CLASS = "org.apache.phoenix.jdbc.PhoenixDriver";
+
+  private final String host;
+  private final int port;
+  private final String jdbcURL; // (options) Equal to host + port
+  private final Map props; // (options) See also http://phoenix.apache.org/tuning.html
+
+  @JsonCreator
+  public PhoenixStoragePluginConfig(
+      @JsonProperty("host") String host,
+      @JsonProperty("port") int port,
+      @JsonProperty("username") String username,

Review comment:
    As a side note, what is the difference between `opUserName` and `queryUserName`?
    ```java
    opUserName = scan.getUserName();
    queryUserName = negotiator.context().getFragmentContext().getQueryUserName();
    ```
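For reference, a storage configuration for this plugin might look like the following. This is an illustrative sketch assembled from the `@JsonProperty` names in the diff above; the values are assumptions, and per the constructor a `port` of 0 falls back to 8765:

```json
{
  "type": "phoenix",
  "host": "localhost",
  "port": 8765,
  "username": "phoenix_user",
  "password": "phoenix_pass",
  "jdbcURL": null,
  "props": {}
}
```

Per the `equals` implementation in the diff, supplying `jdbcURL` makes it the identity of the config; otherwise `host` and `port` are used.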
[jira] [Created] (DRILL-8057) INFORMATION_SCHEMA filter push down is inefficient
James Turton created DRILL-8057:
-----------------------------------

             Summary: INFORMATION_SCHEMA filter push down is inefficient
                 Key: DRILL-8057
                 URL: https://issues.apache.org/jira/browse/DRILL-8057
             Project: Apache Drill
          Issue Type: Improvement
          Components: Storage - Information Schema
    Affects Versions: 1.19.0
            Reporter: James Turton
             Fix For: 1.20.0


WHERE clauses in queries against INFORMATION_SCHEMA do not stop Drill from fetching a schema hierarchy from all enabled storage configs. This results in abysmal performance when unresponsive data sources are enabled, as reported by users in the Apache Drill Slack channels.
[jira] [Commented] (DRILL-7863) Add Storage Plugin for Apache Phoenix
[ https://issues.apache.org/jira/browse/DRILL-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450496#comment-17450496 ]

ASF GitHub Bot commented on DRILL-7863:
---------------------------------------

cgivre commented on a change in pull request #2332:
URL: https://github.com/apache/drill/pull/2332#discussion_r758421339

## File path: contrib/storage-phoenix/src/main/java/org/apache/drill/exec/store/phoenix/PhoenixStoragePluginConfig.java

## @@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.phoenix;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.Objects;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.drill.common.PlanStringBuilder;
+import org.apache.drill.common.logical.AbstractSecuredStoragePluginConfig;
+import org.apache.drill.common.logical.security.CredentialsProvider;
+import org.apache.drill.exec.store.security.CredentialProviderUtils;
+import org.apache.drill.exec.store.security.UsernamePasswordCredentials;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+
+@JsonTypeName(PhoenixStoragePluginConfig.NAME)
+public class PhoenixStoragePluginConfig extends AbstractSecuredStoragePluginConfig {
+
+  public static final String NAME = "phoenix";
+  public static final String THIN_DRIVER_CLASS = "org.apache.phoenix.queryserver.client.Driver";
+  public static final String FAT_DRIVER_CLASS = "org.apache.phoenix.jdbc.PhoenixDriver";
+
+  private final String host;
+  private final int port;
+  private final String jdbcURL; // (options) Equal to host + port
+  private final Map props; // (options) See also http://phoenix.apache.org/tuning.html
+
+  @JsonCreator
+  public PhoenixStoragePluginConfig(
+      @JsonProperty("host") String host,
+      @JsonProperty("port") int port,
+      @JsonProperty("username") String username,
+      @JsonProperty("password") String password,
+      @JsonProperty("jdbcURL") String jdbcURL,
+      @JsonProperty("credentialsProvider") CredentialsProvider credentialsProvider,
+      @JsonProperty("props") Map props) {
+    super(CredentialProviderUtils.getCredentialsProvider(username, password, credentialsProvider), credentialsProvider == null);
+    this.host = host;
+    this.port = port == 0 ? 8765 : port;
+    this.jdbcURL = jdbcURL;
+    this.props = props == null ? Collections.emptyMap() : props;
+  }
+
+  @JsonIgnore
+  public UsernamePasswordCredentials getUsernamePasswordCredentials() {
+    return new UsernamePasswordCredentials(credentialsProvider);
+  }
+
+  @JsonProperty("host")
+  public String getHost() {
+    return host;
+  }
+
+  @JsonProperty("port")
+  public int getPort() {
+    return port;
+  }
+
+  @JsonProperty("username")
+  public String getUsername() {
+    if (directCredentials) {
+      return getUsernamePasswordCredentials().getUsername();
+    }
+    return null;
+  }
+
+  @JsonIgnore
+  @JsonProperty("password")
+  public String getPassword() {
+    if (directCredentials) {
+      return getUsernamePasswordCredentials().getPassword();
+    }
+    return null;
+  }
+
+  @JsonProperty("jdbcURL")
+  public String getJdbcURL() {
+    return jdbcURL;
+  }
+
+  @JsonProperty("props")
+  public Map getProps() {
+    return props;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (o == this) {
+      return true;
+    }
+    if (o == null || !(o instanceof PhoenixStoragePluginConfig)) {
+      return false;
+    }
+    PhoenixStoragePluginConfig config = (PhoenixStoragePluginConfig) o;
+    // URL first
+    if (StringUtils.isNotBlank(config.getJdbcURL())) {
+      return Objects.equals(this.jdbcURL, config.getJdbcURL());
+    }
+    // Then the host and port
+    return Objects.equals(this.host, config.getHost()) && Objects.equals(this.port, config.getPort());
+  }
+
+  @Override
+  public int hashCode() {
+    if (StringUtils.isNotBlank(jdbcURL)) {
+      return Objects.hash(jdbcURL);
+    }
+    return Objects.hash(host, port);
+  }
+
+  @Override
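The equals/hashCode pair near the end of the diff gives jdbcURL precedence over host/port. A standalone sketch of that precedence (`ConnKey` is a hypothetical simplified class, not the plugin config itself):

```java
import java.util.Objects;

public class ConnKey {
  final String jdbcUrl; // may be null or blank
  final String host;
  final int port;

  ConnKey(String jdbcUrl, String host, int port) {
    this.jdbcUrl = jdbcUrl;
    this.host = host;
    this.port = port;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof ConnKey)) return false;
    ConnKey other = (ConnKey) o;
    // URL takes precedence when present, mirroring the plugin config.
    if (other.jdbcUrl != null && !other.jdbcUrl.trim().isEmpty()) {
      return Objects.equals(this.jdbcUrl, other.jdbcUrl);
    }
    return Objects.equals(this.host, other.host) && this.port == other.port;
  }

  @Override
  public int hashCode() {
    return (jdbcUrl != null && !jdbcUrl.trim().isEmpty())
        ? Objects.hash(jdbcUrl) : Objects.hash(host, port);
  }
}
```

Note that, as in the diff, the branch taken depends on whether the *argument's* URL is set, so the relation is asymmetric when only one side carries a URL, which is the kind of subtlety a reviewer might flag.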
[jira] [Commented] (DRILL-7863) Add Storage Plugin for Apache Phoenix
[ https://issues.apache.org/jira/browse/DRILL-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450247#comment-17450247 ]

ASF GitHub Bot commented on DRILL-7863:
---------------------------------------

paul-rogers commented on a change in pull request #2332:
URL: https://github.com/apache/drill/pull/2332#discussion_r758097799

## File path: contrib/storage-phoenix/src/main/java/org/apache/drill/exec/store/phoenix/PhoenixReader.java

## @@ -0,0 +1,463 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.phoenix;
+
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Date;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.sql.Types;
+import java.util.Arrays;
+import java.util.Map;
+
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.vector.accessor.ColumnWriter;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.drill.shaded.guava.com.google.common.collect.Maps;
+
+public class PhoenixReader {
+
+  private final RowSetLoader writer;
+  private final ColumnDefn[] columns;
+  private final ResultSet results;
+  private long count;
+
+  public PhoenixReader(ResultSetLoader loader, ColumnDefn[] columns, ResultSet results) {
+    this.writer = loader.writer();
+    this.columns = columns;
+    this.results = results;
+  }
+
+  public RowSetLoader getStorage() {
+    return writer;
+  }
+
+  public long getCount() {
+    return count;
+  }
+
+  /**
+   * Fetch and process one row.
+   * @return true if one row is processed, false if there is no next row.
+   * @throws SQLException
+   */
+  public boolean processRow() throws SQLException {
+    if (results.next()) {
+      writer.start();
+      for (int index = 0; index < columns.length; index++) {
+        if (columns[index].getSqlType() == Types.ARRAY) {
+          Array result = results.getArray(index + 1);
+          if (result != null) {
+            columns[index].load(result.getArray());
+          }
+        } else {
+          Object result = results.getObject(index + 1);
+          if (result != null) {
+            columns[index].load(result);
+          }
+        }
+      }
+      count++;
+      writer.save();
+      return true;
+    }
+    return false;
+  }
+
+  protected static final Map COLUMN_TYPE_MAP = Maps.newHashMap();
+
+  static {
+    // text
+    COLUMN_TYPE_MAP.put(Types.VARCHAR, MinorType.VARCHAR);
+    COLUMN_TYPE_MAP.put(Types.CHAR, MinorType.VARCHAR);
+    // numbers
+    COLUMN_TYPE_MAP.put(Types.BIGINT, MinorType.BIGINT);
+    COLUMN_TYPE_MAP.put(Types.INTEGER, MinorType.INT);
+    COLUMN_TYPE_MAP.put(Types.SMALLINT, MinorType.INT);
+    COLUMN_TYPE_MAP.put(Types.TINYINT, MinorType.INT);
+    COLUMN_TYPE_MAP.put(Types.DOUBLE, MinorType.FLOAT8);
+    COLUMN_TYPE_MAP.put(Types.FLOAT, MinorType.FLOAT4);
+    COLUMN_TYPE_MAP.put(Types.DECIMAL, MinorType.VARDECIMAL);
+    // time
+    COLUMN_TYPE_MAP.put(Types.DATE, MinorType.DATE);
+    COLUMN_TYPE_MAP.put(Types.TIME, MinorType.TIME);
+    COLUMN_TYPE_MAP.put(Types.TIMESTAMP, MinorType.TIMESTAMP);
+    // binary
+    COLUMN_TYPE_MAP.put(Types.BINARY, MinorType.VARBINARY); // Raw fixed length byte array. Mapped to byte[].
+    COLUMN_TYPE_MAP.put(Types.VARBINARY, MinorType.VARBINARY); // Raw variable length byte array.
+    // boolean
+    COLUMN_TYPE_MAP.put(Types.BOOLEAN, MinorType.BIT);
+  }
+
+  protected abstract static class ColumnDefn {
+
+    final String name;
+    final int index;
+    final int sqlType;
+    ColumnWriter writer;
+
+    public String getName() {
+      return name;
+    }
+
+    public int getIndex() {
+      return index;
+    }
+
+    public int getSqlType() {
+      return sqlType;
+    }
+
+    public ColumnDefn(String name, int index, int sqlType) {
+      this.name = name;
+      this.index = index;
+      this.sqlType = sqlType;
+    }
+
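The COLUMN_TYPE_MAP above translates `java.sql.Types` codes into Drill minor types before column writers are created. A self-contained sketch of the same lookup idea, with plain strings standing in for Drill's `MinorType` enum:

```java
import java.sql.Types;
import java.util.HashMap;
import java.util.Map;

public class JdbcTypeMap {
  // Subset of the PhoenixReader mapping; strings stand in for MinorType.
  private static final Map<Integer, String> MAP = new HashMap<>();
  static {
    MAP.put(Types.VARCHAR, "VARCHAR");
    MAP.put(Types.CHAR, "VARCHAR");
    MAP.put(Types.BIGINT, "BIGINT");
    MAP.put(Types.INTEGER, "INT");
    MAP.put(Types.DOUBLE, "FLOAT8");
    MAP.put(Types.DATE, "DATE");
    MAP.put(Types.TIMESTAMP, "TIMESTAMP");
    MAP.put(Types.BOOLEAN, "BIT");
  }

  // An unmapped JDBC type is surfaced explicitly rather than guessed at.
  public static String minorTypeFor(int sqlType) {
    String t = MAP.get(sqlType);
    if (t == null) {
      throw new IllegalArgumentException("Unsupported JDBC type code: " + sqlType);
    }
    return t;
  }
}
```

Failing fast on an unmapped code keeps schema errors at reader-construction time instead of producing corrupt vectors mid-scan.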