[
https://issues.apache.org/jira/browse/HIVE-26192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17781133#comment-17781133
]
Naveen Gangam commented on HIVE-26192:
--------------------------------------
[~zhangbutao] Part of this fix seems to be causing issues with Postgres.
The HiveServer2-side support for the JDBC storage handler uses a select query to
fetch the table/column metadata, something like {{select * from <tbl_name> limit 1;}},
where tbl_name is schemaName.tableName and schemaName is the value of
hive.sql.schema on the table.
https://github.com/apache/hive/blob/master/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessor.java#L125-L129
https://github.com/apache/hive/blob/master/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessor.java#L557-L563
Natively in Postgres, the following happens from psql (the same is true for
Oracle, where databases and schemas are distinct; MySQL seems to treat them as
one and the same):
{noformat}
hive_hms_testing=> select * from public."TXNS" limit 1;
TXN_ID | TXN_STATE | TXN_STARTED | TXN_LAST_HEARTBEAT | TXN_USER | TXN_HOST |
TXN_AGENT_INFO | TXN_META_INFO | TXN_HEARTBEAT_COUNT | TXN_TYPE
--------+-----------+-------------+--------------------+----------+----------+----------------+---------------+---------------------+----------
(0 rows)
hive_hms_testing=> select * from "TXNS" limit 1;
TXN_ID | TXN_STATE | TXN_STARTED | TXN_LAST_HEARTBEAT | TXN_USER | TXN_HOST |
TXN_AGENT_INFO | TXN_META_INFO | TXN_HEARTBEAT_COUNT | TXN_TYPE
--------+-----------+-------------+--------------------+----------+----------+----------------+---------------+---------------------+----------
(0 rows)
hive_hms_testing=> select * from hive_hms_testing."TXNS" limit 1;
ERROR: relation "hive_hms_testing.TXNS" does not exist
LINE 1: select * from hive_hms_testing."TXNS" limit 1;
^
hive_hms_testing=>
{noformat}
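To make the failure mode concrete, here is a hypothetical sketch (the class and method names are mine, not the actual GenericJdbcDatabaseAccessor code) of how prefixing the configured hive.sql.schema onto the table name plays out when that value is a real schema versus the database name:

{code:java}
// Hypothetical sketch, not the actual handler code: shows why a database
// name used as a schema qualifier breaks relation lookup in Postgres.
public class QualifiedNameSketch {

    // Quote an identifier for Postgres so mixed-case names like TXNS resolve.
    static String quote(String ident) {
        return "\"" + ident.replace("\"", "\"\"") + "\"";
    }

    // schema may be null or empty; Postgres then resolves via search_path.
    static String qualify(String schema, String table) {
        String quotedTable = quote(table);
        return (schema == null || schema.isEmpty())
                ? quotedTable
                : schema + "." + quotedTable;
    }

    public static void main(String[] args) {
        // Works: "public" is a schema that exists in the database.
        System.out.println("select * from " + qualify("public", "TXNS") + " limit 1");
        // Fails in the psql session above: "hive_hms_testing" is the
        // database name, not a schema, so the relation is not found.
        System.out.println("select * from " + qualify("hive_hms_testing", "TXNS") + " limit 1");
    }
}
{code}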
So the schema name cannot be the name of the database. If a schema name is not
specified, then all tables in the database, across all schemas, are listed. But
if the user wants to limit this to a certain schema, they have to put the schema
name in "connector.remoteDbName", which then needs to be used as
"hive.sql.schema" for the table.
So I am thinking we should only set this parameter when remoteDbName is not
null, and flip the values of getCatalogName() and getDatabaseName() in the PG
connector:
{code:java}
table.getParameters().put(JDBC_SCHEMA, scoped_db);
{code}
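A minimal sketch of that null guard (the method, map, and parameter names here are assumptions for illustration, not the real connector API):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the proposed fix: only propagate the remote schema into
// the table parameters when the connector actually has one configured.
public class SchemaParamSketch {

    static final String JDBC_SCHEMA = "hive.sql.schema";

    static void setSchemaParam(Map<String, String> tableParams, String remoteDbName) {
        // Skip the put entirely when no remote schema was configured, so
        // Postgres falls back to search_path resolution instead of getting
        // the database name as a bogus schema qualifier.
        if (remoteDbName != null && !remoteDbName.isEmpty()) {
            tableParams.put(JDBC_SCHEMA, remoteDbName);
        }
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        setSchemaParam(params, null);
        System.out.println(params);   // {}
        setSchemaParam(params, "public");
        System.out.println(params);   // {hive.sql.schema=public}
    }
}
{code}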
> JDBC data connector queries occur exception at cbo stage
> ---------------------------------------------------------
>
> Key: HIVE-26192
> URL: https://issues.apache.org/jira/browse/HIVE-26192
> Project: Hive
> Issue Type: Bug
> Affects Versions: 4.0.0-alpha-2
> Reporter: zhangbutao
> Assignee: zhangbutao
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0-alpha-2
>
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> If you run a select query in a qtest with a JDBC data connector, you will see
> an exception at the CBO stage:
> {code:java}
> [ERROR] Failures:
> [ERROR] TestMiniLlapCliDriver.testCliDriver:62 Client execution failed with
> error code = 40000
> running
> select * from country
> fname=dataconnector_mysql.qSee ./ql/target/tmp/log/hive.log or
> ./itests/qtest/target/tmp/log/hive.log, or check ./ql/target/surefire-reports
> or ./itests/qtest/target/surefire-reports/ for specific test cases logs.
> org.apache.hadoop.hive.ql.parse.SemanticException: Table qtestDB.country was
> not found in the database
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genTableLogicalPlan(CalcitePlanner.java:3078)
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:5048)
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1665)
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1605)
> at
> org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131)
> at
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914)
> at
> org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180)
> at
> org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126)
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1357)
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:567)
> at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12587)
> at
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:460)
> at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:317)
> at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224)
> at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:106)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:500)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:452)
> at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:416)
> at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:410)
> at
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121)
> at
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:227)
> at
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
> at
> org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:200)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:126)
> at
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:421)
> at
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:352)
> at
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:727)
> at
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:697)
> at
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:114)
> at
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
> at
> org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver(TestMiniLlapCliDriver.java:62)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at
> org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:135)
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> at
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> at
> org.apache.hadoop.hive.cli.control.CliAdapter$1$1.evaluate(CliAdapter.java:95)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> at
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> at
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> at
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)