[
https://issues.apache.org/jira/browse/HIVE-27658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17791647#comment-17791647
]
Stamatis Zampetakis commented on HIVE-27658:
--------------------------------------------
For convenience, the query in [^hive27658.q] is presented below.
{code:sql}
SELECT TC.CONST_DATE, TB.PRODUCT_SK
FROM TABLE_A TA
INNER JOIN (SELECT TO_DATE(FROM_UNIXTIME(1701088643)) AS CONST_DATE) TC
ON TA.START_DATE = TC.CONST_DATE
INNER JOIN TABLE_B TB
ON TB.START_DATE = TC.CONST_DATE AND TA.PRODUCT_ID = TB.PRODUCT_ID;{code}
The operator tree just before the error described above occurs is depicted in
[^hive27658-query-plan.pdf]. The problem is triggered while trying to convert
the JOIN[14] operator into a dynamic partition hash join.
The dynamic partition hash join is the first operator that appears in a Reduce
Operator Tree, and the plan usually looks like the snippet below.
{noformat}
Reduce Operator Tree:
  Map Join Operator
    condition map:
         Inner Join 0 to 1
    keys:
      0 KEY.reducesinkkey0 (type: bigint), KEY.reducesinkkey1 (type: bigint)
      1 KEY.reducesinkkey0 (type: bigint), KEY.reducesinkkey1 (type: bigint)
    outputColumnNames: _col2, _col5, _col7, _col9, _col15
    input vertices:
      0 Map 1
    Statistics: Num rows: 1178531569624 Data size: 414666308907440 Basic stats: COMPLETE Column stats: COMPLETE
    DynamicPartitionHashJoin: true
{noformat}
Observe that the key column names use the "KEY.reducesinkkey" prefix; the
implementation relies on this naming convention to work properly.
The {{ExprNodeDescUtils#resolveJoinKeysAsRSColumns}} method tries to map the
join keys to the output column names of the RS operators using the
{{columnExprMap}} mapping present on each RS operator. In this case, it
fails to resolve the "0 _col1 (type: int), _col2 (type: date)" keys from the
RS[12] operator because "_col2 (type: date)" is not present in the
{{columnExprMap}} of RS[12]; thus it returns {{null}} and triggers the
aforementioned exception.
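In rough terms, the resolution step behaves like the sketch below (a simplified Python model, not Hive's actual Java code; all names and values are illustrative):
{code:python}
# Simplified model of ExprNodeDescUtils#resolveJoinKeysAsRSColumns:
# each join-key expression must map back to an RS output column through the
# operator's columnExprMap; if any key has no entry, resolution fails.

def resolve_join_keys_as_rs_columns(join_keys, column_expr_map):
    """Return the RS output columns producing the given key expressions,
    or None if any key expression is missing from columnExprMap."""
    resolved = []
    for key in join_keys:
        match = next(
            (col for col, expr in column_expr_map.items() if expr == key),
            None,
        )
        if match is None:
            return None  # -> SemanticException("Error resolving join keys")
        resolved.append(match)
    return resolved

# Hypothetical columnExprMap of RS[12]: output column -> input expression.
# The constant date sits in the map, but the key expression list still
# references the column _col2, which therefore cannot be resolved.
column_expr_map = {
    "KEY.reducesinkkey0": "_col1",
    "KEY.reducesinkkey1": "CONST date",
}

print(resolve_join_keys_as_rs_columns(["_col1"], column_expr_map))
# ['KEY.reducesinkkey0']
print(resolve_join_keys_as_rs_columns(["_col1", "_col2"], column_expr_map))
# None
{code}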
Looking closer at RS[12], we can see a slight inconsistency between the
columnExprMap and the key expressions. KEY.reducesinkkey1 denotes that the
input expression is a constant date, but this is not reflected in the key
expressions, where _col2 is an expression referencing a column. It appears
the constant propagation logic did not run to completion for RS[12], leaving
it in an intermediate state that is problematic for the dynamic partition
hash join conversion.
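The problematic intermediate state can be pictured with the hypothetical Python model below (illustrative names only, not Hive's internal structures):
{code:python}
# After *complete* constant propagation, the key expression and the
# columnExprMap entry would both hold the constant; in the observed plan
# only the columnExprMap was rewritten, leaving the two out of sync.

rs12_column_expr_map = {
    "KEY.reducesinkkey0": "_col1",       # column reference
    "KEY.reducesinkkey1": "CONST date",  # rewritten to a constant
}
rs12_key_exprs = ["_col1", "_col2"]      # _col2 still references a column

def is_consistent(key_exprs, column_expr_map):
    """A consistent RS has every key expression present as some
    columnExprMap value, so each key can be resolved to a column."""
    return all(expr in column_expr_map.values() for expr in key_exprs)

print(is_consistent(rs12_key_exprs, rs12_column_expr_map))  # False
{code}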
> Error resolving join keys during conversion to dynamic partition hashjoin
> -------------------------------------------------------------------------
>
> Key: HIVE-27658
> URL: https://issues.apache.org/jira/browse/HIVE-27658
> Project: Hive
> Issue Type: Bug
> Components: Query Planning
> Affects Versions: 3.1.3, 4.0.0-beta-1
> Reporter: xiaojunxiang
> Priority: Major
> Attachments: hive27658-query-plan.pdf, hive27658.q
>
>
> In certain cases the compilation of queries fails during the conversion to a
> dynamic partition hash join with a stack trace similar to the one shown
> below.
> {noformat}
> 2023-08-31T10:22:21,738 WARN [HiveServer2-Handler-Pool: Thread-100]: thrift.ThriftCLIService (()) - Error executing statement:
> org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException Error resolving join keys
>         at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:260) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.operation.Operation.run(Operation.java:247) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:541) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:527) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:312) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:562) ~[hive-service-100.jar:?]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557) ~[hive-exec-100.jar:?]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542) ~[hive-exec-100.jar:?]
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-100.jar:?]
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:647) ~[hive-exec-100.jar:?]
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-100.jar:?]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_312]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_312]
>         at java.lang.Thread.run(Thread.java:748) [?:1.8.0_312]
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Error resolving join keys
>         at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.getMapJoinDesc(MapJoinProcessor.java:1105) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.convertJoinOpMapJoinOp(MapJoinProcessor.java:372) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.convertJoinMapJoin(ConvertJoinMapJoin.java:1056) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.convertJoinDynamicPartitionedHashJoin(ConvertJoinMapJoin.java:1280) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.fallbackToReduceSideJoin(ConvertJoinMapJoin.java:1312) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.checkAndConvertSMBJoin(ConvertJoinMapJoin.java:371) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.process(ConvertJoinMapJoin.java:151) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsDependentOptimizations(TezCompiler.java:447) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:160) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:144) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12320) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:164) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:659) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768) ~[hive-exec-100.jar:?]
>         at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) ~[hive-exec-100.jar:?]
>         at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197) ~[hive-service-100.jar:?]
>         ... 15 more
> {noformat}
> The problem was originally reported for a query with a LEFT SEMI JOIN and the
> scenario is outlined below.
> {code:sql}
> create database test_condition;
> use test_condition;
> create external table to_szyy_user_right_issue_log_df(flow_no_ string,
> activity_code_ string, right_id_ string, user_id_ string,issue_flag_ string)
> partitioned by (ds string)
> STORED AS parquet TBLPROPERTIES('parquet.compress'='SNAPPY');
> create external table to_t0111_s62t1_cst_prft_df(dccp_stcd
> string,dccp_ordr_ar_id string) partitioned by (ds string)
> STORED AS parquet TBLPROPERTIES('parquet.compress'='SNAPPY');
> alter table to_szyy_user_right_issue_log_df add partition(ds='2023-08-24');
> alter table to_t0111_s62t1_cst_prft_df add partition(ds='2023-08-24');
> alter table to_szyy_user_right_issue_log_df partition(ds='2023-08-24') update
> statistics set('numRows'='8146725','rawDataSize'='46331126445');
> alter table to_t0111_s62t1_cst_prft_df partition(ds='2023-08-24') update
> statistics set('numRows'='15680439','rawDataSize'='56180088521');
> set hive.auto.convert.join.noconditionaltask.size=8153960755;
> set hive.auto.convert.join=true;
> set hive.optimize.dynamic.partition.hashjoin=true;
> set hive.stats.fetch.column.stats=false;
> set hive.cbo.enable=true;
> explain
> select flow_no_, activity_code_, right_id_, user_id_
> from test_condition.to_szyy_user_right_issue_log_df rlog
> left semi join test_condition.to_t0111_s62t1_cst_prft_df prft on prft.ds =
> '2023-08-24' and rlog.flow_no_ = prft.dccp_ordr_ar_id
> group by flow_no_, activity_code_, right_id_, user_id_;
> {code}
> The {{SemanticException}} reported above is thrown by the [dynamic partition
> hashjoin transformation
> logic|https://github.com/apache/hive/blob/9b4ea7affa4902fc2849f1a88b68103940fc9866/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java#L1590],
> so a workaround is to disable the respective optimization via the
> {{hive.optimize.dynamic.partition.hashjoin}} property.
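> For reference, the workaround can be applied at the session level (the
> property name is taken from the description above; disabling it skips the
> problematic conversion at the cost of a regular shuffle join):
> {code:sql}
> -- Skip the dynamic partition hash join conversion for this session
> set hive.optimize.dynamic.partition.hashjoin=false;
> {code}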
--
This message was sent by Atlassian Jira
(v8.20.10#820010)