[ https://issues.apache.org/jira/browse/HIVE-25109?focusedWorklogId=599138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599138 ]

ASF GitHub Bot logged work on HIVE-25109:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/May/21 10:56
            Start Date: 19/May/21 10:56
    Worklog Time Spent: 10m 
      Work Description: kgyrtkirk commented on a change in pull request #2268:
URL: https://github.com/apache/hive/pull/2268#discussion_r635129376



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
##########
@@ -5031,7 +5038,7 @@ private RelNode genLogicalPlan(QB qb, boolean outerMostQB,
 
       // Build Rel for Constraint checks
       Pair<RelNode, RowResolver> constraintPair =
-          genConstraintFilterLogicalPlan(qb, srcRel, outerNameToPosMap, outerRR);
+          genConstraintFilterLogicalPlan(qb, selPair, outerNameToPosMap, outerRR);

Review comment:
       Will this work okay when `selectRel == null`?
   The previous code passed `srcRel`, which is the select Rel when one was generated and otherwise the previous `srcRel`.
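   For context, a minimal hypothetical sketch of the fallback behaviour being referenced (the names are illustrative, not the actual `CalcitePlanner` fields): when no select Rel is generated, the caller keeps using the previous `srcRel`, so the new `selPair`-based call has to tolerate a null select side.
   ```java
   public class NullFallbackSketch {
     // If the optional select node is null, fall back to the previous source node.
     static <T> T orPrevious(T selectRel, T previousSrcRel) {
       return (selectRel == null) ? previousSrcRel : selectRel;
     }

     public static void main(String[] args) {
       String previous = "srcRel";
       String select = null; // no select Rel was generated
       System.out.println(orPrevious(select, previous)); // prints "srcRel"
     }
   }
   ```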

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
##########
@@ -3475,15 +3475,22 @@ private RelNode genFilterLogicalPlan(QB qb, RelNode srcRel, ImmutableMap<String,
         return null;
       }
 
-      RowResolver inputRR = relToHiveRR.get(srcRel);
+      RowResolver inputRR = relToHiveRR.get(selPair.left);
       RexNode constraintUDF = RexNodeTypeCheck.genConstraintsExpr(
           conf, cluster.getRexBuilder(), getTargetTable(qb, dest), updating(dest), inputRR);
       if (constraintUDF == null) {
         return null;
       }
 
-      RelNode constraintRel = genFilterRelNode(constraintUDF, srcRel, outerNameToPosMap, outerRR);
-      return new Pair<>(constraintRel, inputRR);
+      RelNode constraintRel = genFilterRelNode(constraintUDF, selPair.left, outerNameToPosMap, outerRR);
+
+      List<RexNode> originalInputRefs = toRexNodeList(selPair.left);
+      List<RexNode> selectedRefs = Lists.newArrayList();
+      for (int index = 0; index < selPair.right.getColumnInfos().size(); index++) {
+        selectedRefs.add(originalInputRefs.get(index));
+      }

Review comment:
       I'm not sure about this; this block could be replaced with something like
   ```
   selectedRefs.addAll(originalInputRefs.subList(0, selPair.right.getColumnInfos().size()))
   ```
   which looks odd to me, because it would mean that the `selected` refs can only ever be a prefix of the original ones - is that true in every case?
   Shouldn't this code be checking the `index` of each `RexInputRef` instead?
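   To illustrate that last question, a hedged sketch of index-based selection (it relies on Calcite's `RexInputRef.getIndex()`; `projects` is a hypothetical stand-in for the select Rel's projection expressions, not a variable from the patch):
   ```java
   import java.util.List;
   import org.apache.calcite.rex.RexInputRef;
   import org.apache.calcite.rex.RexNode;

   // Resolve each projected RexInputRef through its index instead of
   // assuming the selected refs form a prefix of the original input refs.
   static void collectSelectedRefs(List<RexNode> projects,
       List<RexNode> originalInputRefs, List<RexNode> selectedRefs) {
     for (RexNode node : projects) {
       if (node instanceof RexInputRef) {
         RexInputRef ref = (RexInputRef) node;
         selectedRefs.add(originalInputRefs.get(ref.getIndex()));
       }
     }
   }
   ```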




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 599138)
    Time Spent: 20m  (was: 10m)

> CBO fails when updating table has constraints defined
> -----------------------------------------------------
>
>                 Key: HIVE-25109
>                 URL: https://issues.apache.org/jira/browse/HIVE-25109
>             Project: Hive
>          Issue Type: Bug
>          Components: CBO, Logical Optimizer
>            Reporter: Krisztian Kasa
>            Assignee: Krisztian Kasa
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> create table acid_uami_n0(i int,
>                  de decimal(5,2) constraint nn1 not null enforced,
>                  vc varchar(128) constraint ch2 CHECK (de >= cast(i as decimal(5,2))) enforced)
>                  clustered by (i) into 2 buckets stored as orc TBLPROPERTIES ('transactional'='true');
> -- update
> explain cbo
> update acid_uami_n0 set de = 893.14 where de = 103.00;
> {code}
> hive.log
> {code}
> 2021-05-13T06:08:05,547 ERROR [061f4d3b-9cbd-464f-80db-f0cd443dc3d7 main] parse.UpdateDeleteSemanticAnalyzer: CBO failed, skipping CBO.
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Result Schema didn't match Optimized Op Tree Schema
>         at org.apache.hadoop.hive.ql.optimizer.calcite.translator.PlanModifierForASTConv.renameTopLevelSelectInResultSchema(PlanModifierForASTConv.java:217) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.optimizer.calcite.translator.PlanModifierForASTConv.convertOpTree(PlanModifierForASTConv.java:105) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convert(ASTConverter.java:119) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1410) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:572) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12488) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:449) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.analyzeInternal(RewriteSemanticAnalyzer.java:67) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:316) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.reparseAndSuperAnalyze(UpdateDeleteSemanticAnalyzer.java:208) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeUpdate(UpdateDeleteSemanticAnalyzer.java:63) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyze(UpdateDeleteSemanticAnalyzer.java:53) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.analyzeInternal(RewriteSemanticAnalyzer.java:72) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:316) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:171) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:316) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:223) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:104) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:492) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:445) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:409) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:403) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:125) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:229) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258) [hive-cli-4.0.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:203) [hive-cli-4.0.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:129) [hive-cli-4.0.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424) [hive-cli-4.0.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:355) [hive-cli-4.0.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:744) [hive-it-util-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:714) [hive-it-util-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:170) [hive-it-util-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157) [hive-it-util-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62) [test-classes/:?]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
>         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) [junit-4.13.jar:4.13]
>         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.13.jar:4.13]
>         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) [junit-4.13.jar:4.13]
>         at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) [junit-4.13.jar:4.13]
>         at org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:135) [hive-it-util-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13]
>         at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) [junit-4.13.jar:4.13]
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) [junit-4.13.jar:4.13]
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) [junit-4.13.jar:4.13]
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) [junit-4.13.jar:4.13]
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13]
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13]
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13]
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
