zabetak commented on a change in pull request #1959:
URL: https://github.com/apache/hive/pull/1959#discussion_r572850089
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelFactories.java
##########
@@ -143,40 +144,24 @@ public RelNode createFilter(RelNode child, RexNode condition, Set<CorrelationId>
* Right input
* @param condition
* Join condition
- * @param joinType
- * Join type
- * @param variablesStopped
+ * @param variablesStoppedd
* Set of names of variables which are set by the LHS and used by
* the RHS and are not available to nodes above this JoinRel in the
* tree
+ *@param joinType
+ * Join type
* @param semiJoinDone
* Whether this join has been translated to a semi-join
*/
@Override
- public RelNode createJoin(RelNode left, RelNode right, RexNode condition, JoinRelType joinType,
- Set<String> variablesStopped, boolean semiJoinDone) {
+ public RelNode createJoin(RelNode left, RelNode right, List<RelHint> hints, RexNode condition,
+ Set<CorrelationId> variablesStoppedd, JoinRelType joinType, boolean semiJoinDone) {
Review comment:
Fix indentation?
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveTableScan.java
##########
@@ -268,7 +268,7 @@ public boolean isInsideView() {
// Also include partition list key to trigger cost evaluation even if an
// expression was already generated.
public String computeDigest() {
Review comment:
Is the method still used somewhere? Maybe now it is necessary to adapt the
`explainTerms` method instead by adding more items. For instance:
`.itemIf("htColumns", this.neededColIndxsFrmReloptHT, pw.getDetailLevel() ==
SqlExplainLevel.DIGEST_ATTRIBUTES)`
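A rough sketch of what that override could look like (hypothetical; the field
set shown here is only the one from the snippet above, and the full list must
match what `computeDigest` currently includes in `HiveTableScan`):

```java
// Sketch only: surface the digest-relevant attributes through explainTerms
// so the standard digest machinery picks them up, instead of a custom
// computeDigest method.
@Override
public RelWriter explainTerms(RelWriter pw) {
  return super.explainTerms(pw)
      .itemIf("htColumns", this.neededColIndxsFrmReloptHT,
          pw.getDetailLevel() == SqlExplainLevel.DIGEST_ATTRIBUTES);
}
```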
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveJoinConstraintsRule.java
##########
@@ -334,11 +335,13 @@ private void rewrite(final Mode mode, final RelNode inputToKeep, final RelNode i
}
} else { // Mode.TRANSFORM
// Trigger transformation
+ List<RexNode> exps = new ArrayList<>();
+ project.accept(new ChildExpsRexShuttle(exps));
call.transformTo(call.builder()
.push(leftInput).push(rightInput)
.join(JoinRelType.INNER, join.getCondition())
.convert(call.rel(1).getRowType(), false) // Preserve nullability
- .project(project.getChildExps())
Review comment:
Why not `project.getProjects()` as in other places?
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdRowCount.java
##########
@@ -170,7 +170,7 @@ public Double getRowCount(Sort rel, RelMetadataQuery mq) {
@Override
public Double getRowCount(Filter rel, RelMetadataQuery mq) {
if (rel instanceof StatEnhancedHiveFilter) {
- return rel.getRows();
+ return 1.0D;
Review comment:
This seems wrong; we should rather do
`((StatEnhancedHiveFilter) rel).getRowCount()`.
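For clarity, the suggested shape would be something like this (a sketch, assuming
`StatEnhancedHiveFilter` exposes the recorded row count via `getRowCount()`):

```java
@Override
public Double getRowCount(Filter rel, RelMetadataQuery mq) {
  if (rel instanceof StatEnhancedHiveFilter) {
    // Return the row count stored on the enhanced filter, not a constant.
    return ((StatEnhancedHiveFilter) rel).getRowCount();
  }
  // ... existing estimation logic for plain filters ...
}
```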
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRelFieldTrimmer.java
##########
@@ -555,7 +556,8 @@ private Aggregate rewriteGBConstantKeys(Aggregate aggregate, ImmutableBitSet fie
RexBuilder rexBuilder = aggregate.getCluster().getRexBuilder();
final List<RexNode> newProjects = new ArrayList<>();
- final List<RexNode> inputExprs = input.getChildExps();
+ final List<RexNode> inputExprs = new ArrayList<>();
+ input.accept(new ChildExpsRexShuttle(inputExprs));
if (inputExprs == null || inputExprs.isEmpty()) {
return aggregate;
}
Review comment:
This comment is not related to the changes but is a bit more general. The
intention of this piece of code seems to be to find whether all group-by
expressions are constant. However, I am not sure that `getChildExps()` is the
right way to go. Most likely the code in `AggregateProjectPullUpConstantsRule`
is more appropriate for this.
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/ChildExpsRexShuttle.java
##########
@@ -0,0 +1,113 @@
+package org.apache.hadoop.hive.ql.optimizer.calcite;
+
+import org.apache.calcite.rex.*;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+public class ChildExpsRexShuttle extends RexShuttle {
+ private final List<RexNode> exps;
+
+ public ChildExpsRexShuttle(List<RexNode> exps) {
+ this.exps = exps;
+ }
+
+ @Override
+ public RexNode visitOver(RexOver over) {
+ exps.add(over);
+ return super.visitOver(over);
Review comment:
Do not call `super.visitXxx` here and in the methods below, to avoid
recursively adding the children of the input expression.
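In other words, each visit method should record the expression and return it
directly (sketch):

```java
@Override
public RexNode visitOver(RexOver over) {
  exps.add(over);
  // Returning the node directly (instead of super.visitOver(over)) stops the
  // shuttle from descending into the operands and adding them as well.
  return over;
}
```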
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
##########
@@ -74,25 +74,21 @@
HiveJoinProjectTransposeRule.LEFT_PROJECT,
HiveJoinProjectTransposeRule.RIGHT_PROJECT,
HiveProjectMergeRule.INSTANCE))
- .addRuleInstance(ProjectRemoveRule.INSTANCE)
+ .addRuleInstance(ProjectRemoveRule.Config.DEFAULT.toRule())
.addRuleInstance(HiveRootJoinProjectInsert.INSTANCE)
.build();
public static final MaterializedViewProjectFilterRule INSTANCE_PROJECT_FILTER =
- new MaterializedViewProjectFilterRule(HiveRelFactories.HIVE_BUILDER,
- true, PROGRAM, false);
+ MaterializedViewProjectFilterRule.Config.DEFAULT.toRule();
Review comment:
I think we should pass the `HIVE_BUILDER` in the rule. Moreover, we
should double-check that the other parameters of the rule
(`generateUnionRewriting`, `fastBailOut`, etc.) are set correctly. The comment
applies also to the rules below.
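A possible shape, hedged on the Config API of the Calcite version in use (the
`with*` method names assume `MaterializedViewRule.Config`; the values mirror
the old constructor call `(HIVE_BUILDER, true, PROGRAM, false)`):

```java
// Sketch only: carry the Hive builder and the old constructor arguments
// through the new Config API instead of relying on Config.DEFAULT.
public static final MaterializedViewProjectFilterRule INSTANCE_PROJECT_FILTER =
    (MaterializedViewProjectFilterRule) MaterializedViewProjectFilterRule.Config.DEFAULT
        .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER)
        .as(MaterializedViewProjectFilterRule.Config.class)
        .withGenerateUnionRewriting(true)
        .withUnionRewritingPullProgram(PROGRAM)
        .withFastBailOut(false)
        .toRule();
```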
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveDruidRules.java
##########
@@ -59,40 +54,35 @@
*/
public class HiveDruidRules {
- public static final DruidFilterRule FILTER = new DruidFilterRule(HiveRelFactories.HIVE_BUILDER);
+ public static final DruidFilterRule FILTER = (DruidFilterRule) DruidFilterRule.Config.EMPTY
+ .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER).toRule();
- public static final DruidProjectRule PROJECT = new DruidProjectRule(HiveRelFactories.HIVE_BUILDER);
+ public static final DruidProjectRule PROJECT = (DruidProjectRule) DruidProjectRule.Config.EMPTY
+ .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER).toRule();
- public static final DruidAggregateRule AGGREGATE = new DruidAggregateRule(HiveRelFactories.HIVE_BUILDER);
+ public static final DruidAggregateRule AGGREGATE = (DruidAggregateRule) DruidAggregateRule.Config.EMPTY
+ .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER).toRule();
public static final DruidAggregateProjectRule AGGREGATE_PROJECT =
- new DruidAggregateProjectRule(HiveRelFactories.HIVE_BUILDER);
+ (DruidAggregateProjectRule) DruidAggregateProjectRule.Config.EMPTY.
+ withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER).toRule();
- public static final DruidSortRule SORT = new DruidSortRule(HiveRelFactories.HIVE_BUILDER);
+ public static final DruidSortRule SORT = (DruidSortRule) DruidSortRule.Config.EMPTY
+ .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER).toRule();
- public static final DruidSortProjectTransposeRule SORT_PROJECT_TRANSPOSE =
- new DruidSortProjectTransposeRule(HiveRelFactories.HIVE_BUILDER);
+ public static final SortProjectTransposeRule SORT_PROJECT_TRANSPOSE = DruidRules.SORT_PROJECT_TRANSPOSE;
Review comment:
Shouldn't we pass the `HIVE_BUILDER` to this rule (and those that
follow)?
##########
File path:
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
##########
@@ -17,13 +17,8 @@
*/
package org.apache.hadoop.hive.ql.optimizer.calcite;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
Review comment:
I think that Hive coding conventions forbid `*` imports.
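The wildcard can be expanded into the explicit set the file actually uses,
e.g. (the exact list must of course match the symbols referenced in
`HiveCalciteUtil`):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
```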
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]