jcamachor commented on a change in pull request #2401:
URL: https://github.com/apache/hive/pull/2401#discussion_r657596794



##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -261,41 +263,61 @@ protected RelNode 
applyMaterializedViewRewriting(RelOptPlanner planner, RelNode
         if (materialization.isSourceTablesCompacted()) {
           return calcitePreMVRewritingPlan;
         }
-        // First we need to check if it is valid to convert to MERGE/INSERT 
INTO.
-        // If we succeed, we modify the plan and afterwards the AST.
-        // MV should be an acid table.
-        MaterializedViewRewritingRelVisitor visitor = new 
MaterializedViewRewritingRelVisitor();
-        visitor.go(basePlan);
-        if (visitor.isRewritingAllowed()) {
-          if (materialization.isSourceTablesUpdateDeleteModified()) {
-            if (visitor.isContainsAggregate()) {
-              if (visitor.getCountIndex() < 0) {
-                // count(*) is necessary for determine which rows should be 
deleted from the view
-                // if view definition does not have it incremental rebuild can 
not be performed, bail out
-                return calcitePreMVRewritingPlan;
-              }
-              return toAggregateInsertDeleteIncremental(basePlan, mdProvider, 
executorProvider);
-            } else {
-              return toJoinInsertDeleteIncremental(
-                      basePlan, mdProvider, executorProvider, optCluster, 
calcitePreMVRewritingPlan);
-            }
-          } else {
-            // Trigger rewriting to remove UNION branch with MV
-            if (visitor.isContainsAggregate()) {
-              return toAggregateInsertIncremental(basePlan, mdProvider, 
executorProvider, optCluster, calcitePreMVRewritingPlan);
-            } else {
-              return toJoinInsertIncremental(basePlan, mdProvider, 
executorProvider);
-            }
-          }
-        } else if (materialization.isSourceTablesUpdateDeleteModified()) {
-          return calcitePreMVRewritingPlan;
+
+        RelNode incrementalRebuildPlan = toIncrementalRebuild(
+                basePlan, mdProvider, executorProvider, optCluster, 
calcitePreMVRewritingPlan, materialization);
+        if (mvRebuildMode != 
MaterializationRebuildMode.INSERT_OVERWRITE_REBUILD) {
+          return incrementalRebuildPlan;
         }
+
+        return toPartitionInsertOverwrite(

Review comment:
       `toPartitionInsertOverwrite` -> `toPartitionIncrementalRebuildPlan`?

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -353,6 +375,23 @@ private RelNode toJoinInsertIncremental(
               basePlan, mdProvider, executorProvider, 
HiveJoinInsertIncrementalRewritingRule.INSTANCE);
     }
 
+    private RelNode toPartitionInsertOverwrite(
+            RelNode basePlan, RelMetadataProvider mdProvider, RexExecutor 
executorProvider,
+            HiveRelOptMaterialization materialization, RelNode 
calcitePreMVRewritingPlan) {
+
+      if (materialization.isSourceTablesUpdateDeleteModified()) {
+        return calcitePreMVRewritingPlan;
+      }
+
+      RelOptHiveTable hiveTable = (RelOptHiveTable) 
materialization.tableRel.getTable();
+      if (!AcidUtils.isInsertOnlyTable(hiveTable.getHiveTableMD())) {
+        return applyPreJoinOrderingTransforms(basePlan, mdProvider, 
executorProvider);
+      }
+
+      return toIncrementalRebuild(
+              basePlan, mdProvider, executorProvider, 
HiveAggregatePartitionIncrementalRewritingRule.INSTANCE);
+    }
+
     private RelNode toIncrementalRebuild(

Review comment:
       `toIncrementalRebuild` -> `applyIncrementalRebuildRule`? As you can see, 
I am just trying to revise some of these method names so their purpose becomes 
clearer. If you have other ideas, please go with them.

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -353,6 +375,23 @@ private RelNode toJoinInsertIncremental(
               basePlan, mdProvider, executorProvider, 
HiveJoinInsertIncrementalRewritingRule.INSTANCE);
     }
 
+    private RelNode toPartitionInsertOverwrite(
+            RelNode basePlan, RelMetadataProvider mdProvider, RexExecutor 
executorProvider,
+            HiveRelOptMaterialization materialization, RelNode 
calcitePreMVRewritingPlan) {
+
+      if (materialization.isSourceTablesUpdateDeleteModified()) {
+        return calcitePreMVRewritingPlan;
+      }
+
+      RelOptHiveTable hiveTable = (RelOptHiveTable) 
materialization.tableRel.getTable();
+      if (!AcidUtils.isInsertOnlyTable(hiveTable.getHiveTableMD())) {
+        return applyPreJoinOrderingTransforms(basePlan, mdProvider, 
executorProvider);
+      }
+
+      return toIncrementalRebuild(
+              basePlan, mdProvider, executorProvider, 
HiveAggregatePartitionIncrementalRewritingRule.INSTANCE);

Review comment:
       I think having another variant of this rule for non-aggregate MVs would 
be useful too?

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveAggregatePartitionIncrementalRewritingRule.java
##########
@@ -0,0 +1,152 @@
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules.views;/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import com.google.common.collect.ImmutableList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexTableInputRef;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+
+import static 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveCalciteUtil.findRexTableInputRefs;
+
+/**
+ * Rule to prepare the plan for incremental view maintenance if the view is 
partitioned and insert only:
+ * Insert overwrite the partitions which are affected since the last rebuild 
only and leave the
+ * rest of the partitions intact.
+ *
+ * Assume that we have a materialized view partitioned on column a and writeId 
was 1 at the last rebuild:
+ *
+ * CREATE MATERIALIZED VIEW mat1 PARTITIONED ON (a) STORED AS ORC 
TBLPROPERTIES ("transactional"="true", 
"transactional_properties"="insert_only") AS
+ * SELECT a, b, sum(c) sumc FROM t1 GROUP BY b, a;
+ *
+ * 1. Query all rows from source tables since the last rebuild.
+ * 2. Query all rows from MV which are in any of the partitions queried in 1.
+ * 3. Take the union of rows from 1. and 2. and perform the same aggregations 
defined in the MV
+ *
+ * SELECT b, sum(sumc), a FROM (
+ *     SELECT b, sumc, a FROM mat1
+ *     LEFT SEMI JOIN (SELECT b, sum(c), a FROM t1 WHERE ROW__ID.writeId > 1 
GROUP BY b, a) q ON (mat1.a <=> q.a)
+ *     UNION ALL
+ *     SELECT b, sum(c) sumc, a FROM t1 WHERE ROW__ID.writeId > 1 GROUP BY b, a
+ * ) sub
+ * GROUP BY a, b
+ */
+public class HiveAggregatePartitionIncrementalRewritingRule extends RelOptRule 
{
+  private static final Logger LOG = 
LoggerFactory.getLogger(HiveAggregatePartitionIncrementalRewritingRule.class);
+
+  public static final HiveAggregatePartitionIncrementalRewritingRule INSTANCE =
+          new HiveAggregatePartitionIncrementalRewritingRule();
+
+  private HiveAggregatePartitionIncrementalRewritingRule() {
+    super(operand(Aggregate.class, operand(Union.class, any())),
+            HiveRelFactories.HIVE_BUILDER, 
"HiveJoinPartitionIncrementalRewritingRule");
+  }
+
+  @Override
+  public void onMatch(RelOptRuleCall call) {
+    RexBuilder rexBuilder = call.builder().getRexBuilder();
+
+    final Aggregate aggregate = call.rel(0);
+    final Union union = call.rel(1);
+    final RelNode queryBranch = union.getInput(0);
+    final RelNode mvBranch = union.getInput(1);
+
+    // find Partition col indexes in mvBranch top operator row schema
+    // mvBranch can be more complex than just a TS on the MV and the partition 
columns indexes in the top Operator's
+    // row schema may differ from the one in the TS row schema. Example:
+    // Project($2, $0, $1)
+    //   TableScan(table=materialized_view1, schema=a, b, part_col)
+    RelMetadataQuery relMetadataQuery = RelMetadataQuery.instance();
+    int partitionColumnCount = -1;
+    List<Integer> partitionColumnIndexes = new ArrayList<>();
+    for (int i = 0; i < mvBranch.getRowType().getFieldList().size(); ++i) {
+      RelDataTypeField relDataTypeField = 
mvBranch.getRowType().getFieldList().get(i);
+      RexInputRef inputRef = 
rexBuilder.makeInputRef(relDataTypeField.getType(), i);
+
+      Set<RexNode> expressionLineage = 
relMetadataQuery.getExpressionLineage(mvBranch, inputRef);
+      if (expressionLineage == null || expressionLineage.size() != 1) {
+        continue;
+      }
+
+      Set<RexTableInputRef> tableInputRefs = 
findRexTableInputRefs(expressionLineage.iterator().next());
+      if (tableInputRefs.size() != 1) {
+        continue;
+      }
+
+      RexTableInputRef tableInputRef = tableInputRefs.iterator().next();
+      RelOptHiveTable relOptHiveTable = (RelOptHiveTable) 
tableInputRef.getTableRef().getTable();
+      if (!(relOptHiveTable.getHiveTableMD().isMaterializedView())) {
+        continue;

Review comment:
       I believe this should never happen. Since proceeding would not be safe 
if it does, should you log a message and bail out of the rule completely 
rather than `continue`?

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveAggregatePartitionIncrementalRewritingRule.java
##########
@@ -0,0 +1,152 @@
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules.views;/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import com.google.common.collect.ImmutableList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexTableInputRef;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+
+import static 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveCalciteUtil.findRexTableInputRefs;
+
+/**
+ * Rule to prepare the plan for incremental view maintenance if the view is 
partitioned and insert only:
+ * Insert overwrite the partitions which are affected since the last rebuild 
only and leave the
+ * rest of the partitions intact.
+ *
+ * Assume that we have a materialized view partitioned on column a and writeId 
was 1 at the last rebuild:
+ *
+ * CREATE MATERIALIZED VIEW mat1 PARTITIONED ON (a) STORED AS ORC 
TBLPROPERTIES ("transactional"="true", 
"transactional_properties"="insert_only") AS
+ * SELECT a, b, sum(c) sumc FROM t1 GROUP BY b, a;
+ *
+ * 1. Query all rows from source tables since the last rebuild.
+ * 2. Query all rows from MV which are in any of the partitions queried in 1.
+ * 3. Take the union of rows from 1. and 2. and perform the same aggregations 
defined in the MV
+ *
+ * SELECT b, sum(sumc), a FROM (
+ *     SELECT b, sumc, a FROM mat1
+ *     LEFT SEMI JOIN (SELECT b, sum(c), a FROM t1 WHERE ROW__ID.writeId > 1 
GROUP BY b, a) q ON (mat1.a <=> q.a)
+ *     UNION ALL
+ *     SELECT b, sum(c) sumc, a FROM t1 WHERE ROW__ID.writeId > 1 GROUP BY b, a
+ * ) sub
+ * GROUP BY a, b
+ */
+public class HiveAggregatePartitionIncrementalRewritingRule extends RelOptRule 
{
+  private static final Logger LOG = 
LoggerFactory.getLogger(HiveAggregatePartitionIncrementalRewritingRule.class);
+
+  public static final HiveAggregatePartitionIncrementalRewritingRule INSTANCE =
+          new HiveAggregatePartitionIncrementalRewritingRule();
+
+  private HiveAggregatePartitionIncrementalRewritingRule() {
+    super(operand(Aggregate.class, operand(Union.class, any())),
+            HiveRelFactories.HIVE_BUILDER, 
"HiveJoinPartitionIncrementalRewritingRule");
+  }
+
+  @Override
+  public void onMatch(RelOptRuleCall call) {
+    RexBuilder rexBuilder = call.builder().getRexBuilder();
+
+    final Aggregate aggregate = call.rel(0);
+    final Union union = call.rel(1);
+    final RelNode queryBranch = union.getInput(0);
+    final RelNode mvBranch = union.getInput(1);
+
+    // find Partition col indexes in mvBranch top operator row schema
+    // mvBranch can be more complex than just a TS on the MV and the partition 
columns indexes in the top Operator's
+    // row schema may differ from the one in the TS row schema. Example:
+    // Project($2, $0, $1)
+    //   TableScan(table=materialized_view1, schema=a, b, part_col)
+    RelMetadataQuery relMetadataQuery = RelMetadataQuery.instance();
+    int partitionColumnCount = -1;
+    List<Integer> partitionColumnIndexes = new ArrayList<>();
+    for (int i = 0; i < mvBranch.getRowType().getFieldList().size(); ++i) {
+      RelDataTypeField relDataTypeField = 
mvBranch.getRowType().getFieldList().get(i);
+      RexInputRef inputRef = 
rexBuilder.makeInputRef(relDataTypeField.getType(), i);
+
+      Set<RexNode> expressionLineage = 
relMetadataQuery.getExpressionLineage(mvBranch, inputRef);
+      if (expressionLineage == null || expressionLineage.size() != 1) {
+        continue;
+      }
+
+      Set<RexTableInputRef> tableInputRefs = 
findRexTableInputRefs(expressionLineage.iterator().next());
+      if (tableInputRefs.size() != 1) {
+        continue;
+      }
+
+      RexTableInputRef tableInputRef = tableInputRefs.iterator().next();
+      RelOptHiveTable relOptHiveTable = (RelOptHiveTable) 
tableInputRef.getTableRef().getTable();
+      if (!(relOptHiveTable.getHiveTableMD().isMaterializedView())) {
+        continue;
+      }
+
+      partitionColumnCount = relOptHiveTable.getPartColInfoMap().size();
+      if 
(relOptHiveTable.getPartColInfoMap().containsKey(tableInputRef.getIndex())) {
+        partitionColumnIndexes.add(i);
+      }
+    }
+
+    if (partitionColumnCount == 0 || partitionColumnIndexes.size() != 
partitionColumnCount) {
+      LOG.debug("Could not found all partition column lineages, bail out.");

Review comment:
       Nit. `found` -> `find`.
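For readers following the hunk above: the loop maps the TableScan's partition columns to indexes in the branch's top operator's row schema, since projections can permute them (the `Project($2, $0, $1)` comment in the hunk). A toy sketch of that index mapping, with the Calcite lineage machinery abstracted away (everything here is my simplification, not the patch's code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Toy model of the partition-column index mapping: projection[i] names the
// TableScan column that feeds output position i; we collect the output
// positions whose lineage lands on a partition column of the scan.
public class PartitionIndexDemo {

  public static List<Integer> partitionIndexesInOutput(int[] projection,
                                                       Set<Integer> tsPartitionCols) {
    List<Integer> result = new ArrayList<>();
    for (int i = 0; i < projection.length; i++) {
      if (tsPartitionCols.contains(projection[i])) {
        result.add(i); // index in the top operator's row schema, not the TS's
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // TS schema: 0=a, 1=b, 2=part_col; plan is Project($2, $0, $1),
    // so part_col surfaces at output position 0.
    System.out.println(partitionIndexesInOutput(new int[]{2, 0, 1}, Set.of(2)));
  }
}
```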

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveAggregatePartitionIncrementalRewritingRule.java
##########
@@ -0,0 +1,152 @@
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules.views;/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import com.google.common.collect.ImmutableList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexTableInputRef;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+
+import static 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveCalciteUtil.findRexTableInputRefs;

Review comment:
       Probably not needed (see comment above).

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -353,6 +375,23 @@ private RelNode toJoinInsertIncremental(
               basePlan, mdProvider, executorProvider, 
HiveJoinInsertIncrementalRewritingRule.INSTANCE);
     }
 
+    private RelNode toPartitionInsertOverwrite(
+            RelNode basePlan, RelMetadataProvider mdProvider, RexExecutor 
executorProvider,
+            HiveRelOptMaterialization materialization, RelNode 
calcitePreMVRewritingPlan) {
+
+      if (materialization.isSourceTablesUpdateDeleteModified()) {
+        return calcitePreMVRewritingPlan;

Review comment:
       Please add a comment on why we bail out here.

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -261,41 +263,61 @@ protected RelNode 
applyMaterializedViewRewriting(RelOptPlanner planner, RelNode
         if (materialization.isSourceTablesCompacted()) {
           return calcitePreMVRewritingPlan;
         }
-        // First we need to check if it is valid to convert to MERGE/INSERT 
INTO.
-        // If we succeed, we modify the plan and afterwards the AST.
-        // MV should be an acid table.
-        MaterializedViewRewritingRelVisitor visitor = new 
MaterializedViewRewritingRelVisitor();
-        visitor.go(basePlan);
-        if (visitor.isRewritingAllowed()) {
-          if (materialization.isSourceTablesUpdateDeleteModified()) {
-            if (visitor.isContainsAggregate()) {
-              if (visitor.getCountIndex() < 0) {
-                // count(*) is necessary for determine which rows should be 
deleted from the view
-                // if view definition does not have it incremental rebuild can 
not be performed, bail out
-                return calcitePreMVRewritingPlan;
-              }
-              return toAggregateInsertDeleteIncremental(basePlan, mdProvider, 
executorProvider);
-            } else {
-              return toJoinInsertDeleteIncremental(
-                      basePlan, mdProvider, executorProvider, optCluster, 
calcitePreMVRewritingPlan);
-            }
-          } else {
-            // Trigger rewriting to remove UNION branch with MV
-            if (visitor.isContainsAggregate()) {
-              return toAggregateInsertIncremental(basePlan, mdProvider, 
executorProvider, optCluster, calcitePreMVRewritingPlan);
-            } else {
-              return toJoinInsertIncremental(basePlan, mdProvider, 
executorProvider);
-            }
-          }
-        } else if (materialization.isSourceTablesUpdateDeleteModified()) {
-          return calcitePreMVRewritingPlan;
+
+        RelNode incrementalRebuildPlan = toIncrementalRebuild(

Review comment:
       `toIncrementalRebuild` -> `toRecordIncrementalRebuildPlan`?

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveAggregatePartitionIncrementalRewritingRule.java
##########
@@ -0,0 +1,152 @@
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules.views;/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import com.google.common.collect.ImmutableList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexTableInputRef;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+
+import static 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveCalciteUtil.findRexTableInputRefs;
+
+/**
+ * Rule to prepare the plan for incremental view maintenance if the view is 
partitioned and insert only:
+ * Insert overwrite the partitions which are affected since the last rebuild 
only and leave the
+ * rest of the partitions intact.
+ *
+ * Assume that we have a materialized view partitioned on column a and writeId 
was 1 at the last rebuild:
+ *
+ * CREATE MATERIALIZED VIEW mat1 PARTITIONED ON (a) STORED AS ORC 
TBLPROPERTIES ("transactional"="true", 
"transactional_properties"="insert_only") AS
+ * SELECT a, b, sum(c) sumc FROM t1 GROUP BY b, a;
+ *
+ * 1. Query all rows from source tables since the last rebuild.
+ * 2. Query all rows from MV which are in any of the partitions queried in 1.
+ * 3. Take the union of rows from 1. and 2. and perform the same aggregations 
defined in the MV
+ *
+ * SELECT b, sum(sumc), a FROM (
+ *     SELECT b, sumc, a FROM mat1
+ *     LEFT SEMI JOIN (SELECT b, sum(c), a FROM t1 WHERE ROW__ID.writeId > 1 
GROUP BY b, a) q ON (mat1.a <=> q.a)
+ *     UNION ALL
+ *     SELECT b, sum(c) sumc, a FROM t1 WHERE ROW__ID.writeId > 1 GROUP BY b, a
+ * ) sub
+ * GROUP BY a, b
+ */
+public class HiveAggregatePartitionIncrementalRewritingRule extends RelOptRule 
{
+  private static final Logger LOG = 
LoggerFactory.getLogger(HiveAggregatePartitionIncrementalRewritingRule.class);
+
+  public static final HiveAggregatePartitionIncrementalRewritingRule INSTANCE =
+          new HiveAggregatePartitionIncrementalRewritingRule();
+
+  private HiveAggregatePartitionIncrementalRewritingRule() {
+    super(operand(Aggregate.class, operand(Union.class, any())),
+            HiveRelFactories.HIVE_BUILDER, 
"HiveJoinPartitionIncrementalRewritingRule");

Review comment:
       Rule name string passed to the constructor does not match the class 
(`HiveJoinPartitionIncrementalRewritingRule` vs 
`HiveAggregatePartitionIncrementalRewritingRule`).

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveAggregatePartitionIncrementalRewritingRule.java
##########
@@ -0,0 +1,152 @@
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules.views;/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import com.google.common.collect.ImmutableList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexTableInputRef;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+
+import static 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveCalciteUtil.findRexTableInputRefs;
+
+/**
+ * Rule to prepare the plan for incremental view maintenance if the view is 
partitioned and insert only:
+ * Insert overwrite the partitions which are affected since the last rebuild 
only and leave the
+ * rest of the partitions intact.
+ *
+ * Assume that we have a materialized view partitioned on column a and writeId 
was 1 at the last rebuild:
+ *
+ * CREATE MATERIALIZED VIEW mat1 PARTITIONED ON (a) STORED AS ORC 
TBLPROPERTIES ("transactional"="true", 
"transactional_properties"="insert_only") AS
+ * SELECT a, b, sum(c) sumc FROM t1 GROUP BY b, a;
+ *
+ * 1. Query all rows from source tables since the last rebuild.
+ * 2. Query all rows from MV which are in any of the partitions queried in 1.
+ * 3. Take the union of rows from 1. and 2. and perform the same aggregations 
defined in the MV
+ *
+ * SELECT b, sum(sumc), a FROM (

Review comment:
       Nit. Maybe you can reorder the columns in the SQL example. Probably the 
plan looks like this, but it is a bit confusing that the order is different 
from the one found in the MV definition.

##########
File path: 
ql/src/test/queries/clientpositive/materialized_view_partitioned_create_rewrite_agg.q
##########
@@ -0,0 +1,44 @@
+set hive.support.concurrency=true;
+set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
+
+CREATE TABLE t1(a int, b int,c int) STORED AS ORC TBLPROPERTIES 
('transactional' = 'true');
+
+INSERT INTO t1(a, b, c) VALUES
+(1, 1, 1),
+(1, 1, 4),
+(2, 1, 2),
+(1, 2, 10),
+(2, 2, 11),
+(1, 3, 100),
+(null, 4, 200);
+
+CREATE MATERIALIZED VIEW mat1 PARTITIONED ON (a) STORED AS ORC TBLPROPERTIES 
("transactional"="true", "transactional_properties"="insert_only") AS

Review comment:
       Can we add a couple of additional tests similar to this MV?
   i) Multiple partition columns.
   ii) Multiple partition columns that do not match beginning/end of output 
columns, e.g., `SELECT a, b, c, d, e` and PARTITIONED BY `a, c, d`.

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -353,6 +375,23 @@ private RelNode toJoinInsertIncremental(
               basePlan, mdProvider, executorProvider, 
HiveJoinInsertIncrementalRewritingRule.INSTANCE);
     }
 
+    private RelNode toPartitionInsertOverwrite(
+            RelNode basePlan, RelMetadataProvider mdProvider, RexExecutor 
executorProvider,
+            HiveRelOptMaterialization materialization, RelNode 
calcitePreMVRewritingPlan) {
+
+      if (materialization.isSourceTablesUpdateDeleteModified()) {
+        return calcitePreMVRewritingPlan;
+      }
+
+      RelOptHiveTable hiveTable = (RelOptHiveTable) 
materialization.tableRel.getTable();
+      if (!AcidUtils.isInsertOnlyTable(hiveTable.getHiveTableMD())) {

Review comment:
       Can't the rewriting also be applied to full ACID tables? For instance, 
it could be that the MV has functions that cannot be incrementally maintained 
at the record level but they could be at the partition level. If this is going 
to be explored in a follow-up, maybe you can leave a comment/TODO for 
clarification.

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java
##########
@@ -261,41 +263,61 @@ protected RelNode applyMaterializedViewRewriting(RelOptPlanner planner, RelNode
         if (materialization.isSourceTablesCompacted()) {
           return calcitePreMVRewritingPlan;
         }
-        // First we need to check if it is valid to convert to MERGE/INSERT INTO.
-        // If we succeed, we modify the plan and afterwards the AST.
-        // MV should be an acid table.
-        MaterializedViewRewritingRelVisitor visitor = new MaterializedViewRewritingRelVisitor();
-        visitor.go(basePlan);
-        if (visitor.isRewritingAllowed()) {
-          if (materialization.isSourceTablesUpdateDeleteModified()) {
-            if (visitor.isContainsAggregate()) {
-              if (visitor.getCountIndex() < 0) {
-                // count(*) is necessary for determining which rows should be deleted from the view;
-                // if the view definition does not have it, incremental rebuild cannot be performed, bail out
-                return calcitePreMVRewritingPlan;
-              }
-              return toAggregateInsertDeleteIncremental(basePlan, mdProvider, executorProvider);
-            } else {
-              return toJoinInsertDeleteIncremental(
-                      basePlan, mdProvider, executorProvider, optCluster, calcitePreMVRewritingPlan);
-            }
-          } else {
-            // Trigger rewriting to remove UNION branch with MV
-            if (visitor.isContainsAggregate()) {
-              return toAggregateInsertIncremental(basePlan, mdProvider, executorProvider, optCluster, calcitePreMVRewritingPlan);
-            } else {
-              return toJoinInsertIncremental(basePlan, mdProvider, executorProvider);
-            }
-          }
-        } else if (materialization.isSourceTablesUpdateDeleteModified()) {
-          return calcitePreMVRewritingPlan;
+
+        RelNode incrementalRebuildPlan = toIncrementalRebuild(
+                basePlan, mdProvider, executorProvider, optCluster, calcitePreMVRewritingPlan, materialization);
+        if (mvRebuildMode != MaterializationRebuildMode.INSERT_OVERWRITE_REBUILD) {
+          return incrementalRebuildPlan;
         }
+
+        return toPartitionInsertOverwrite(
+                basePlan, mdProvider, executorProvider, materialization, calcitePreMVRewritingPlan);
       }
 
       // Now we trigger some needed optimization rules again
       return applyPreJoinOrderingTransforms(basePlan, mdProvider, executorProvider);
     }
 
+    private RelNode toIncrementalRebuild(
+            RelNode basePlan,
+            RelMetadataProvider mdProvider,
+            RexExecutor executorProvider,
+            RelOptCluster optCluster,
+            RelNode calcitePreMVRewritingPlan,
+            HiveRelOptMaterialization materialization) {
+      // First we need to check if it is valid to convert to MERGE/INSERT INTO.
+      // If we succeed, we modify the plan and afterwards the AST.
+      // MV should be an acid table.
+      MaterializedViewRewritingRelVisitor visitor = new MaterializedViewRewritingRelVisitor();
+      visitor.go(basePlan);
+      if (visitor.isRewritingAllowed()) {
+        if (materialization.isSourceTablesUpdateDeleteModified()) {
+          if (visitor.isContainsAggregate()) {
+            if (visitor.getCountIndex() < 0) {
+              // count(*) is necessary for determining which rows should be deleted from the view;
+              // if the view definition does not have it, incremental rebuild cannot be performed, bail out
+              return calcitePreMVRewritingPlan;
+            }
+            return toAggregateInsertDeleteIncremental(basePlan, mdProvider, executorProvider);
+          } else {
+            return toJoinInsertDeleteIncremental(
+                    basePlan, mdProvider, executorProvider, optCluster, calcitePreMVRewritingPlan);
+          }
+        } else {
+          // Trigger rewriting to remove UNION branch with MV
+          if (visitor.isContainsAggregate()) {
+            return toAggregateInsertIncremental(basePlan, mdProvider, executorProvider, optCluster, calcitePreMVRewritingPlan);
+          } else {
+            return toJoinInsertIncremental(basePlan, mdProvider, executorProvider);
+          }
+        }
+      } else if (materialization.isSourceTablesUpdateDeleteModified()) {
+        return calcitePreMVRewritingPlan;

Review comment:
       Can we add a comment on why it is not necessary to apply `applyPreJoinOrderingTransforms` in this case?

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
##########
@@ -1252,4 +1252,18 @@ public static ImmutableBitSet extractRefs(Aggregate aggregate) {
     }
     return refs.build();
   }
+
+  public static Set<RexTableInputRef> findRexTableInputRefs(RexNode rexNode) {

Review comment:
       Remove this and use `RexUtil.gatherTableReferences` instead? You can pass a singleton list if you need to apply it to a single RexNode.

##########
File path: ql/src/test/results/clientpositive/llap/masking_mv_by_text_2.q.out
##########
@@ -25,6 +25,7 @@ POSTHOOK: type: CREATE_MATERIALIZED_VIEW
 POSTHOOK: Input: default@masking_test_n_mv
 POSTHOOK: Output: database:default
 POSTHOOK: Output: default@masking_test_view_n_mv
+POSTHOOK: Lineage: masking_test_view_n_mv.col0 EXPRESSION [(masking_test_n_mv)masking_test_n_mv.FieldSchema(name:key, type:int, comment:null), (masking_test_n_mv)masking_test_n_mv.FieldSchema(name:value, type:string, comment:null), ]

Review comment:
       Is this really part of this patch?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
