soumyakanti3578 commented on code in PR #5196:
URL: https://github.com/apache/hive/pull/5196#discussion_r2040366039


##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java:
##########
@@ -292,7 +293,11 @@ protected RelNode applyMaterializedViewRewriting(RelOptPlanner planner, RelNode
       basePlan = executeProgram(basePlan, program.build(), mdProvider, executorProvider, singletonList(materialization));
 
       program = new HepProgramBuilder();
-      generatePartialProgram(program, false, HepMatchOrder.DEPTH_FIRST, HivePushdownSnapshotFilterRule.INSTANCE);
+      generatePartialProgram(program,
+          false,
+          HepMatchOrder.DEPTH_FIRST,
+          HiveSearchRules.FILTER_SEARCH_EXPAND,

Review Comment:
   We don't expect SEARCH in PROJECTs and JOINs here?
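   For context, a minimal sketch (not from this PR, purely illustrative) of query shapes where an IN/SEARCH-style predicate would initially sit in a Project or a Join rather than a Filter; whether such shapes can actually reach this rebuild path is the question:

   ```sql
   -- IN in the SELECT list: the boolean expression lands in a Project.
   SELECT key, key IN ('0', '5') AS is_small_key
   FROM src;

   -- IN inside the join condition: before pushdown the predicate can sit on the Join.
   SELECT a.key, b.value
   FROM src a
   JOIN src b
     ON a.key = b.key AND b.key IN ('0', '100');
   ```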



##########
iceberg/iceberg-handler/src/test/results/positive/query_iceberg_metadata_of_unpartitioned_table.q.out:
##########


Review Comment:
   @zabetak Looks like this was added by mistake?



##########
ql/src/test/queries/clientpositive/search_nullability.q:
##########


Review Comment:
   Do we still need this test, given that we won't see the `SEARCH` operator in the plans?



##########
ql/src/java/org/apache/hadoop/hive/ql/parse/type/HiveFunctionHelper.java:
##########


Review Comment:
   There are a couple of `TODO`s here, but I think these will be resolved later 
in a separate PR?



##########
ql/src/test/results/clientpositive/llap/cbo_join_transitive_pred_loop_1.q.out:
##########
@@ -67,6 +67,6 @@ HiveProject(month=[$0], con_usd=[$2])
           HiveFilter(condition=[=($0, 202110)])
             HiveTableScan(table=[[default, test2]], table:alias=[test2])
     HiveProject(mth=[$0], con_usd=[$1])
-      HiveFilter(condition=[IS NOT NULL($0)])
+      HiveFilter(condition=[AND(IS NOT NULL($0), IN($0, 202110, 202503))])
         HiveTableScan(table=[[default, test3]], table:alias=[d])

Review Comment:
   `202503` here is most probably a bug.
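   For context, a hypothetical reconstruction of the kind of query involved (the actual test query is not in this hunk, and the join condition below is my assumption):

   ```sql
   -- test2 is filtered to month = 202110 (see the HiveFilter above), so a
   -- transitively inferred predicate on d.mth should only mention 202110;
   -- the extra 202503 in the IN list is what looks wrong.
   SELECT t2.month, d.con_usd
   FROM test2 t2
   JOIN test3 d ON t2.month = d.mth
   WHERE t2.month = 202110;
   ```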



##########
iceberg/iceberg-handler/src/test/results/positive/llap/llap_iceberg_read_orc.q.out:
##########
@@ -359,11 +359,11 @@ STAGE PLANS:
             Map Operator Tree:
                 TableScan
                   alias: o
-                  filterExpr: ((quantity NOT BETWEEN 39 AND 0 or quantity NOT BETWEEN 69 AND 39 or (quantity > 70)) and (((quantity > 0) and (quantity < 39)) or ((quantity > 39) and (quantity < 69)) or (quantity > 70)) and itemid is not null) (type: boolean)
+                  filterExpr: ((((quantity > 0) and (quantity < 39)) or ((quantity > 39) and (quantity < 69)) or (quantity > 70)) and quantity is not null and itemid is not null) (type: boolean)
                   probeDecodeDetails: cacheKey:HASH_MAP_MAPJOIN_29_container, bigKeyColName:itemid, smallTablePos:1, keyRatio:0.9523809523809523

Review Comment:
   Just highlighting here that we are now expanding `NOT BETWEEN`s.
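   For anyone scanning the diff, the rewrite being applied is the usual range expansion (the table name and bounds below are illustrative, not the test's exact literals):

   ```sql
   -- "x NOT BETWEEN lo AND hi" is expanded into its range disjunction:
   SELECT * FROM orders WHERE quantity NOT BETWEEN 39 AND 69;
   -- is planned the same as
   SELECT * FROM orders WHERE quantity < 39 OR quantity > 69;
   -- (both forms reject the row when quantity is NULL)
   ```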



##########
iceberg/iceberg-handler/src/test/results/positive/llap/iceberg_bucket_map_join_7.q.out:
##########
@@ -150,27 +150,27 @@ Stage-0
       File Output Operator [FS_61]
         Limit [LIM_60] (rows=20 width=447)
           Number of rows:20
-          Select Operator [SEL_59] (rows=791 width=447)
+          Select Operator [SEL_59] (rows=473 width=447)
             Output:["_col0","_col1","_col2","_col3","_col4"]
           <-Map 1 [SIMPLE_EDGE] vectorized, llap
             SHUFFLE [RS_58]
-              Top N Key Operator [TNK_57] (rows=791 width=447)
+              Top N Key Operator [TNK_57] (rows=473 width=447)
                 keys:_col0,top n:20
-                Map Join Operator [MAPJOIN_56] (rows=791 width=447)
+                Map Join Operator [MAPJOIN_56] (rows=473 width=447)
                  BucketMapJoin:true,Conds:SEL_55._col0, _col1=RS_53._col0, _col1(Inner),Output:["_col0","_col1","_col2","_col3","_col4"]
                 <-Map 3 [CUSTOM_EDGE] vectorized, llap
                   MULTICAST [RS_53]
                     PartitionCols:_col0, _col1
-                    Select Operator [SEL_52] (rows=500 width=178)
+                    Select Operator [SEL_52] (rows=387 width=178)
                       Output:["_col0","_col1"]
-                      Filter Operator [FIL_51] (rows=500 width=178)
-                        predicate:((key <> '0') and (key <> '100') and value is not null)
+                      Filter Operator [FIL_51] (rows=387 width=178)
+                        predicate:(((key < '0') or ((key > '0') and (key < '100')) or (key > '100')) and key is not null and value is not null)
                         TableScan [TS_3] (rows=500 width=178)
                           default@src,b,Tbl:COMPLETE,Col:COMPLETE,Output:["key","value"]
-                <-Select Operator [SEL_55] (rows=500 width=269)
+                <-Select Operator [SEL_55] (rows=387 width=269)
                     Output:["_col0","_col1","_col2"]
-                    Filter Operator [FIL_54] (rows=500 width=269)
-                      predicate:((key1 <> '0') and (key1 <> '100') and key2 is not null)
+                    Filter Operator [FIL_54] (rows=387 width=269)
+                      predicate:(((key1 < '0') or ((key1 > '0') and (key1 < '100')) or (key1 > '100')) and key1 is not null and key2 is not null)
                       TableScan [TS_0] (rows=500 width=269)

Review Comment:
   I saw this in other files too, but it looks like we are splitting `<>` now.
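   To spell out what the plan change above amounts to, using the `src` table from this diff:

   ```sql
   -- Old predicate
   SELECT key, value FROM src
   WHERE key <> '0' AND key <> '100' AND value IS NOT NULL;

   -- is now planned as the equivalent range disjunction
   SELECT key, value FROM src
   WHERE ((key < '0') OR (key > '0' AND key < '100') OR (key > '100'))
     AND key IS NOT NULL AND value IS NOT NULL;
   ```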



##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/RexNodeConverter.java:
##########
@@ -268,6 +270,7 @@ private RexNode convert(ExprNodeGenericFuncDesc func) throws SemanticException {
           // except complex types
           // Rewrite to OR is done only if number of operands are less than
           // the threshold configured
+          // TODO Is this rewrite unconditional?

Review Comment:
   Do we need to resolve this `TODO` before merging?
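   For reference, the rewrite the surrounding comment describes is the IN-to-OR expansion; a small illustrative pair (whether this path still honours the configured operand-count threshold is exactly what the TODO is asking):

   ```sql
   -- Point-lookup form
   SELECT * FROM src WHERE key IN ('0', '5', '100');
   -- Expanded-to-OR form
   SELECT * FROM src WHERE key = '0' OR key = '5' OR key = '100';
   ```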



##########
ql/src/test/queries/clientpositive/search.q:
##########


Review Comment:
   Or do we even need this test file anymore?



##########
ql/src/test/results/clientpositive/llap/bucketpruning1.q.out:
##########
@@ -1656,7 +1656,7 @@ POSTHOOK: type: QUERY
 POSTHOOK: Input: default@srcbucket_pruned
 #### A masked pattern was here ####
 OPTIMIZED SQL: SELECT *
-FROM (VALUES  (NULL, NULL, NULL)) AS `t` (`key`, `value`, `ds`)
+FROM (SELECT NULL AS `key`, NULL AS `value`, NULL AS `ds`) AS `t`

Review Comment:
   We are not creating `VALUES` here. Is this an optimization since all the 
values are `NULL`s?



##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RangeConverter.java:
##########
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.optimizer.calcite;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.util.RangeSets;
+import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveBetween;
+
+import java.util.ArrayList;
+import java.util.List;
+
+class RangeConverter<C extends Comparable<C>> implements RangeSets.Consumer<C> {
+
+  protected final RexBuilder rexBuilder;
+  protected final RelDataType type;
+  protected final RexNode ref;
+  public final List<RexNode> inNodes;
+  public final List<RexNode> nodes;

Review Comment:
   nit: Maybe we should rename `inNodes` to something more descriptive like `inNodeLiterals` or `inLiterals`. It also looks like `inNodes` will only hold `RexLiteral`s now, so its element type could be narrowed as well; earlier we were also storing `ref` of type `RexInputRef`, which is why `RexNode` was needed.



##########
ql/src/test/queries/clientpositive/search.q:
##########


Review Comment:
   This file may need some cleaning up



##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ExprNodeConverter.java:
##########
@@ -203,6 +207,32 @@ public ExprNodeDesc visitCall(RexCall call) {
       for (RexNode operand : call.operands) {
         args.add(operand.accept(this));
       }
+    } else if (call.getKind() == SqlKind.SEARCH) {

Review Comment:
   Resolving comment as this is no longer relevant.



##########
ql/src/test/results/clientpositive/llap/bucketsortoptimize_insert_7.q.out:
##########
@@ -95,10 +95,10 @@ STAGE PLANS:
             Map Operator Tree:
                 TableScan
                   alias: b
-                  filterExpr: (key) IN (0, 5) (type: boolean)
+                  filterExpr: ((key) IN (0, 5) and key is not null) (type: boolean)
                   Statistics: Num rows: 84 Data size: 7896 Basic stats: COMPLETE Column stats: COMPLETE

Review Comment:
   There is an extra `is not null` here, but it may be alright since `key` is a join key.
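   A tiny sketch of why the extra conjunct is harmless for an inner join (the table and column names below are illustrative, not the test's):

   ```sql
   -- Rows with a NULL join key can never satisfy an inner equi-join,
   -- so adding "k is not null" next to the IN predicate does not change
   -- the result; it only makes the null filtering explicit on the scan.
   SELECT a.k, b.v
   FROM t1 a
   JOIN t2 b ON a.k = b.k
   WHERE b.k IN (0, 5);
   ```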



##########
packaging/pom.xml:
##########
@@ -173,6 +173,10 @@
                   \Qhttps://www.eclipse.org/org/documents/epl-v10.php\E
                   \Qhttp://www.eclipse.org/org/documents/epl-v10.php\E
                 </EPL-1.0>
+                <EPL-2.0>
+                  \Qhttps://raw.githubusercontent.com/locationtech/jts/master/LICENSE_EPLv2.txt\E
+                  \Qhttps://www.eclipse.org/org/documents/epl-2.0/EPL-2.0.txt\E
+                </EPL-2.0>

Review Comment:
   @zabetak @kasakrisz What do you think about this? Can this be resolved?



##########
ql/src/java/org/apache/hadoop/hive/ql/parse/type/RexNodeExprFactory.java:
##########
@@ -615,8 +616,9 @@ protected RexNode createConstantExpr(TypeInfo typeInfo, Object constantValue)
           SqlStdOperatorTable.ROW,
           operands);
     }
-    return rexBuilder.makeLiteral(constantValue,
-        TypeConverter.convert(typeInfo, rexBuilder.getTypeFactory()), false);
+    RelDataType finalType = TypeConverter.convert(typeInfo, rexBuilder.getTypeFactory());
+    boolean allowCast = finalType.getFamily() == SqlTypeFamily.CHARACTER;
+    return rexBuilder.makeLiteral(constantValue, finalType, allowCast);

Review Comment:
   Resolving this. Please reopen if you think we need to change this.



##########
ql/src/test/results/clientpositive/llap/concat_op.q.out:
##########


Review Comment:
   Looks like we have `{fields: {fields: [...] } }` here?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

