yihua commented on code in PR #13736:
URL: https://github.com/apache/hudi/pull/13736#discussion_r2292427332


##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/ShowCleansProcedure.scala:
##########
@@ -35,7 +35,8 @@ class ShowCleansProcedure(includePartitionMetadata: Boolean) extends BaseProcedu
   private val PARAMETERS = Array[ProcedureParameter](
     ProcedureParameter.required(0, "table", DataTypes.StringType),
     ProcedureParameter.optional(1, "limit", DataTypes.IntegerType, 10),
-    ProcedureParameter.optional(2, "showArchived", DataTypes.BooleanType, false)
+    ProcedureParameter.optional(2, "showArchived", DataTypes.BooleanType, false),
+    ProcedureParameter.optional(3, "filter", DataTypes.StringType, "")

Review Comment:
   Same on adding docs on `filter`



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/ShowCleansPlanProcedure.scala:
##########
@@ -184,7 +199,8 @@ object ShowCleansPlanProcedure {
   private val PARAMETERS = Array[ProcedureParameter](
     ProcedureParameter.required(0, "table", DataTypes.StringType),
     ProcedureParameter.optional(1, "limit", DataTypes.IntegerType, 10),
-    ProcedureParameter.optional(2, "showArchived", DataTypes.BooleanType, false)
+    ProcedureParameter.optional(2, "showArchived", DataTypes.BooleanType, false),
+    ProcedureParameter.optional(3, "filter", DataTypes.StringType, "")

Review Comment:
   Add docs, with examples, on how this `filter` should be used (the details can be added to the scaladocs of the `ShowCleansPlanProcedure` class)?
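   For example, the scaladoc could show something like this (a sketch; the procedure name `show_cleans_plan` and the column names `clean_time` and `total_files_deleted` are illustrative and must match the actual output schema):
   ```scala
   // Hypothetical usage examples of the `filter` parameter; filter columns
   // must come from the procedure's output schema.
   spark.sql("""CALL show_cleans_plan(table => 'hudi_tbl', filter => "clean_time > '20240101000000'")""")
   spark.sql("""CALL show_cleans_plan(table => 'hudi_tbl', filter => "total_files_deleted >= 10")""")
   ```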



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/HoodieProcedureFilterUtils.scala:
##########
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.hudi.command.procedures
+
+import org.apache.spark.sql.{Row, SparkSession}
+import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute
+import org.apache.spark.sql.catalyst.expressions.{Expression, GenericInternalRow}
+import org.apache.spark.sql.catalyst.util.DateTimeUtils
+import org.apache.spark.sql.types.{DataType, StructType}
+import org.apache.spark.unsafe.types.UTF8String
+
+import scala.collection.JavaConverters._
+import scala.util.{Failure, Success, Try}
+
+/**
+ * Utility object for filtering procedure results using SQL expressions.
+ *
+ * Supports all Spark SQL data types including:
+ * - Primitive types: Boolean, Byte, Short, Int, Long, Float, Double, String, Binary
+ * - Date/Time types: Date, Timestamp, Instant, LocalDate, LocalDateTime
+ * - Decimal types: BigDecimal with precision/scale
+ * - Complex types: Array, Map, Struct (Row)
+ * - Nested combinations of all above types
+ */
+object HoodieProcedureFilterUtils {
+
+  /**
+   * Evaluates a SQL filter expression against a sequence of rows.
+   *
+   * @param rows             The rows to filter
+   * @param filterExpression SQL expression string
+   * @param schema           The schema of the rows
+   * @param sparkSession     Spark session for expression parsing
+   * @return Filtered rows that match the expression
+   */
+  def evaluateFilter(
+                      rows: Seq[Row],

Review Comment:
   nit: in the Hudi repo, these are usually put on one line
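   A sketch of that style (same signature as in the diff; only the formatting changes):
   ```scala
   // One-line parameter list, as commonly formatted in the Hudi repo
   // (body elided with a placeholder; only the signature layout changes):
   def evaluateFilter(rows: Seq[Row], filterExpression: String, schema: StructType, sparkSession: SparkSession): Seq[Row] = {
     rows // placeholder for the existing body
   }
   ```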



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/HoodieProcedureFilterUtils.scala:
##########
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.hudi.command.procedures
+
+import org.apache.spark.sql.{Row, SparkSession}
+import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute
+import org.apache.spark.sql.catalyst.expressions.{Expression, GenericInternalRow}
+import org.apache.spark.sql.catalyst.util.DateTimeUtils
+import org.apache.spark.sql.types.{DataType, StructType}
+import org.apache.spark.unsafe.types.UTF8String
+
+import scala.collection.JavaConverters._
+import scala.util.{Failure, Success, Try}
+
+/**
+ * Utility object for filtering procedure results using SQL expressions.
+ *
+ * Supports all Spark SQL data types including:
+ * - Primitive types: Boolean, Byte, Short, Int, Long, Float, Double, String, Binary
+ * - Date/Time types: Date, Timestamp, Instant, LocalDate, LocalDateTime
+ * - Decimal types: BigDecimal with precision/scale
+ * - Complex types: Array, Map, Struct (Row)
+ * - Nested combinations of all above types
+ */
+object HoodieProcedureFilterUtils {
+
+  /**
+   * Evaluates a SQL filter expression against a sequence of rows.
+   *
+   * @param rows             The rows to filter
+   * @param filterExpression SQL expression string
+   * @param schema           The schema of the rows
+   * @param sparkSession     Spark session for expression parsing
+   * @return Filtered rows that match the expression
+   */
+  def evaluateFilter(
+                      rows: Seq[Row],
+                      filterExpression: String,
+                      schema: StructType,
+                      sparkSession: SparkSession
+                    ): Seq[Row] = {
+
+    if (filterExpression == null || filterExpression.trim.isEmpty) {
+      rows
+    } else {
+      Try {
+        val parsedExpr = sparkSession.sessionState.sqlParser.parseExpression(filterExpression)
+
+        rows.filter { row =>
+          evaluateExpressionOnRow(parsedExpr, row, schema)
+        }
+      } match {
+        case Success(filteredRows) => filteredRows
+        case Failure(exception) =>
+          throw new IllegalArgumentException(
+            s"Failed to parse or evaluate filter expression 
'$filterExpression': ${exception.getMessage}",
+            exception
+          )
+      }
+    }
+  }
+
+  private def evaluateExpressionOnRow(
+                                       expression: Expression,
+                                       row: Row,
+                                       schema: StructType
+                                     ): Boolean = {
+
+    val internalRow = convertRowToInternalRow(row, schema)
+
+    Try {
+      // First pass: bind attributes
+      val attributeBound = expression.transform {
+        case attr: org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute =>
+          try {
+            val fieldIndex = schema.fieldIndex(attr.name)
+            val field = schema.fields(fieldIndex)
+            org.apache.spark.sql.catalyst.expressions.BoundReference(fieldIndex, field.dataType, field.nullable)
+          } catch {
+            case _: IllegalArgumentException => attr
+          }
+      }
+
+      // Second pass: resolve functions
+      val functionResolved = attributeBound.transform {
+        case unresolvedFunc: org.apache.spark.sql.catalyst.analysis.UnresolvedFunction =>
+          unresolvedFunc.nameParts.head.toLowerCase match {
+            case "upper" =>
+              if (unresolvedFunc.arguments.length == 1) {
+                org.apache.spark.sql.catalyst.expressions.Upper(unresolvedFunc.arguments.head)
+              } else {
+                unresolvedFunc
+              }
+            case "lower" =>
+              if (unresolvedFunc.arguments.length == 1) {
+                org.apache.spark.sql.catalyst.expressions.Lower(unresolvedFunc.arguments.head)
+              } else {
+                unresolvedFunc
+              }
+            case "length" | "len" =>
+              if (unresolvedFunc.arguments.length == 1) {
+                org.apache.spark.sql.catalyst.expressions.Length(unresolvedFunc.arguments.head)
+              } else {
+                unresolvedFunc
+              }
+            case "trim" =>

Review Comment:
   The function is manually mapped to a Spark expression. Is it possible to make it go through Spark's own logic for resolving functions? @jonvex do you know?
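   One possible alternative (a sketch, not verified against this code path): round-trip through a DataFrame so that Spark's own analyzer resolves both attributes and functions, instead of mapping function names by hand:
   ```scala
   import scala.collection.JavaConverters._

   import org.apache.spark.sql.{Row, SparkSession}
   import org.apache.spark.sql.types.StructType

   // Sketch: let Spark's parser/analyzer handle attribute binding and function
   // resolution by filtering a DataFrame with the raw expression string.
   // This runs a local Spark job per call, which may be acceptable for a
   // debugging-oriented procedure.
   def evaluateFilterViaDataFrame(rows: Seq[Row], filterExpression: String,
                                  schema: StructType, spark: SparkSession): Seq[Row] = {
     spark.createDataFrame(rows.asJava, schema)
       .filter(filterExpression) // any built-in SQL function is available here
       .collect()
       .toSeq
   }
   ```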



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/HoodieProcedureFilterUtils.scala:
##########
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.hudi.command.procedures
+
+import org.apache.spark.sql.{Row, SparkSession}
+import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute
+import org.apache.spark.sql.catalyst.expressions.{Expression, GenericInternalRow}
+import org.apache.spark.sql.catalyst.util.DateTimeUtils
+import org.apache.spark.sql.types.{DataType, StructType}
+import org.apache.spark.unsafe.types.UTF8String
+
+import scala.collection.JavaConverters._
+import scala.util.{Failure, Success, Try}
+
+/**
+ * Utility object for filtering procedure results using SQL expressions.
+ *
+ * Supports all Spark SQL data types including:
+ * - Primitive types: Boolean, Byte, Short, Int, Long, Float, Double, String, Binary
+ * - Date/Time types: Date, Timestamp, Instant, LocalDate, LocalDateTime
+ * - Decimal types: BigDecimal with precision/scale
+ * - Complex types: Array, Map, Struct (Row)
+ * - Nested combinations of all above types
+ */
+object HoodieProcedureFilterUtils {
+
+  /**
+   * Evaluates a SQL filter expression against a sequence of rows.
+   *
+   * @param rows             The rows to filter
+   * @param filterExpression SQL expression string
+   * @param schema           The schema of the rows
+   * @param sparkSession     Spark session for expression parsing
+   * @return Filtered rows that match the expression
+   */
+  def evaluateFilter(
+                      rows: Seq[Row],
+                      filterExpression: String,
+                      schema: StructType,
+                      sparkSession: SparkSession
+                    ): Seq[Row] = {
+
+    if (filterExpression == null || filterExpression.trim.isEmpty) {
+      rows
+    } else {
+      Try {
+        val parsedExpr = sparkSession.sessionState.sqlParser.parseExpression(filterExpression)
+
+        rows.filter { row =>
+          evaluateExpressionOnRow(parsedExpr, row, schema)
+        }
+      } match {
+        case Success(filteredRows) => filteredRows
+        case Failure(exception) =>
+          throw new IllegalArgumentException(
+            s"Failed to parse or evaluate filter expression 
'$filterExpression': ${exception.getMessage}",
+            exception
+          )
+      }
+    }
+  }
+
+  private def evaluateExpressionOnRow(
+                                       expression: Expression,
+                                       row: Row,
+                                       schema: StructType
+                                     ): Boolean = {
+
+    val internalRow = convertRowToInternalRow(row, schema)
+
+    Try {
+      // First pass: bind attributes
+      val attributeBound = expression.transform {
+        case attr: org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute =>
+          try {
+            val fieldIndex = schema.fieldIndex(attr.name)
+            val field = schema.fields(fieldIndex)
+            org.apache.spark.sql.catalyst.expressions.BoundReference(fieldIndex, field.dataType, field.nullable)
+          } catch {
+            case _: IllegalArgumentException => attr
+          }
+      }
+
+      // Second pass: resolve functions
+      val functionResolved = attributeBound.transform {
+        case unresolvedFunc: org.apache.spark.sql.catalyst.analysis.UnresolvedFunction =>
+          unresolvedFunc.nameParts.head.toLowerCase match {
+            case "upper" =>
+              if (unresolvedFunc.arguments.length == 1) {
+                org.apache.spark.sql.catalyst.expressions.Upper(unresolvedFunc.arguments.head)
+              } else {
+                unresolvedFunc
+              }
+            case "lower" =>
+              if (unresolvedFunc.arguments.length == 1) {
+                org.apache.spark.sql.catalyst.expressions.Lower(unresolvedFunc.arguments.head)
+              } else {
+                unresolvedFunc
+              }
+            case "length" | "len" =>
+              if (unresolvedFunc.arguments.length == 1) {
+                org.apache.spark.sql.catalyst.expressions.Length(unresolvedFunc.arguments.head)
+              } else {
+                unresolvedFunc
+              }
+            case "trim" =>

Review Comment:
   For now, as long as this util is only used by Spark procedures for debugging 
purposes, it's OK.


