yihua commented on code in PR #5943:
URL: https://github.com/apache/hudi/pull/5943#discussion_r929446058


##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/TestAlterTableDropPartition.scala:
##########
@@ -211,7 +211,7 @@ class TestAlterTableDropPartition extends HoodieSparkSqlTestBase {
 
     // specify duplicate partition columns
     checkExceptionContain(s"alter table $tableName drop partition (dt='2021-10-01', dt='2021-10-02')")(
-      "Found duplicate keys 'dt'")
+      "Found duplicate keys ")

Review Comment:
   Could we do strict checks based on Spark version like `TestParquetColumnProjection`, so that each Spark version has a different check:
   ```
   // Stats for the reads fetching only _projected_ columns (note how amount of bytes read
   // increases along w/ the # of columns)
   val projectedColumnsReadStats: Array[(String, Long)] =
     if (HoodieSparkUtils.isSpark3)
       Array(
         ("rider", 2363),
         ("rider,driver", 2463),
         ("rider,driver,tip_history", 3428))
     else if (HoodieSparkUtils.isSpark2)
       Array(
         ("rider", 2474),
         ("rider,driver", 2614),
         ("rider,driver,tip_history", 3629))
     else
       fail("Only Spark 3 and Spark 2 are currently supported")
   ```
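   
   Applied to this assertion, that could look roughly like the sketch below. The per-version messages are assumptions (Spark 3.3 appears to change the quoting of the duplicate key in this error, which is presumably why the check was loosened), and `gteqSpark3_3` is a hypothetical helper alongside the existing `isSpark3`/`isSpark2` checks:
   ```
   // Hypothetical version-gated assertion; the exact per-version quoting
   // ('dt' vs `dt`) is an assumption that would need to be verified.
   val expectedDuplicateKeyError: String =
     if (HoodieSparkUtils.gteqSpark3_3) // assumed helper, analogous to isSpark3
       "Found duplicate keys `dt`"
     else
       "Found duplicate keys 'dt'"
   
   checkExceptionContain(
     s"alter table $tableName drop partition (dt='2021-10-01', dt='2021-10-02')")(
     expectedDuplicateKeyError)
   ```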



##########
hudi-spark-datasource/hudi-spark2/src/main/scala/org/apache/spark/sql/adapter/Spark2Adapter.scala:
##########
@@ -122,6 +123,26 @@ class Spark2Adapter extends SparkAdapter {
     InterpretedPredicate.create(e)
   }
 
+  override def createHoodieFileScanRDD(sparkSession: SparkSession,
+                                       readFunction: PartitionedFile => Iterator[InternalRow],
+                                       filePartitions: Seq[FilePartition],
+                                       readDataSchema: StructType,
+                                       metadataColumns: Seq[AttributeReference] = Seq.empty): FileScanRDD = {
+    new Spark2HoodieFileScanRDD(sparkSession, readFunction, filePartitions)
+  }
+
+  override def resolveDeleteFromTable(dft: Command,

Review Comment:
   nit: `dft`: could we use a more descriptive name, consistently across classes?



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/DeleteHoodieTableCommand.scala:
##########
@@ -36,9 +37,13 @@ case class DeleteHoodieTableCommand(deleteTable: DeleteFromTable) extends Hoodie
 
     // Remove meta fields from the data frame
     var df = removeMetaFields(Dataset.ofRows(sparkSession, table))
-    if (deleteTable.condition.isDefined) {
-      df = df.filter(Column(deleteTable.condition.get))
+    // SPARK-38626 DeleteFromTable.condition is changed from Option[Expression] to Expression in Spark 3.3
+    val condition: Expression = deleteTable.condition match {
+      case option: Option[Expression] => option.getOrElse(null)
+      case expr: Expression => expr
+      case _ => throw new IllegalArgumentException(s"DeleteFromTable.condition has to be either Option[Expression] or Expression")

Review Comment:
   Is this because of API compatibility, so that both `Option[Expression]` and `Expression` typed values can exist? Nevertheless, this doesn't look clean.
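   
   If so, one cleaner alternative might be to push the extraction into the version-specific adapter modules, e.g. (a minimal sketch; `extractDeleteCondition` is an assumed name, not an existing API):
   ```
   // In the Spark <= 3.2 adapters, where DeleteFromTable.condition is Option[Expression]:
   override def extractDeleteCondition(dft: DeleteFromTable): Option[Expression] =
     dft.condition
   
   // In the Spark 3.3 adapter, where DeleteFromTable.condition is Expression:
   override def extractDeleteCondition(dft: DeleteFromTable): Option[Expression] =
     Option(dft.condition)
   ```
   Each override compiles only against its own Spark version's binaries, which is what the adapter modules are for; `DeleteHoodieTableCommand` would then call `sparkAdapter.extractDeleteCondition(deleteTable)` and never match on runtime types.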



##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/analysis/HoodieAnalysis.scala:
##########
@@ -421,12 +433,10 @@ case class HoodieResolveReferences(sparkSession: SparkSession) extends Rule[Logi
       UpdateTable(table, resolvedAssignments, resolvedCondition)
 
     // Resolve Delete Table
-    case DeleteFromTable(table, condition)
+    case dft @ DeleteFromTable(table, condition)
       if sparkAdapter.isHoodieTable(table, sparkSession) && table.resolved =>
-      // Resolve condition
-      val resolvedCondition = condition.map(resolveExpressionFrom(table)(_))
-      // Return the resolved DeleteTable
-      DeleteFromTable(table, resolvedCondition)
+      val resolveExpression = resolveExpressionFrom(table, None)_
+      sparkAdapter.resolveDeleteFromTable(dft, resolveExpression)

Review Comment:
   Do we want to wrap the whole thing in a SparkAdapter method?
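   
   For reference, a sketch of what that could look like if the entire case arm delegated to the adapter (hypothetical signature; the names here are illustrative, not the PR's actual API):
   ```
   // In HoodieResolveReferences: the arm shrinks to a single delegation call.
   case dft @ DeleteFromTable(table, _)
     if sparkAdapter.isHoodieTable(table, sparkSession) && table.resolved =>
     sparkAdapter.resolveDeleteFromTable(sparkSession, dft, table)
   
   // In SparkAdapter: a hypothetical hook that owns expression resolution too.
   def resolveDeleteFromTable(sparkSession: SparkSession,
                              dft: Command,
                              table: LogicalPlan): LogicalPlan
   ```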


