rdblue commented on a change in pull request #2134:
URL: https://github.com/apache/iceberg/pull/2134#discussion_r562805335



##########
File path: spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/optimizer/OptimizeConditionsInRowLevelOperations.scala
##########
@@ -23,14 +23,16 @@ import org.apache.spark.sql.SparkSession
 import org.apache.spark.sql.catalyst.expressions.{Expression, Literal, SubqueryExpression}
 import org.apache.spark.sql.catalyst.plans.logical.{DeleteFromTable, Filter, LocalRelation, LogicalPlan}
 import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.catalyst.utils.IcebergTable
 import org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanRelation
 
 // we have to optimize expressions used in delete/update before we can rewrite row-level operations
 // otherwise, we will have to deal with redundant casts and will not detect noop deletes
 // it is a temp solution since we cannot inject rewrite of row-level ops after operator optimizations
 object OptimizeConditionsInRowLevelOperations extends Rule[LogicalPlan] {
   override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
-    case d @ DeleteFromTable(table, cond) if !SubqueryExpression.hasSubquery(cond.getOrElse(Literal.TrueLiteral)) =>
+    case d @ DeleteFromTable(table @ IcebergTable(_), cond)

Review comment:
       Here, the name `table` can be moved into the matcher as well.
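To illustrate the suggestion with a toy example (the `IcebergTable` object below is a hypothetical stand-in, not the real `org.apache.spark.sql.catalyst.utils.IcebergTable`; the sketch assumes the extractor's `unapply` returns the matched value itself): instead of binding the whole value with `table @ IcebergTable(_)` and discarding the extractor's result, the extractor can bind the name directly.

```scala
// Hypothetical stand-in for the IcebergTable extractor; it matches
// strings that look like Iceberg table names and returns the value itself.
object IcebergTable {
  def unapply(name: String): Option[String] =
    if (name.startsWith("iceberg.")) Some(name) else None
}

def describe(plan: String): String = plan match {
  // Before: bind the whole value, ignore what the extractor produced.
  //   case table @ IcebergTable(_) => s"iceberg table: $table"
  // After: let the extractor bind the name directly.
  case IcebergTable(table) => s"iceberg table: $table"
  case other               => s"not an iceberg table: $other"
}
```

In the PR this would read `case d @ DeleteFromTable(IcebergTable(table), cond)`, which is equivalent to `table @ IcebergTable(_)` only under the assumption above that the extractor returns the matched plan itself.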




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org
For additional commands, e-mail: issues-h...@iceberg.apache.org
