aokolnychyi commented on a change in pull request #35395:
URL: https://github.com/apache/spark/pull/35395#discussion_r807417975



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Strategy.scala
##########
@@ -255,7 +255,13 @@ class DataSourceV2Strategy(session: SparkSession) extends Strategy with Predicat
     case OverwritePartitionsDynamic(r: DataSourceV2Relation, query, _, _, Some(write)) =>
       OverwritePartitionsDynamicExec(planLater(query), refreshCache(r), write) :: Nil
 
-    case DeleteFromTable(relation, condition) =>
+    case DeleteFromTableWithFilters(r: DataSourceV2Relation, filters) =>
+      DeleteFromTableExec(r.table.asDeletable, filters.toArray, refreshCache(r)) :: Nil
+
+    case DeleteFromTable(_, _, Some(rewritePlan)) =>
+      planLater(rewritePlan) :: Nil
+
+    case DeleteFromTable(relation, condition, None) =>

Review comment:
       That's a good question. Technically, `OptimizeMetadataOnlyDeleteFromTable` has almost identical logic, but it applies only to plans that reference `ReplaceData` and does not throw any exceptions. Another substantial difference is that the new rule performs the check in the optimizer, whereas the current check is done during physical planning.
   
   I am looking for feedback on this one.
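To make the dispatch order in the diff concrete, here is a minimal, self-contained sketch. These are simplified stand-ins, not the actual Spark classes: `LogicalPlan`, `DeleteFromTable`, and the executor strings below only model the shape of the strategy's pattern match (filters already translated take priority, then an attached rewrite plan, then the fallback that translates the raw condition at planning time).

```scala
// Simplified stand-ins for Spark's planner types (hypothetical, for illustration only).
sealed trait LogicalPlan
case class ReplaceData(name: String) extends LogicalPlan
case class DeleteFromTableWithFilters(filters: Seq[String]) extends LogicalPlan
case class DeleteFromTable(
    relation: String,
    condition: String,
    rewritePlan: Option[LogicalPlan]) extends LogicalPlan

def plan(p: LogicalPlan): String = p match {
  // 1. Filters were already translated (e.g. by an optimizer rule):
  //    execute a metadata-only delete directly against the source.
  case DeleteFromTableWithFilters(filters) =>
    s"DeleteFromTableExec(${filters.mkString(", ")})"
  // 2. A rewrite plan was attached earlier: plan that subtree instead.
  case DeleteFromTable(_, _, Some(rewrite)) =>
    plan(rewrite)
  // 3. No rewrite plan: fall back to translating the condition now.
  case DeleteFromTable(relation, condition, None) =>
    s"translateAndDelete($relation, $condition)"
  case ReplaceData(name) =>
    s"ReplaceDataExec($name)"
}

println(plan(DeleteFromTableWithFilters(Seq("id > 5"))))
println(plan(DeleteFromTable("t", "id > 5", Some(ReplaceData("t")))))
println(plan(DeleteFromTable("t", "id > 5", None)))
```

The case order matters: a `DeleteFromTable` carrying a rewrite plan must be matched before the bare fallback, which is exactly why the diff adds `None` to the final pattern.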




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


