HyukjinKwon commented on a change in pull request #30562:
URL: https://github.com/apache/spark/pull/30562#discussion_r534064472
##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsDelete.java
##########
@@ -28,6 +28,25 @@
*/
@Evolving
public interface SupportsDelete {
+
+ /**
+ * Checks whether it is possible to delete data from a data source table that matches filter
+ * expressions.
+ * <p>
+ * Rows should be deleted from the data source iff all of the filter expressions match.
+ * That is, the expressions must be interpreted as a set of filters that are ANDed together.
+ * <p>
+ * Spark will call this method to check if the delete is possible without significant effort.
Review comment:
I don't think the planning-time vs. runtime distinction alone justifies an API that
does the same thing as an existing one. `SupportsDelete.deleteWhere` already documents
its failure cases, and it can fail fast as well (although at runtime).
Since this is planned to be used for something else, as you elaborated, I think it would
be best to make sure we all know the full context. We could also consider other shapes
for this API, for example returning the unhandled filters so the error message can show
which filters are not supported.
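For illustration only, a rough sketch of what that alternative shape could look like; the interface name `SupportsDeleteWithReporting`, the method name `unhandledDeleteFilters`, and the default implementation below are all hypothetical, not something this PR proposes:

```java
import org.apache.spark.sql.sources.Filter;

/**
 * Hypothetical sketch, not part of this PR: instead of a boolean capability check,
 * the source reports which of the given filters it cannot handle, so Spark can name
 * the unsupported filters in the error message.
 */
public interface SupportsDeleteWithReporting {

  /**
   * Returns the subset of the given filters that this source cannot use to delete
   * matching rows. An empty array means the delete can be fully handled.
   */
  default Filter[] unhandledDeleteFilters(Filter[] filters) {
    // Default assumes everything is handled; sources override this to report the
    // filters they cannot evaluate for deletes.
    return new Filter[0];
  }

  /**
   * Deletes rows that match all of the given filter expressions (ANDed together),
   * mirroring the existing SupportsDelete#deleteWhere contract.
   */
  void deleteWhere(Filter[] filters);
}
```

Spark could then call `unhandledDeleteFilters` at planning time and, if the result is non-empty, fail with a message listing exactly those filters rather than a generic "delete not supported" error.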
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]