MaxGekk commented on a change in pull request #31475:
URL: https://github.com/apache/spark/pull/31475#discussion_r578171819
##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsDelete.java
##########
@@ -68,4 +69,15 @@ default boolean canDeleteWhere(Filter[] filters) {
   * @throws IllegalArgumentException If the delete is rejected due to required effort
   */
void deleteWhere(Filter[] filters);
+
+ Filter[] ALWAYS_TRUE_FILTER = new Filter[] { new AlwaysTrue() };
Review comment:
How about reverting this commit
https://github.com/apache/spark/pull/31475/commits/d1e5a18066f9fb2ff0ca1504e7c3f0802905febd
and implementing it as:
```java
default boolean truncateTable() {
  Filter[] filters = new Filter[] { new AlwaysTrue() };
  ...
```
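For illustration, here is a fuller sketch of what such a default method could look like, assuming it simply delegates to the existing `canDeleteWhere`/`deleteWhere` methods of this interface; the body is elided above, so the exact shape is an assumption, not the final implementation:
```java
// Hypothetical expansion of the snippet above; the real body is elided ("...")
// in this comment, so this only illustrates allocating the filter array per
// call instead of keeping a shared ALWAYS_TRUE_FILTER constant on the interface.
// Assumes Filter and AlwaysTrue from org.apache.spark.sql.sources are imported
// by SupportsDelete.java, and that the method lives next to canDeleteWhere and
// deleteWhere shown in the diff context.
default boolean truncateTable() {
  // Per-call allocation; negligible next to the cost of the truncation itself.
  Filter[] filters = new Filter[] { new AlwaysTrue() };
  boolean canDelete = canDeleteWhere(filters);
  if (canDelete) {
    deleteWhere(filters);
  }
  return canDelete;
}
```
The only point of the sketch is that the `AlwaysTrue` filter array is created locally inside `truncateTable()` rather than exposed as a shared constant.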
I am not sure this premature optimization is worth it. Compared to the truncation operation itself, the allocation overhead is small. If it ever becomes a hot spot, I believe the JVM will do all the work for us and optimize it.