wypoon commented on a change in pull request #2512:
URL: https://github.com/apache/iceberg/pull/2512#discussion_r619939137
##########
File path:
spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/ReplaceData.scala
##########
@@ -25,4 +25,12 @@ import org.apache.spark.sql.connector.write.BatchWrite
case class ReplaceData(
    table: NamedRelation,
    write: BatchWrite,
-    query: LogicalPlan) extends V2WriteCommand
+    query: LogicalPlan) extends V2WriteCommand {
+
+  def isByName: Boolean = false
Review comment:
I am not sure whether we want this to be true or false; I suspect it doesn't
really matter here.
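For anyone following along: `isByName` tells Spark's analyzer whether to match the query's output columns to the table schema by column name or by ordinal position. A toy Java sketch of that distinction (the `resolve` helper and its behavior are illustrative, not Spark's actual resolution logic):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ColumnResolution {
  // Match query output columns to the table schema either by name or by position.
  static List<String> resolve(List<String> tableCols, List<String> queryCols, boolean byName) {
    if (byName) {
      // By-name: reorder the query's columns to follow the table's column order,
      // failing if a table column is missing from the query output.
      return tableCols.stream()
          .map(c -> {
            if (!queryCols.contains(c)) {
              throw new IllegalArgumentException("Missing column: " + c);
            }
            return c;
          })
          .collect(Collectors.toList());
    }
    // By-position: columns are taken in the order the query produced them.
    return queryCols;
  }

  public static void main(String[] args) {
    List<String> table = Arrays.asList("id", "data");
    List<String> query = Arrays.asList("data", "id");
    System.out.println(resolve(table, query, true));   // [id, data]
    System.out.println(resolve(table, query, false));  // [data, id]
  }
}
```

For `ReplaceData`, the rewritten query is built from the table's own scan, so the two modes should produce the same assignment, which is presumably why the choice doesn't matter.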
##########
File path: spark3/src/test/java/org/apache/iceberg/spark/sql/TestDeleteFrom.java
##########
@@ -41,6 +43,7 @@ public void removeTables() {
@Test
public void testDeleteFromUnpartitionedTable() {
+ Assume.assumeFalse(Spark3VersionUtil.isSpark31());
Review comment:
@aokolnychyi this test and the next one fail on Spark 3.1. In
https://github.com/apache/spark/blob/v3.1.1/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Strategy.scala#L250-L253,
when `canDeleteWhere` returns false, Spark makes no attempt to rewrite the
query and simply throws an exception. Is Spark supposed to fall back to
rewriting the query when `SupportsDelete#canDeleteWhere` returns false? If so,
that part has not been implemented in Spark yet.
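The behavior described above can be sketched roughly as follows (a simplified Java paraphrase of the strategy's delete branch, not Spark's actual code; the interface shape mirrors `SupportsDelete`, but the exception type and message are stand-ins):

```java
// Simplified sketch of the delete dispatch in Spark 3.1's DataSourceV2Strategy:
// if the source cannot delete by filter, Spark throws immediately instead of
// falling back to rewriting the query as a row-level operation.
interface SupportsDelete {
  boolean canDeleteWhere(String[] filters);

  void deleteWhere(String[] filters);
}

public class DeleteDispatchSketch {
  static void planDelete(SupportsDelete table, String[] filters) {
    if (table.canDeleteWhere(filters)) {
      // Metadata-only delete: the source handles the filters directly.
      table.deleteWhere(filters);
    } else {
      // No rewrite fallback here -- this is the path that makes the tests fail.
      throw new IllegalStateException("Cannot delete from table with the given filters");
    }
  }
}
```

Under this reading, Iceberg's `canDeleteWhere` returning false (because the delete requires rewriting data files) hits the throwing branch, which explains the failures on 3.1.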