aokolnychyi commented on a change in pull request #4141:
URL: https://github.com/apache/iceberg/pull/4141#discussion_r810427735



##########
File path: spark/v3.2/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestExpireSnapshotsProcedure.java
##########
@@ -224,4 +236,60 @@ public void testConcurrentExpireSnapshotsWithInvalidInput() {
             catalogName, tableIdent, -1));
 
   }
+
+  @Test
+  public void testExpireDeleteFiles() throws Exception {
+    sql("CREATE TABLE %s (id bigint NOT NULL, data string) USING iceberg TBLPROPERTIES" +
+        "('format-version'='2', 'write.delete.mode'='merge-on-read')", tableName);
+
+    sql("INSERT INTO TABLE %s VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')", tableName);

Review comment:
       I am afraid we don't know how many files this insert will produce, so the row with ID = 1 may end up in a file of its own (unlikely but possible). If that file contains only ID = 1, the DELETE operation below becomes a metadata-only delete that drops the whole file, no delete file is written, and the test fails.
   
   I think it would be safer to use a typed `Dataset` of `SimpleRecord`. That way, we can call `coalesce(1)` before writing to make sure we produce only one data file, so the subsequent DELETE operation must produce a delete file.
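   
   A minimal sketch of that approach (illustrative only, not the final patch; it assumes the surrounding test class provides `spark`, `tableName`, and the `sql(...)` helper, as Iceberg's Spark extension test base classes do):
   
   ```java
   import java.util.List;
   import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
   import org.apache.iceberg.spark.SimpleRecord;
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   
   // Build a typed set of rows instead of relying on INSERT's file layout.
   List<SimpleRecord> records = ImmutableList.of(
       new SimpleRecord(1, "a"),
       new SimpleRecord(2, "b"),
       new SimpleRecord(3, "c"),
       new SimpleRecord(4, "d"));
   
   Dataset<Row> df = spark.createDataFrame(records, SimpleRecord.class);
   
   // coalesce(1) forces a single write task, so exactly one data file is produced.
   df.coalesce(1).writeTo(tableName).append();
   
   // All four rows now live in one file, so deleting one row cannot be satisfied
   // by dropping a whole file; merge-on-read must write a delete file instead.
   sql("DELETE FROM %s WHERE id = 1", tableName);
   ```
   
   With this setup the test no longer depends on how Spark happens to split the INSERT across tasks.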




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


