xloya commented on a change in pull request #3724:
URL: https://github.com/apache/iceberg/pull/3724#discussion_r768259735



##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/actions/TestRewriteDataFilesAction.java
##########
@@ -234,12 +236,220 @@ public void testBinPackWithDeletes() throws Exception {
     Assert.assertEquals("7 rows are removed", total - 7, actualRecords.size());
   }
 
+  @Test
+  public void testBinPackWithDeleteAllDataAndDataFileHasGroupOffsets() {
+    Map<String, String> options = new HashMap<>();
+    options.put(TableProperties.FORMAT_VERSION, "2");
+    options.put(TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES, "1024");
+    options.put(TableProperties.PARQUET_PAGE_SIZE_BYTES, "256");
+    options.put(TableProperties.PARQUET_DICT_SIZE_BYTES, "512");
+    Table table = createTablePartitioned(1, 1, options);
+    shouldHaveFiles(table, 1);
+    table.refresh();
+
+    CloseableIterable<FileScanTask> tasks = table.newScan().planFiles();
+    List<DataFile> dataFiles = Lists.newArrayList(CloseableIterable.transform(tasks, FileScanTask::file));
+    int total = (int) dataFiles.stream().mapToLong(ContentFile::recordCount).sum();
+
+    RowDelta rowDelta = table.newRowDelta();
+    // remove all data
+    writePosDeletesToFile(table, dataFiles.get(0), total)
+        .forEach(rowDelta::addDeletes);
+
+    rowDelta.commit();
+    table.refresh();
+
+    AssertHelpers.assertThrows("Expected an exception",
+        RuntimeException.class,
+        () -> actions().rewriteDataFiles(table)
+            .option(BinPackStrategy.MIN_FILE_SIZE_BYTES, "0")
+            .option(RewriteDataFiles.TARGET_FILE_SIZE_BYTES, Long.toString(Long.MAX_VALUE - 1))
+            .option(BinPackStrategy.MAX_FILE_SIZE_BYTES, Long.toString(Long.MAX_VALUE))
+            .option(BinPackStrategy.DELETE_FILE_THRESHOLD, "1")
+            .option(RewriteDataFiles.USE_STARTING_SEQUENCE_NUMBER, "false")
+            .execute());
+  }
+
+  @Test
+  public void testBinPackWithDeleteAllDataAndDataFileHasGroupOffsetsWithSeqNum() {
+    Map<String, String> options = new HashMap<>();
+    options.put(TableProperties.FORMAT_VERSION, "2");
+    options.put(TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES, "1024");
+    options.put(TableProperties.PARQUET_PAGE_SIZE_BYTES, "256");
+    options.put(TableProperties.PARQUET_DICT_SIZE_BYTES, "512");
+    Table table = createTablePartitioned(1, 1, options);
+    shouldHaveFiles(table, 1);
+    table.refresh();
+
+    CloseableIterable<FileScanTask> tasks = table.newScan().planFiles();
+    List<DataFile> dataFiles = Lists.newArrayList(CloseableIterable.transform(tasks, FileScanTask::file));
+    int total = (int) dataFiles.stream().mapToLong(ContentFile::recordCount).sum();
+
+    RowDelta rowDelta = table.newRowDelta();
+    // remove all data
+    writePosDeletesToFile(table, dataFiles.get(0), total)
+        .forEach(rowDelta::addDeletes);
+
+    rowDelta.commit();
+    table.refresh();
+
+    AssertHelpers.assertThrows("Expected an exception",
+        RuntimeException.class,

Review comment:
       My current implementation cannot handle the case where the row groups of a single Parquet data file are split across multiple tasks when reading, so I'd prefer to keep throwing an exception for now.
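       Roughly, the guard being described can be sketched as below. This is illustrative only, not the actual PR code; the helper name `validateTaskCoversWholeFile` and the error message are made up. The idea is that a scan task whose offset/length do not cover its entire data file means the file's row groups were split across tasks, and the rewrite fails fast with a `RuntimeException`, which is what the tests above assert.

       ```java
       import org.apache.iceberg.DataFile;
       import org.apache.iceberg.FileScanTask;

       // Illustrative sketch: reject a scan task that does not cover its whole
       // data file, i.e. the Parquet row groups were split across multiple tasks.
       static void validateTaskCoversWholeFile(FileScanTask task) {
         DataFile file = task.file();
         boolean coversWholeFile = task.start() == 0 && task.length() == file.fileSizeInBytes();
         if (!coversWholeFile) {
           throw new RuntimeException(String.format(
               "Cannot rewrite %s: row groups were split across tasks (start=%d, length=%d, fileSize=%d)",
               file.path(), task.start(), task.length(), file.fileSizeInBytes()));
         }
       }
       ```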



