kbendick commented on a change in pull request #3301:
URL: https://github.com/apache/iceberg/pull/3301#discussion_r731530261



##########
File path: spark/v3.0/spark3/src/main/java/org/apache/iceberg/spark/source/SparkFilesScan.java
##########
@@ -51,6 +52,7 @@
     this.splitSize = readConf.splitSize();
     this.splitLookback = readConf.splitLookback();
     this.splitOpenFileCost = readConf.splitOpenFileCost();
+    this.planTasksIgnoreDeleteFiles = options.getBoolean(SparkReadOptions.PLAN_TASKS_IGNORE_DELETE_FILES, false);

Review comment:
       Would it make sense to add this to `SparkReadConf` instead?
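
   For illustration, a sketch of what that could look like in `SparkReadConf`, assuming the `confParser` builder pattern the neighboring read options use (the exact builder calls here are an assumption, not code from this PR):
   ```java
   // Sketch only, assuming SparkReadConf's confParser builder pattern;
   // PLAN_TASKS_IGNORE_DELETE_FILES is the option key this PR adds.
   public boolean planTasksIgnoreDeleteFiles() {
     return confParser.booleanConf()
         .option(SparkReadOptions.PLAN_TASKS_IGNORE_DELETE_FILES)
         .defaultValue(false)
         .parse();
   }
   ```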

##########
File path: spark/src/test/java/org/apache/iceberg/spark/actions/TestNewRewriteDataFilesAction.java
##########
@@ -990,6 +1047,56 @@ private void writeDF(Dataset<Row> df) {
         .save(tableLocation);
   }
 
+  private void createPositionDeleteFiles(Table table) throws IOException {
+    table.refresh();
+    PartitionKey partitionKey = null;
+    if (!table.spec().isUnpartitioned()) {
+      Record partitionRecord = GenericRecord.create(table.schema())
+          .copy(ImmutableMap.of("c1", 0, "c2", "foo0", "c3", "bar0"));
+      partitionKey = new PartitionKey(table.spec(), table.schema());
+      partitionKey.partition(partitionRecord);
+    }
+
+    OutputFileFactory fileFactory = OutputFileFactory.builderFor(table, 1, 1).format(FileFormat.PARQUET).build();
+    GenericAppenderFactory appenderFactory = new GenericAppenderFactory(
+        table.schema(), table.spec(), null, null, null);
+    EncryptedOutputFile out = createEncryptedOutputFile(partitionKey, fileFactory);
+    PositionDeleteWriter<Record> deleteWriter = appenderFactory.newPosDeleteWriter(
+        out, FileFormat.PARQUET, partitionKey);
+    CloseableIterable<DataFile> dataFiles;
+    if (table.spec().isUnpartitioned()) {
+      dataFiles = CloseableIterable.transform(table.newScan().planFiles(), FileScanTask::file);
+    } else {
+      dataFiles = CloseableIterable.transform(
+          table.newScan().filter(Expressions.equal("c1", 0)).planFiles(), FileScanTask::file);
+    }

Review comment:
       Nit: Could you add a blank line after this one for logical grouping?

##########
File path: spark/v3.0/spark3/src/main/java/org/apache/iceberg/spark/source/SparkFilesScan.java
##########
@@ -65,9 +67,12 @@
       CloseableIterable<FileScanTask> splitFiles = TableScanUtil.splitFiles(
           CloseableIterable.withNoopClose(files),
           splitSize);
-      CloseableIterable<CombinedScanTask> scanTasks = TableScanUtil.planTasks(
-          splitFiles, splitSize,
-          splitLookback, splitOpenFileCost);
+      CloseableIterable<CombinedScanTask> scanTasks;
+      if (planTasksIgnoreDeleteFiles) {
+        scanTasks = TableScanUtil.planTasksIgnoreDeleteFiles(splitFiles, splitSize, splitLookback, splitOpenFileCost);
+      } else {
+        scanTasks = TableScanUtil.planTasks(splitFiles, splitSize, splitLookback, splitOpenFileCost);
+      }

Review comment:
       This feels like `planTasksIgnoreDeleteFiles` should be a boolean argument to `TableScanUtil.planTasks`.
   
   That way, you could replace the current if-else block with something like the following:
   ```java
   CloseableIterable<CombinedScanTask> scanTasks = TableScanUtil.planTasks(
       splitFiles, splitSize, splitLookback, splitOpenFileCost, planTasksIgnoreDeleteFiles);
   ```
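
   For illustration, a sketch of how that overload might delegate to the two existing planners (this five-argument `planTasks` does not exist in `TableScanUtil` today; it is an assumption):
   ```java
   // Hypothetical overload, not the current TableScanUtil API: dispatches to
   // the existing planTasks / planTasksIgnoreDeleteFiles based on the flag.
   public static CloseableIterable<CombinedScanTask> planTasks(
       CloseableIterable<FileScanTask> splitFiles, long splitSize,
       int lookback, long openFileCost, boolean ignoreDeleteFiles) {
     return ignoreDeleteFiles
         ? planTasksIgnoreDeleteFiles(splitFiles, splitSize, lookback, openFileCost)
         : planTasks(splitFiles, splitSize, lookback, openFileCost);
   }
   ```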




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


