kbendick commented on a change in pull request #4307: URL: https://github.com/apache/iceberg/pull/4307#discussion_r828172880
##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/BaseDeleteOrphanFilesSparkAction.java
##########
@@ -205,15 +211,27 @@ private String jobDesc() {
     JavaRDD<String> subDirRDD = sparkContext().parallelize(subDirs, parallelism);

     Broadcast<SerializableConfiguration> conf = sparkContext().broadcast(hadoopConf);
-    JavaRDD<String> matchingLeafFileRDD = subDirRDD.mapPartitions(listDirsRecursively(conf, olderThanTimestamp));
+    JavaRDD<String> matchingLeafFileRDD =
+        subDirRDD.mapPartitions(listDirsRecursively(conf, olderThanTimestamp, filter));

     JavaRDD<String> completeMatchingFileRDD = matchingFileRDD.union(matchingLeafFileRDD);
     return spark().createDataset(completeMatchingFileRDD.rdd(), Encoders.STRING()).toDF("file_path");
   }

+  private PathFilter pathFilter(PartitionSpec spec) {
+    List<String> partitionNames = Lists.newArrayList();
+    for (PartitionField field : spec.fields()) {
+      if (field.name().startsWith("_") || field.name().startsWith(".")) {
+        partitionNames.add(field.name());
+      }
+    }

Review comment:
   Would it make sense to move this function into `PartitionAwareHiddenPathFilter` as a static function? There might be serialization reasons it can't be, but I think it makes more sense for this code to live next to that class.
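   For context, a rough sketch of the suggested refactor: the partition-name scan moves into `PartitionAwareHiddenPathFilter` as a static factory, so it lives next to the filter that consumes it. The class shape below (constructor, field names, and the `forSpec` helper) is assumed for illustration and may not match the PR's actual implementation.

   ```java
   import java.io.Serializable;
   import java.util.Set;
   import java.util.stream.Collectors;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.PathFilter;
   import org.apache.iceberg.PartitionField;
   import org.apache.iceberg.PartitionSpec;

   // Hypothetical shape of PartitionAwareHiddenPathFilter with the suggested static factory.
   public class PartitionAwareHiddenPathFilter implements PathFilter, Serializable {

     private final Set<String> hiddenPathPartitionNames;

     private PartitionAwareHiddenPathFilter(Set<String> hiddenPathPartitionNames) {
       this.hiddenPathPartitionNames = hiddenPathPartitionNames;
     }

     @Override
     public boolean accept(Path path) {
       String name = path.getName();
       boolean hidden = name.startsWith("_") || name.startsWith(".");
       // Hidden-looking directories are still accepted when they belong to a partition
       // field whose name starts with "_" or "." (e.g. "_date=2022-03-01").
       boolean hiddenPartitionDir = hiddenPathPartitionNames.stream().anyMatch(name::startsWith);
       return !hidden || hiddenPartitionDir;
     }

     // The static helper the comment suggests: collect the "hidden-looking" partition
     // field names from the spec here, next to the filter that uses them.
     public static PathFilter forSpec(PartitionSpec spec) {
       Set<String> partitionNames =
           spec.fields().stream()
               .map(PartitionField::name)
               .filter(name -> name.startsWith("_") || name.startsWith("."))
               .collect(Collectors.toSet());
       return new PartitionAwareHiddenPathFilter(partitionNames);
     }
   }
   ```

   With that in place, the action would only need something like `PartitionAwareHiddenPathFilter.forSpec(spec)` instead of a private `pathFilter(...)` helper, assuming the class serializes cleanly for the Spark broadcast/closure.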