RussellSpitzer commented on code in PR #4674:
URL: https://github.com/apache/iceberg/pull/4674#discussion_r862967285


##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/BaseDeleteReachableFilesSparkAction.java:
##########
@@ -125,8 +126,9 @@ private Result doExecute() {
 
   private Dataset<Row> buildReachableFileDF(TableMetadata metadata) {
     Table staticTable = newStaticTable(metadata, io);
-    return withFileType(buildValidContentFileDF(staticTable), CONTENT_FILE)
-        .union(withFileType(buildManifestFileDF(staticTable), MANIFEST))
+    Dataset<Row> allManifests = loadMetadataTable(staticTable, ALL_MANIFESTS);
+    return withFileType(buildValidContentFileDF(staticTable, allManifests), CONTENT_FILE)
+        .union(withFileType(buildManifestFileDF(allManifests), MANIFEST))

Review Comment:
   Ah yes, I should have been clearer: I was referring to the fact that the dataset would be recomputed on both lines. The `loadMetadataTable` call itself should be very fast, but the actual planning could be expensive, and avoiding that recomputation would require a cache of some kind.
   
   I'm a little worried in general about persisting things, since I want to make sure we clean up our caches as soon as possible.
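   To illustrate the concern in plain Java (this is not Spark's API, just the general compute-once-then-release pattern; in Spark itself one would reach for `Dataset.persist()` / `Dataset.unpersist()` instead), here is a minimal memoizing supplier. The class and field names are hypothetical:

   ```java
   import java.util.function.Supplier;

   // Minimal memoizing supplier: computes once, caches the result, and
   // supports explicit cleanup. This mirrors the review concern above:
   // `allManifests` is referenced on two lines, so without a cache the
   // underlying work runs twice; with one, it runs once but must be
   // released promptly (cf. Dataset.persist()/unpersist() in Spark).
   final class Memoized<T> implements Supplier<T> {
     private final Supplier<T> delegate;
     private T cached;
     private boolean computed = false;

     Memoized(Supplier<T> delegate) {
       this.delegate = delegate;
     }

     @Override
     public synchronized T get() {
       if (!computed) {
         cached = delegate.get();
         computed = true;
       }
       return cached;
     }

     // Explicit cleanup, analogous to unpersisting a cached Dataset ASAP.
     public synchronized void release() {
       cached = null;
       computed = false;
     }
   }

   public class Demo {
     static int computations = 0;

     public static void main(String[] args) {
       Memoized<Integer> m = new Memoized<>(() -> {
         computations++;
         return 42;
       });
       System.out.println(m.get());      // 42
       System.out.println(m.get());      // 42
       System.out.println(computations); // 1: computed only once
       m.release();                      // free the cached value
     }
   }
   ```

   The trade-off is exactly the one raised above: the cache avoids repeated work, but someone must remember to call the release step, and forgetting it is how caches linger.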



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

