RussellSpitzer commented on code in PR #4674:
URL: https://github.com/apache/iceberg/pull/4674#discussion_r862989861


##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/BaseDeleteReachableFilesSparkAction.java:
##########
@@ -125,8 +126,9 @@ private Result doExecute() {
 
   private Dataset<Row> buildReachableFileDF(TableMetadata metadata) {
     Table staticTable = newStaticTable(metadata, io);
-    return withFileType(buildValidContentFileDF(staticTable), CONTENT_FILE)
-        .union(withFileType(buildManifestFileDF(staticTable), MANIFEST))
+    Dataset<Row> allManifests = loadMetadataTable(staticTable, ALL_MANIFESTS);
+    return withFileType(buildValidContentFileDF(staticTable, allManifests), CONTENT_FILE)
+        .union(withFileType(buildManifestFileDF(allManifests), MANIFEST))

Review Comment:
   @ajantha-bhat The problem is that persisting does another round of IO and serialization of what should be basically the same amount of information, though hopefully in a more readable form. Persist by default is a memory- and disk-based cache. You really need to test it out to be sure.
   
   For cache vs. collect: cache is probably much better. Cached pieces would be stored on the executors, so the IO would hopefully be mostly local. Doing a collect and building a DF from the result would essentially bring all the data back to the driver, serialize it, deserialize it, and then send everything back out. That is always worse than cache/persist.
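   To make the tradeoff concrete, here is a minimal PySpark sketch (hypothetical names, not Iceberg's actual DataFrames) contrasting the two approaches: `cache()` keeps partitions on the executors for local reuse, while collect-and-rebuild round-trips every row through the driver.
   
   ```python
   # Sketch: cache vs. collect-and-rebuild for reusing a DataFrame twice.
   # Assumes a local PySpark session; column names are illustrative only.
   from pyspark.sql import SparkSession
   
   spark = (SparkSession.builder
            .master("local[2]")
            .appName("cache-vs-collect")
            .getOrCreate())
   
   df = spark.range(0, 1000).withColumnRenamed("id", "file_id")
   
   # Option 1: cache() (MEMORY_AND_DISK by default). Partitions stay on the
   # executors, so both downstream branches reuse mostly-local data.
   cached = df.cache()
   content_count = cached.filter("file_id % 2 == 0").count()
   manifest_count = cached.filter("file_id % 2 == 1").count()
   
   # Option 2: collect() then rebuild. Every row is serialized to the driver,
   # then re-serialized and shipped back out to executors -- strictly more IO.
   rows = df.collect()
   rebuilt = spark.createDataFrame(rows, schema=df.schema)
   rebuilt_count = rebuilt.count()
   ```
   
   Both branches over the cached DataFrame read executor-local partitions, whereas the rebuilt DataFrame paid for a full driver round trip just to get back to the same data.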



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

