RussellSpitzer commented on a change in pull request #4377:
URL: https://github.com/apache/iceberg/pull/4377#discussion_r836550321
##########
File path:
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/BaseRewriteDataFilesSparkAction.java
##########
@@ -352,6 +353,70 @@ private Result doExecuteWithPartialProgress(RewriteExecutionContext ctx, Stream<
return new RewriteFileGroup(info, tasks);
});
});
+
+    return rewriteFileGroupStream.sorted(rewriteGroupComparator(fileGroupsByPartition));
+ }
+
+ private Comparator<RewriteFileGroup> rewriteGroupComparator(
Review comment:
I think this may be the wrong approach here. There are a few issues:
1. We build up a cache of bytes/sizes for every comparison even though we only use two elements from it.
2. We probably need an enum in there for selecting the sort order.
3. We can probably run without the cache and make the code a bit cleaner.
4. The code currently sorts partitions as a whole, rather than file groups (is this the intention?)
My suggestion would be to have the code look a bit more like
```java
if (rewriteJobOrder.equals(SmallestSizeFirst)) {
  // where size() (or numFiles(), or whatnot) is implemented on RewriteFileGroup
  return Comparator.comparing(RewriteFileGroup::size);
}
```
Unless our goal here is to sort by partitions as chunks, but that doesn't make as much sense to me. Let me know what you think.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]