vinothchandar commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions
URL: https://github.com/apache/incubator-hudi/pull/1421#discussion_r395410562
##########
File path: hudi-client/src/main/java/org/apache/hudi/table/HoodieCopyOnWriteTable.java
##########
@@ -602,18 +602,39 @@ private int addUpdateBucket(String fileIdHint) {
return bucket;
}
- private void assignInserts(WorkloadProfile profile) {
+ private void assignInserts(WorkloadProfile profile, JavaSparkContext jsc) {
// for new inserts, compute buckets depending on how many records we have for each partition
Set<String> partitionPaths = profile.getPartitionPaths();
long averageRecordSize = averageBytesPerRecord(metaClient.getActiveTimeline().getCommitTimeline().filterCompletedInstants(),
    config.getCopyOnWriteRecordSizeEstimate());
LOG.info("AvgRecordSize => " + averageRecordSize);
+
+ HashMap<String, List<SmallFile>> partitionSmallFilesMap = new HashMap<>();
+ if (jsc != null && partitionPaths.size() > 1) {
Review comment:
I think it's reasonable to parallelize this. Listing for cleaning, etc. has been parallelized like this before, so it should be safe to do.
Also, can we pull this block into a method? `getSmallFiles(partitionPaths)`
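As a rough illustration of the extract-method suggestion, the snippet below sketches what a `getSmallFiles(partitionPaths)` helper could look like. The real PR would parallelize the per-partition lookups with `jsc.parallelize(...)` and collect the results back to the driver; here a Java parallel stream stands in for Spark so the sketch is self-contained, and the `SmallFile` fields and `listSmallFilesInPartition` helper are hypothetical placeholders, not Hudi's actual API.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical stand-in for Hudi's SmallFile; fields are illustrative only.
class SmallFile {
    final String location;
    final long sizeBytes;
    SmallFile(String location, long sizeBytes) {
        this.location = location;
        this.sizeBytes = sizeBytes;
    }
}

public class SmallFilesDemo {
    // Sketch of the suggested extracted method: map each partition path to its
    // small files, performing the per-partition lookups in parallel. The PR's
    // version would use jsc.parallelize(partitionPaths) instead of a parallel
    // stream, but the shape of the computation is the same.
    static Map<String, List<SmallFile>> getSmallFiles(Set<String> partitionPaths) {
        return partitionPaths.parallelStream()
            .collect(Collectors.toConcurrentMap(
                partitionPath -> partitionPath,
                SmallFilesDemo::listSmallFilesInPartition));
    }

    // Placeholder for the per-partition file-system listing that the loop in
    // assignInserts would otherwise do serially.
    static List<SmallFile> listSmallFilesInPartition(String partitionPath) {
        return Collections.singletonList(new SmallFile(partitionPath + "/file-1", 1024L));
    }

    public static void main(String[] args) {
        Map<String, List<SmallFile>> result =
            getSmallFiles(new HashSet<>(Arrays.asList("2020/03/01", "2020/03/02")));
        System.out.println(result.size());
    }
}
```

The concurrent collector avoids merging per-thread maps, which matches the PR's goal of keeping the per-partition work independent.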
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services