s0nskar commented on code in PR #2524:
URL: https://github.com/apache/celeborn/pull/2524#discussion_r1611609580


##########
worker/src/main/java/org/apache/celeborn/service/deploy/worker/storage/PartitionFilesSorter.java:
##########
@@ -629,10 +629,18 @@ public void sort() throws InterruptedException {
           for (ShuffleBlockInfo blockInfo : originShuffleBlocks) {
             long offset = blockInfo.offset;
             long length = blockInfo.length;
-            ShuffleBlockInfo sortedBlock = new ShuffleBlockInfo();
-            sortedBlock.offset = fileIndex;
-            sortedBlock.length = length;
-            sortedShuffleBlocks.add(sortedBlock);
+            // combine multiple small length `ShuffleBlockInfo` for same mapId such that
+            // size of compacted `ShuffleBlockInfo` does not exceed `shuffleChunkSize`
+            if (!sortedShuffleBlocks.isEmpty()
+                && sortedShuffleBlocks.get(sortedShuffleBlocks.size() - 1).length + length
+                    <= shuffleChunkSize) {

Review Comment:
   As per the documentation, `fetchChunkSize` is the "max chunk size", so shouldn't we change the condition above to respect it? As you mentioned, if the ShuffleBlocks are generated close to 8 MB each, a merged chunk can end up close to 16 MB.
   
   We could change this condition to generate offsets closer to `fetchChunkSize`, with some error factor of 10 or 20%. wdyt?
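   A rough sketch of the suggested rule, as I understand it. This is hypothetical and simplified: `Block` stands in for `ShuffleBlockInfo`, `merge` is an illustrative helper (not Celeborn's actual API), and `errorFactor` models the proposed 10-20% tolerance over `fetchChunkSize`:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkMergeSketch {
  // Hypothetical, simplified stand-in for Celeborn's ShuffleBlockInfo.
  static class Block {
    long offset;
    long length;

    Block(long offset, long length) {
      this.offset = offset;
      this.length = length;
    }
  }

  // Merge consecutive blocks so that each merged block stays within
  // fetchChunkSize * (1 + errorFactor). The tolerance keeps chunks close
  // to fetchChunkSize instead of letting them grow toward 2x its value.
  static List<Block> merge(List<Block> blocks, long fetchChunkSize, double errorFactor) {
    long limit = (long) (fetchChunkSize * (1 + errorFactor));
    List<Block> merged = new ArrayList<>();
    long fileIndex = 0;
    for (Block b : blocks) {
      Block last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
      if (last != null && last.length + b.length <= limit) {
        // Still under the tolerated limit: fold this block into the last one.
        last.length += b.length;
      } else {
        // Start a new merged block at the current file position.
        merged.add(new Block(fileIndex, b.length));
      }
      fileIndex += b.length;
    }
    return merged;
  }
}
```

   With `fetchChunkSize = 100` and `errorFactor = 0.2`, blocks of lengths 60, 50, 30 merge into two chunks of 110 and 30 bytes, since 110 + 30 would exceed the 120-byte limit.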



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
