s0nskar commented on code in PR #2524:
URL: https://github.com/apache/celeborn/pull/2524#discussion_r1611086427
##########
worker/src/main/java/org/apache/celeborn/service/deploy/worker/storage/PartitionFilesSorter.java:
##########
@@ -629,10 +629,18 @@ public void sort() throws InterruptedException {
for (ShuffleBlockInfo blockInfo : originShuffleBlocks) {
long offset = blockInfo.offset;
long length = blockInfo.length;
- ShuffleBlockInfo sortedBlock = new ShuffleBlockInfo();
- sortedBlock.offset = fileIndex;
- sortedBlock.length = length;
- sortedShuffleBlocks.add(sortedBlock);
+ // combine multiple small length `ShuffleBlockInfo` for same mapId such that
+ // size of compacted `ShuffleBlockInfo` does not exceed `shuffleChunkSize`
+ if (!sortedShuffleBlocks.isEmpty()
+     && sortedShuffleBlocks.get(sortedShuffleBlocks.size() - 1).length + length
+         <= shuffleChunkSize) {
Review Comment:
yeah, that sounds good. I can make this threshold configurable with a default
value of 0.25 to start with.
Although I'm not super clear about the issue in the discussion: why would a
fetch return two or three chunks together, making it 7.9 * 2 = 15.8m or
3.9 * 3 = 11.7m as mentioned above? Do clients cache the initially read chunk
while reading the next one? If that is the case, can someone point me to that
piece of code?
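For concreteness, here is a self-contained sketch of the compaction I have in
mind. `ShuffleBlockInfo` is reduced to the two fields used in the diff, and the
threshold name `smallBlockFactor` (default 0.25) is a placeholder I made up for
this illustration, not an existing Celeborn config:
```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not the PR code. `smallBlockFactor` stands in
// for the configurable threshold discussed above.
public class CompactionSketch {
  static class ShuffleBlockInfo {
    long offset;
    long length;
  }

  static List<ShuffleBlockInfo> compact(
      List<ShuffleBlockInfo> originShuffleBlocks,
      long shuffleChunkSize,
      double smallBlockFactor) {
    List<ShuffleBlockInfo> sortedShuffleBlocks = new ArrayList<>();
    long fileIndex = 0;
    for (ShuffleBlockInfo blockInfo : originShuffleBlocks) {
      long length = blockInfo.length;
      ShuffleBlockInfo last =
          sortedShuffleBlocks.isEmpty()
              ? null
              : sortedShuffleBlocks.get(sortedShuffleBlocks.size() - 1);
      // Fold a small block into the previous compacted block as long as the
      // combined size stays within shuffleChunkSize.
      if (last != null
          && length <= smallBlockFactor * shuffleChunkSize
          && last.length + length <= shuffleChunkSize) {
        last.length += length;
      } else {
        ShuffleBlockInfo sortedBlock = new ShuffleBlockInfo();
        sortedBlock.offset = fileIndex;
        sortedBlock.length = length;
        sortedShuffleBlocks.add(sortedBlock);
      }
      fileIndex += length; // sorted data is assumed to be written contiguously
    }
    return sortedShuffleBlocks;
  }
}
```
With `smallBlockFactor = 0.25` and an 8m `shuffleChunkSize`, only blocks up to
2m would be merged, which is how I'd expect the threshold to bound the size of
a compacted chunk.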