otterc commented on a change in pull request #30062:
URL: https://github.com/apache/spark/pull/30062#discussion_r507985489
##########
File path: common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
##########
@@ -363,4 +363,26 @@ public boolean useOldFetchProtocol() {
return conf.getBoolean("spark.shuffle.useOldFetchProtocol", false);
}
+ /**
+ * The minimum size of a chunk when dividing a merged shuffle file into multiple chunks during
+ * push-based shuffle.
+ * A merged shuffle file consists of multiple small shuffle blocks. Fetching the
+ * complete merged shuffle file in a single response increases the memory requirements for the
Review comment:
Are you referring to the configuration `maxRemoteBlockSizeFetchToMem`?
We are aware that when this configuration is set and a request is larger
than the configured size, the block is saved to disk.
With push-based shuffle, the data of a remote merged block is always large. If
we don't divide it into chunks, the remote merged data will always be written
to disk and then read back again, which adds a lot of time.
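To make that trade-off concrete, here is a minimal, hypothetical sketch of how a
per-request size threshold like `maxRemoteBlockSizeFetchToMem` decides between
fetching to memory and streaming to disk. The class and method names and the
sizes are illustrative assumptions, not Spark's actual implementation:

```java
// Hypothetical sketch (not Spark's code): effect of a maxRemoteBlockSizeFetchToMem-style
// threshold on a single fetch request.
public class FetchToMemOrDiskSketch {

  /** Returns where the fetched bytes would land for a block of the given size. */
  static String destinationFor(long blockSizeBytes, long maxRemoteBlockSizeFetchToMem) {
    if (blockSizeBytes > maxRemoteBlockSizeFetchToMem) {
      // An entire merged block is typically far above the threshold, so without
      // chunking it is always written to disk and read back again.
      return "stream-to-disk";
    }
    // Chunk-sized requests can stay below the threshold and be kept in memory.
    return "fetch-to-memory";
  }

  public static void main(String[] args) {
    long threshold = 200L * 1024 * 1024;                                    // e.g. 200 MiB
    System.out.println(destinationFor(2L * 1024 * 1024 * 1024, threshold)); // whole merged block
    System.out.println(destinationFor(4L * 1024 * 1024, threshold));        // one shuffle chunk
  }
}
```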
Also, any failure while fetching an entire merged block is much more costly.
With the approach of dividing a merged block into sizeable chunks (see the
sketch after this list):
- We don't always have to write to disk, so job runtimes are shorter.
- When the fetch of a shuffle chunk fails, we fall back to the original
blocks corresponding to the mapIds that are part of that chunk.
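For illustration only, here is a rough, hypothetical sketch of the chunking
idea: consecutive blocks of a merged shuffle file are packed into chunks of at
least a minimum size, and each chunk records the mapIds it covers so that a
failed chunk fetch can fall back to the original blocks. The class, method,
and parameter names are made up and do not reflect the actual code in this PR:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of dividing a merged shuffle file into chunks (not this PR's code).
public class MergedShuffleChunkingSketch {

  /** One chunk of a merged shuffle file plus the mapIds it covers (used for fallback). */
  static class Chunk {
    final long startOffset;
    final long length;
    final List<Integer> mapIds;
    Chunk(long startOffset, long length, List<Integer> mapIds) {
      this.startOffset = startOffset;
      this.length = length;
      this.mapIds = mapIds;
    }
  }

  /**
   * Greedily packs consecutive shuffle blocks into chunks of at least minChunkSize bytes,
   * so a reducer can fetch chunk-sized responses instead of the whole merged file.
   */
  static List<Chunk> divideIntoChunks(long[] blockSizes, int[] mapIds, long minChunkSize) {
    List<Chunk> chunks = new ArrayList<>();
    long chunkStart = 0;
    long chunkLength = 0;
    List<Integer> chunkMapIds = new ArrayList<>();
    for (int i = 0; i < blockSizes.length; i++) {
      chunkLength += blockSizes[i];
      chunkMapIds.add(mapIds[i]);
      if (chunkLength >= minChunkSize) {
        chunks.add(new Chunk(chunkStart, chunkLength, chunkMapIds));
        chunkStart += chunkLength;
        chunkLength = 0;
        chunkMapIds = new ArrayList<>();
      }
    }
    if (chunkLength > 0) {
      // Trailing blocks that did not reach minChunkSize form the last chunk.
      chunks.add(new Chunk(chunkStart, chunkLength, chunkMapIds));
    }
    return chunks;
  }

  public static void main(String[] args) {
    long[] blockSizes = {64 * 1024, 3 * 1024 * 1024, 512 * 1024, 5 * 1024 * 1024};
    int[] mapIds = {0, 1, 2, 3};
    // If fetching a chunk fails, the reducer can fall back to fetching the
    // original blocks for the mapIds recorded in that chunk.
    for (Chunk c : divideIntoChunks(blockSizes, mapIds, 2L * 1024 * 1024)) {
      System.out.println("offset=" + c.startOffset + " length=" + c.length + " mapIds=" + c.mapIds);
    }
  }
}
```

In this sketch, a smaller minimum chunk size produces more, smaller chunks that
are easier to keep below the fetch-to-memory threshold, at the cost of more
fetch requests.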