rickyma commented on code in PR #1759:
URL: https://github.com/apache/incubator-uniffle/pull/1759#discussion_r1625593921
##########
server/src/main/java/org/apache/uniffle/server/ShuffleServerConf.java:
##########
@@ -387,12 +387,24 @@ public class ShuffleServerConf extends RssBaseConf {
           .withDescription(
               "Whether single buffer flush when size exceeded rss.server.single.buffer.flush.threshold");
-  public static final ConfigOption<Long> SINGLE_BUFFER_FLUSH_THRESHOLD =
+  public static final ConfigOption<Long> SINGLE_BUFFER_FLUSH_SIZE_THRESHOLD =
       ConfigOptions.key("rss.server.single.buffer.flush.threshold")
           .longType()
           .defaultValue(128 * 1024 * 1024L)
           .withDescription("The threshold of single shuffle buffer flush");
+  public static final ConfigOption<Integer> SINGLE_BUFFER_FLUSH_BLOCKS_NUM_THRESHOLD =
+      ConfigOptions.key("rss.server.single.buffer.flush.blocksNumberThreshold")
+          .intType()
+          .defaultValue(4000)
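
For illustration only, here is a minimal Java sketch of how a flush decision could combine the two thresholds defined above. The class and method names are hypothetical and are not the PR's actual implementation:

```java
// Hypothetical sketch: flush a single buffer when either the buffered bytes or
// the buffered block count crosses its threshold. Not the PR's actual code.
public class SingleBufferFlushPolicy {
  private final long sizeThresholdBytes;  // rss.server.single.buffer.flush.threshold
  private final int blocksNumThreshold;   // rss.server.single.buffer.flush.blocksNumberThreshold

  public SingleBufferFlushPolicy(long sizeThresholdBytes, int blocksNumThreshold) {
    this.sizeThresholdBytes = sizeThresholdBytes;
    this.blocksNumThreshold = blocksNumThreshold;
  }

  public boolean shouldFlush(long bufferedBytes, int bufferedBlocks) {
    return bufferedBytes >= sizeThresholdBytes || bufferedBlocks >= blocksNumThreshold;
  }
}
```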
Review Comment:
> If the threshold is set too high, it becomes meaningless.

We should keep in mind that jobs producing an excessive number of small blocks are not the normal case. If this value is set too large, the option becomes meaningless: many small blocks will still be held in heap memory, which causes stability issues for Uniffle servers running at large scale under high pressure. Uniffle should prioritize stability first.

I've tested `4000`: it does not cause any performance regression, and the resulting flush size is suitable. If you don't want this enabled by default, we can set the default to `Integer.MAX_VALUE`.
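
As a hedged usage sketch, assuming `ShuffleServerConf` inherits a typed `set(ConfigOption, value)` setter from its base config class (please verify against the actual API), effectively disabling the blocks-number trigger while keeping the size-based one could look like:

```java
// Assumption: a typed set(ConfigOption<T>, T) setter is available on the conf object.
ShuffleServerConf conf = new ShuffleServerConf();
conf.set(
    ShuffleServerConf.SINGLE_BUFFER_FLUSH_BLOCKS_NUM_THRESHOLD,
    Integer.MAX_VALUE); // effectively disables the blocks-number flush trigger
```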