advancedxy commented on code in PR #445:
URL: https://github.com/apache/incubator-uniffle/pull/445#discussion_r1056141270
##########
server/src/main/java/org/apache/uniffle/server/buffer/ShuffleBufferManager.java:
##########
@@ -431,20 +436,28 @@ private Map<String, Set<Integer>> pickFlushedShuffle() {
long expectedFlushSize = highWaterMark - lowWaterMark;
long pickedFlushSize = 0L;
int printIndex = 0;
+ int printIgnoreIndex = 0;
int printMax = 10;
for (Map.Entry<String, AtomicLong> entry : sizeList) {
long size = entry.getValue().get();
- pickedFlushSize += size;
String appIdShuffleIdKey = entry.getKey();
- addPickedShuffle(appIdShuffleIdKey, pickedShuffle);
- // print detail picked info
- if (printIndex < printMax) {
-          LOG.info("Pick application_shuffleId[{}] with {} bytes", appIdShuffleIdKey, size);
- printIndex++;
- }
- if (pickedFlushSize > expectedFlushSize) {
- LOG.info("Finish flush pick with {} bytes", pickedFlushSize);
- break;
+ if (size > this.shuffleFlushThreshold) {
Review Comment:
> Yes, got your thought. But consider an extreme case: if all buffer sizes
are 1M and memory reaches the high watermark, and the later send-data
requests do not belong to the existing buffers' partitions, will the
in-memory data ever be flushed?
>
> In this case, could a dead-lock happen?
If misconfigured, say the shuffle memory high watermark is 1G while every
buffer size is 1M or less, then yes, it is possible that all data stays in
memory. But as you said, that's an extreme case, and in practice shuffle
data sizes vary widely.
I don't think we need to consider such cases here; there are other ways to
deal with them, e.g. lowering the flush threshold to match a lower high
watermark. Do you have other concerns?
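To make the edge case concrete, here is a minimal standalone sketch of the picking logic under discussion (not the actual `ShuffleBufferManager` code; the method and class names are hypothetical). It skips any buffer at or below the flush threshold, so when every buffer is small, nothing is picked and no memory would be released by that pass:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of threshold-based flush picking, assuming buffers are
// given as (appId_shuffleId -> size) and iterated in a fixed order.
public class FlushPickSketch {
  static Map<String, Long> pickFlushedShuffle(
      Map<String, Long> buffers, long flushThreshold, long expectedFlushSize) {
    Map<String, Long> picked = new LinkedHashMap<>();
    long pickedSize = 0L;
    for (Map.Entry<String, Long> e : buffers.entrySet()) {
      long size = e.getValue();
      if (size <= flushThreshold) {
        continue; // small buffers are skipped -- the edge case discussed above
      }
      picked.put(e.getKey(), size);
      pickedSize += size;
      if (pickedSize > expectedFlushSize) {
        break; // enough data picked to get back under the low watermark
      }
    }
    return picked;
  }

  public static void main(String[] args) {
    Map<String, Long> buffers = new LinkedHashMap<>();
    // All buffers are 1 MB while the threshold is 10 MB:
    for (int i = 0; i < 5; i++) {
      buffers.put("app_" + i, 1L << 20);
    }
    Map<String, Long> picked = pickFlushedShuffle(buffers, 10L << 20, 4L << 20);
    // Nothing qualifies, so nothing is flushed even above the high watermark.
    System.out.println("picked=" + picked.size());
  }
}
```

With a lower threshold (e.g. 512 KB here) the same buffers would be picked until the expected flush size is exceeded, which is the mitigation suggested above.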
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]