This is an automated email from the ASF dual-hosted git repository.

roryqi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
     new 8f4494b66 [#2105] fix(server): Fix memory leak caused by duplicate blocks when using ShuffleBufferWithSkipList. (#2141)
8f4494b66 is described below

commit 8f4494b66ce0c125b2b3fe5e59141949908e0877
Author: zhengchenyu <[email protected]>
AuthorDate: Wed Sep 25 15:18:55 2024 +0800

    [#2105] fix(server): Fix memory leak caused by duplicate blocks when using ShuffleBufferWithSkipList. (#2141)
    
    ### What changes were proposed in this pull request?
    
    Release duplicate blocks. #2016 addressed this problem on the client side, but since sendShuffleData may be retried, duplicate blocks can still reach the server and must be handled there as well.
    
    ### Why are the changes needed?
    
    Fix: #2105
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Tested in a cluster.
---
 .../uniffle/server/buffer/ShuffleBufferWithSkipList.java     | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/server/src/main/java/org/apache/uniffle/server/buffer/ShuffleBufferWithSkipList.java b/server/src/main/java/org/apache/uniffle/server/buffer/ShuffleBufferWithSkipList.java
index 5d91e9e01..512c6c0fe 100644
--- a/server/src/main/java/org/apache/uniffle/server/buffer/ShuffleBufferWithSkipList.java
+++ b/server/src/main/java/org/apache/uniffle/server/buffer/ShuffleBufferWithSkipList.java
@@ -62,9 +62,15 @@ public class ShuffleBufferWithSkipList extends AbstractShuffleBuffer {
 
     synchronized (this) {
       for (ShufflePartitionedBlock block : data.getBlockList()) {
-        blocksMap.put(block.getBlockId(), block);
-        blockCount++;
-        size += block.getSize();
+        // If sendShuffleData retried, we may receive duplicate block. The duplicate
+        // block would gc without release. Here we must release the duplicated block.
+        if (!blocksMap.containsKey(block.getBlockId())) {
+          blocksMap.put(block.getBlockId(), block);
+          blockCount++;
+          size += block.getSize();
+        } else {
+          block.getData().release();
+        }
       }
       this.size += size;
     }
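The pattern in the patch can be sketched in isolation. The snippet below is a hypothetical stand-alone illustration, not the actual Uniffle classes: `Block` and `RefCountedBuf` are simplified stand-ins for `ShufflePartitionedBlock` and its ref-counted data buffer. It shows the core idea: count a block only the first time its id is seen, and explicitly release the buffer of a duplicate so it is not leaked when the duplicate object is garbage collected.

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class DedupSketch {
  // Minimal stand-in for a reference-counted data buffer (e.g. a Netty ByteBuf).
  static class RefCountedBuf {
    int refCnt = 1;
    void release() { refCnt--; }
  }

  // Minimal stand-in for ShufflePartitionedBlock.
  static class Block {
    final long blockId;
    final int size;
    final RefCountedBuf data = new RefCountedBuf();
    Block(long blockId, int size) { this.blockId = blockId; this.size = size; }
  }

  private final ConcurrentSkipListMap<Long, Block> blocksMap = new ConcurrentSkipListMap<>();
  private long blockCount = 0;
  private long size = 0;

  // Mirrors the patched loop: first occurrence of a block id is stored and
  // counted; a duplicate (e.g. from a retried sendShuffleData) only has its
  // buffer released.
  synchronized void append(Block block) {
    if (!blocksMap.containsKey(block.blockId)) {
      blocksMap.put(block.blockId, block);
      blockCount++;
      size += block.size;
    } else {
      block.data.release();
    }
  }

  public static void main(String[] args) {
    DedupSketch buffer = new DedupSketch();
    Block first = new Block(1L, 100);
    Block duplicate = new Block(1L, 100); // same block id, as after a retry
    buffer.append(first);
    buffer.append(duplicate);
    // Duplicate is not counted, and its buffer's ref count drops to 0.
    System.out.println(buffer.blockCount + " " + buffer.size + " " + duplicate.data.refCnt);
    // prints "1 100 0"
  }
}
```

Without the `else` branch, the duplicate block's buffer would keep its reference count at 1 forever, which is exactly the leak the patch fixes.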
