This is an automated email from the ASF dual-hosted git repository.

yong pushed a commit to branch branch-4.15
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit dd694c3951a6deb5be9e1e480a9c9e66ae1b54fd
Author: wenbingshen <[email protected]>
AuthorDate: Sat Apr 22 10:23:52 2023 +0800

    Enable PCBC completionObjects autoShrink to reduce memory usage and gc (#3913)
    
    ### Motivation
    
    The PerChannelBookieClient completionObjects map occupies a lot of heap space that cannot be reclaimed.
    The figure below shows that the internal table array of ConcurrentOpenHashMap has a used size of 0, but the array length is still 16384, giving a memory overhead of 65552 bytes.
    
![image](https://user-images.githubusercontent.com/35599757/231114802-db90c49b-d295-46d7-b7db-785035b341f0.png)
    
    
![image](https://user-images.githubusercontent.com/35599757/231113930-bd9f3f54-9052-4c0b-9a3f-2fc493632e35.png)
    
    ConcurrentOpenHashMap's DefaultConcurrencyLevel is 16. We have hundreds of bookie nodes, and because the client writes to bookies in a round-robin fashion, it holds long-lived connections to the servers. As a result, about 65552 * 16 * 1776 = 1.74GB of memory cannot be reclaimed, even though the tables taking up this space all have size=0 (the broker's owned topics had drifted to other brokers due to Full GC).
    
![image](https://user-images.githubusercontent.com/35599757/231117087-08c80320-fa71-49c2-a199-cfee3d83ddc5.png)
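
    The arithmetic above can be checked with a minimal back-of-the-envelope sketch (the class and method names below are hypothetical; 1776 is the connection count observed in this deployment):

```java
// Hypothetical estimate of the retained memory described above:
// bytes per empty-but-unshrunk table * concurrency sections * connections.
public class RetainedMemoryEstimate {

    static long estimateBytes(long bytesPerTable, int sections, int connections) {
        return bytesPerTable * sections * connections;
    }

    public static void main(String[] args) {
        // 65552 bytes per table, 16 sections (DefaultConcurrencyLevel),
        // 1776 long-lived bookie connections (figure from this deployment).
        long total = estimateBytes(65552L, 16, 1776);
        System.out.printf("%d bytes = %.2f GiB%n", total, total / (1024.0 * 1024 * 1024));
        // prints "1862725632 bytes = 1.73 GiB"
    }
}
```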
    
    As the throughput of the Pulsar cluster increases and the bookie cluster expands, this memory usage will also grow. Combined with other unreasonable memory usage in Pulsar that we are aware of, this causes the Pulsar broker to continuously trigger Full GC.
    
    ### Changes
    I think enabling autoShrink on completionObjects can reduce this memory usage and lower the frequency of Full GC.
    
    (cherry picked from commit ca33b31d3918d04bee63fba54b81b531709a6536)
---
 .../main/java/org/apache/bookkeeper/proto/PerChannelBookieClient.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/PerChannelBookieClient.java b/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/PerChannelBookieClient.java
index 2cda0d243b..06b1517874 100644
--- a/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/PerChannelBookieClient.java
+++ b/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/PerChannelBookieClient.java
@@ -182,7 +182,7 @@ public class PerChannelBookieClient extends ChannelInboundHandlerAdapter {
     final int startTLSTimeout;
 
     private final ConcurrentOpenHashMap<CompletionKey, CompletionValue> completionObjects =
-            ConcurrentOpenHashMap.<CompletionKey, CompletionValue>newBuilder().build();
+            ConcurrentOpenHashMap.<CompletionKey, CompletionValue>newBuilder().autoShrink(true).build();
 
     // Map that hold duplicated read requests. The idea is to only use this map (synchronized) when there is a duplicate
     // read request for the same ledgerId/entryId
