dlg99 commented on a change in pull request #1088: ISSUE #1086 (@bug W-4146427@) Client-side backpressure in netty (Fixes: io.netty.util.internal.OutOfDirectMemoryError under continuous heavy load)
URL: https://github.com/apache/bookkeeper/pull/1088#discussion_r173343674
 
 

 ##########
 File path: 
bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/DefaultPerChannelBookieClientPool.java
 ##########
 @@ -84,14 +84,22 @@ public void intialize() {
         }
     }
 
-    @Override
-    public void obtain(GenericCallback<PerChannelBookieClient> callback, long key) {
+    private PerChannelBookieClient getClient(long key) {
         if (1 == clients.length) {
-            clients[0].connectIfNeededAndDoOp(callback);
-            return;
+            return clients[0];
         }
         int idx = MathUtils.signSafeMod(key, clients.length);
-        clients[idx].connectIfNeededAndDoOp(callback);
+        return clients[idx];
+    }
+
+    @Override
+    public void obtain(GenericCallback<PerChannelBookieClient> callback, long key) {
+        getClient(key).connectIfNeededAndDoOp(callback);
 
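For context on the hunk above: the extracted getClient helper picks a channel by mapping a (possibly negative) key into a valid array index via MathUtils.signSafeMod. A minimal illustrative reimplementation of that sign-safe modulus (the real helper lives in BookKeeper's MathUtils; this sketch only assumes its documented behavior of returning a value in [0, divisor)):

```java
public class ChannelSelection {
    // Sign-safe modulus: maps any long key, including negative ones,
    // into the range [0, divisor). A plain (key % n) in Java can be
    // negative, which would be an invalid array index.
    static int signSafeMod(long dividend, int divisor) {
        int mod = (int) (dividend % divisor);
        if (mod < 0) {
            mod += divisor;
        }
        return mod;
    }

    public static void main(String[] args) {
        int numChannels = 4;
        // Positive and negative keys both land in [0, numChannels).
        System.out.println(signSafeMod(10L, numChannels));  // 2
        System.out.println(signSafeMod(-7L, numChannels));  // 1
    }
}
```

This is why the single-channel case can short-circuit: with clients.length == 1 every key maps to index 0 anyway.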
 Review comment:
   I'd keep this out of the scope of this change. It would require a separate set of tests to prove that it helps.
   I.e. this only matters when the number of channels is > 1.
   All these channels go to the same bookie, so even when there is more than one channel and one of them is already blocked, most likely the bookie is not reading fast enough (i.e. we hit the NIC limit on the client or on the bookie, or a long GC on the bookie, etc.).
   All that switching to another channel achieves there is buffering the request on the client, just in another channel's buffer.
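   The point above is that per-channel switching only relocates the buffering; the client-side backpressure this PR is about comes from bounding pending outbound bytes and pausing writes past a high-water mark, resuming once drained below a low-water mark (analogous to Netty's WriteBufferWaterMark / Channel.isWritable(), though this Netty-free sketch is purely a hypothetical illustration of that logic):

```java
public class WritabilityGate {
    private final long low;   // resume writes once pending drops below this
    private final long high;  // stop writes once pending exceeds this
    private long pendingBytes;
    private boolean writable = true;

    WritabilityGate(long low, long high) {
        this.low = low;
        this.high = high;
    }

    // Called when a request is queued for the channel.
    void bytesQueued(long n) {
        pendingBytes += n;
        if (pendingBytes > high) {
            writable = false; // caller should stop issuing writes
        }
    }

    // Called when the transport flushes bytes to the socket.
    void bytesFlushed(long n) {
        pendingBytes -= n;
        if (pendingBytes < low) {
            writable = true; // safe to resume writes
        }
    }

    boolean isWritable() {
        return writable;
    }
}
```

   With a gate like this on every channel, a blocked bookie makes all of its channels unwritable at roughly the same time, which is exactly why hopping between them buys nothing.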
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
