[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814043#comment-16814043
 ] 

Enrico Olivelli commented on ZOOKEEPER-3356:
--------------------------------------------

Can this be a blocker for 3.5.5, as 3.5.5 will be marked as 'stable'?
I think this affects 3.5.5 and not 3.5.4

> Request throttling in Netty is not working as expected and could cause direct 
> buffer OOM issue 
> -----------------------------------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-3356
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3356
>             Project: ZooKeeper
>          Issue Type: Bug
>          Components: server
>    Affects Versions: 3.5.4, 3.6.0
>            Reporter: Fangmin Lv
>            Assignee: Fangmin Lv
>            Priority: Major
>             Fix For: 3.6.0
>
>
> The current implementation of the Netty enable/disable recv logic may cause a 
> direct buffer OOM, because reads may be enabled long enough to pull in a large 
> chunk of packets but are only disabled again after consuming a single ZK 
> request. We have seen this problem occasionally in production.
>  
> We need more advanced flow control in Netty instead of relying on AUTO_READ. 
> We have improved this internally by enabling/disabling recv based on the 
> queuedBuffer size, and will upstream it soon.
>  
> With this implementation, the max Netty queued buffer size (direct memory 
> usage) will be 2 * recv_buffer size. It is not bounded by the per-message size 
> because, in epoll ET mode, Netty will keep reading until the socket is 
> drained, and because SslHandler will trigger another read when it has received 
> only a partial encrypted packet and has not yet emitted any decrypted message.
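The enable/disable-recv idea described above can be sketched as watermark-based throttling. The class below is a hypothetical, Netty-free illustration only; the names `highWatermark`, `lowWatermark`, and the `autoRead` field are assumptions standing in for calls to `channel.config().setAutoRead(...)` on a real Netty channel, and the actual upstreamed patch may differ:

```java
// Hypothetical sketch of queuedBuffer-size-based recv throttling.
// In real Netty code, the autoRead flips below would instead call
// channel.config().setAutoRead(false/true) on the channel.
public class RecvThrottle {
    private final int highWatermark; // stop reading above this many queued bytes
    private final int lowWatermark;  // resume reading once drained below this
    private int queuedBytes = 0;
    private boolean autoRead = true;

    public RecvThrottle(int highWatermark, int lowWatermark) {
        this.highWatermark = highWatermark;
        this.lowWatermark = lowWatermark;
    }

    // Called when a chunk of bytes arrives from the socket.
    public void onBytesReceived(int n) {
        queuedBytes += n;
        if (autoRead && queuedBytes >= highWatermark) {
            autoRead = false; // would be channel.config().setAutoRead(false)
        }
    }

    // Called after the server consumes one request's worth of bytes.
    public void onBytesConsumed(int n) {
        queuedBytes -= n;
        if (!autoRead && queuedBytes <= lowWatermark) {
            autoRead = true; // would be channel.config().setAutoRead(true)
        }
    }

    public boolean isReading() { return autoRead; }
    public int queued() { return queuedBytes; }
}
```

The point of the two watermarks is hysteresis: reads stay off while the queued buffer is large and only resume once the server has actually drained it, instead of toggling recv on and off per request.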



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
