Github user revans2 commented on the pull request:

    https://github.com/apache/storm/pull/311#issuecomment-63495857
  
    @ptgoetz I traced this down in the netty code to the second parameter of 
the call to `bind` on the server socket.
    
    
https://docs.oracle.com/javase/7/docs/api/java/net/ServerSocket.html#bind%28java.net.SocketAddress,%20int%29
    
    It sets the maximum length of the queue of incoming connections.  That way, 
if the boss thread is unable to keep up with accepting new connections, the OS 
will keep them buffered for a while until it can get to them.  This should only 
be an issue when lots of connections are being established very quickly, which 
would only happen for very large topologies.
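    For reference, the backlog in question is the second argument to 
`ServerSocket.bind(SocketAddress, int)` from the linked javadoc. A minimal 
sketch of what Netty is ultimately doing under the hood (the class name, 
helper method, and the value 500 are illustrative, not Storm's actual code):

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BacklogExample {
    // Bind an unbound server socket with an explicit backlog and report
    // whether the bind succeeded. The backlog caps how many pending
    // (not-yet-accepted) connections the OS will queue for this socket.
    static boolean bindWithBacklog(int backlog) throws Exception {
        ServerSocket server = new ServerSocket();
        try {
            // Port 0 asks the OS for any free port; the second argument
            // is the requested maximum pending-connection queue length.
            server.bind(new InetSocketAddress(0), backlog);
            return server.isBound();
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("bound=" + bindWithBacklog(500));
    }
}
```

    Note that the OS treats the backlog as a hint and may silently clamp it 
(e.g. to `net.core.somaxconn` on Linux), so very large values are not 
guaranteed to take effect.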
    
    @caofangkun it would be good to add a better explanation of this config to 
the documentation.  I would also like it if we could rename the config to 
something like ```storm.messaging.netty.socket.backlog```, since I can see 
another backlog being created in the future for tuples, as opposed to TCP 
connection requests.
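    Under the proposed rename, the setting would look something like this in 
`storm.yaml` (the value shown is just an illustrative choice, not a 
recommended default):

```yaml
# Requested length of the TCP accept queue for the Netty server socket;
# the OS may clamp this to its own limit (e.g. net.core.somaxconn on Linux).
storm.messaging.netty.socket.backlog: 500
```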

