[ 
https://issues.apache.org/jira/browse/STORM-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187966#comment-14187966
 ] 

ASF GitHub Bot commented on STORM-329:
--------------------------------------

Github user clockfly commented on the pull request:

    https://github.com/apache/storm/pull/268#issuecomment-60870953
  
    @tedxia 
    
    I got a chance to chat with Ted online. In summary, he is describing the 
following case (worker A -> worker B):
    1. B dies
    2. after zk session timeout, zk knows B is dead
    3. A starts the reconnection process to B. By default it will retry at most 
300 times, and the total retry window should be longer than 120 seconds, based 
on the comment in the config: "Since nimbus.task.launch.secs and 
supervisor.worker.start.timeout.secs are 120, other workers should also wait 
at least that long before giving up on connecting to the other worker." (A 
rough estimate of that retry budget is sketched after this list.)
    4. zk is under heavy load (consider a zk tree that has 100 thousand nodes 
and a large number of watchers); it may take minutes to notify A that B is dead.
    5. A doesn't get the notification from zk in time. After 300 connection 
retries the reconnection fails and the client throws, which causes worker A to 
exit.
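    
    To make the timing concrete, here is a rough, illustrative estimate of that 
retry budget. It assumes a capped exponential backoff and commonly cited 
defaults (300 retries, 100 ms minimum wait, 1000 ms maximum wait); both the 
policy and the numbers are assumptions for illustration, not a copy of the 
Netty client's code:
    
    ```java
    // Illustrative sketch only: estimates how long worker A keeps retrying,
    // assuming a capped exponential backoff with hypothetical defaults
    // (max_retries=300, min_wait_ms=100, max_wait_ms=1000).
    public class ReconnectBudget {
        public static void main(String[] args) {
            int maxRetries = 300;   // assumed default
            long minWaitMs = 100;   // assumed default
            long maxWaitMs = 1000;  // assumed cap per retry
    
            long totalMs = 0;
            for (int retry = 0; retry < maxRetries; retry++) {
                // double the wait each retry, capped at maxWaitMs
                long wait = Math.min(maxWaitMs, minWaitMs * (1L << Math.min(retry, 20)));
                totalMs += wait;
            }
            // Prints roughly 297s: comfortably above the 120s window from
            // nimbus.task.launch.secs / supervisor.worker.start.timeout.secs.
            System.out.println("total reconnect budget ~ " + (totalMs / 1000) + "s");
        }
    }
    ```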
    
    Basically there are two questions being asked. First, can we assure that 
zookeeper stays responsive (< 1 minute)? Second, if the worker doesn't get an 
update about B from zookeeper after 300 reconnection retries, should we exit 
the worker or let it continue to work? (A sketch of the second alternative 
follows.)
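    
    For the second question, a minimal sketch of what "let the worker continue" 
could look like is below; the class and method names are hypothetical, and this 
is not the current Storm code path, which throws once the retries are 
exhausted:
    
    ```java
    // Hypothetical sketch: instead of throwing (and killing the worker) once the
    // retry budget is exhausted, mark the remote peer unreachable and drop
    // outbound messages, relying on Storm's tuple-failure/replay mechanism until
    // zookeeper finally publishes the new assignment for B.
    class HypotheticalNettyClient {
        private volatile boolean peerUnreachable = false;
    
        void onReconnectRetriesExhausted() {
            peerUnreachable = true; // remember B is gone instead of throwing
        }
    
        void send(byte[] serializedTuple) {
            if (peerUnreachable) {
                return; // drop; acking/replay handles correctness
            }
            // ... normal Netty send path ...
        }
    }
    ```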
    
> Add Option to Config Message handling strategy when connection timeout
> ----------------------------------------------------------------------
>
>                 Key: STORM-329
>                 URL: https://issues.apache.org/jira/browse/STORM-329
>             Project: Apache Storm
>          Issue Type: Improvement
>    Affects Versions: 0.9.2-incubating
>            Reporter: Sean Zhong
>            Priority: Minor
>              Labels: Netty
>             Fix For: 0.9.2-incubating
>
>         Attachments: storm-329.patch
>
>
> This is to address a [concern brought 
> up|https://github.com/apache/incubator-storm/pull/103#issuecomment-43632986] 
> during the work on STORM-297:
> {quote}
> [~revans2] wrote: Your logic makes sense to me on why these calls are 
> blocking. My biggest concern around the blocking is in the case of a worker 
> crashing. If a single worker crashes this can block the entire topology from 
> executing until that worker comes back up. In some cases I can see that being 
> something that you would want. In other cases I can see speed being the 
> primary concern and some users would like to get partial data fast, rather 
> than accurate data later.
> Could we make it configurable on a follow up JIRA where we can have a max 
> limit to the buffering that is allowed, before we block, or throw data away 
> (which is what zeromq does)?
> {quote}
> If some worker crashes suddenly, how should we handle the messages that were 
> supposed to be delivered to that worker?
> 1. Should we buffer all messages indefinitely?
> 2. Should we block message sending until the connection is resumed?
> 3. Should we configure a buffer limit, buffer messages up to that limit, and 
> block once the limit is reached? (A sketch of this option follows below.)
> 4. Should we neither block nor buffer too much, but instead drop the 
> messages and rely on the built-in storm failover mechanism? 
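
A minimal sketch of option 3 (bounded buffering, then back-pressure), assuming 
a simple in-memory queue per remote worker; the names and the limit are 
hypothetical, and this is not the storm-329.patch implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Per-connection pending buffer: hold up to a configured number of messages
// while the remote worker is down; once the limit is hit, put() blocks the
// sender, which is the "buffer first, then block" behaviour of option 3.
class BoundedPendingBuffer {
    private final BlockingQueue<byte[]> pending;

    BoundedPendingBuffer(int maxPendingMessages) {
        this.pending = new ArrayBlockingQueue<>(maxPendingMessages);
    }

    // Called while the connection is down.
    void buffer(byte[] message) throws InterruptedException {
        pending.put(message); // blocks when the buffer limit is reached
    }

    // Called after the connection is re-established, to flush what was buffered.
    byte[] nextPendingOrNull() {
        return pending.poll();
    }
}
```

Option 4 would instead drop messages once the limit is reached and lean on 
Storm's acking/replay to recover, trading accuracy now for lower latency.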



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
