[ 
https://issues.apache.org/jira/browse/STORM-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176524#comment-14176524
 ] 

ASF GitHub Bot commented on STORM-329:
--------------------------------------

Github user clockfly commented on the pull request:

    https://github.com/apache/storm/pull/268#issuecomment-59674932
  
    To retain the comments, I modified ted's branch.
    
    The update tries to solve these three problems:
    1. When the target worker is down, the source worker should be aware of 
this and should not crash.
    The approach we use here is as follows:
       a. the target worker goes down;
       b. the source worker begins reconnecting (it can retry up to 
"storm.messaging.netty.max_retries" times);
       c. the source worker learns that the target worker is down (by watching 
ZooKeeper nodes);
       d. the source worker calls the target client.close(); client.close() now 
sets the closing flag to true asynchronously, which breaks the reconnection at 
step b;
       e. the reconnection is aborted, and no exception is thrown.
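The steps above can be sketched as a retry loop that a concurrent close() aborts via a flag. This is a minimal illustration, not Storm's actual Netty client; the class and method names here are hypothetical stand-ins.

```python
import threading

class ReconnectingClient:
    """Illustrative sketch: a reconnect loop that close() can break
    asynchronously by setting a 'closing' flag (steps b, d, e above)."""

    def __init__(self, max_retries=3):
        # analogous in spirit to storm.messaging.netty.max_retries
        self.max_retries = max_retries
        self.closing = threading.Event()  # set asynchronously by close()
        self.connected = False

    def try_connect(self):
        # stand-in for a TCP connect attempt to a dead worker: always fails
        return False

    def reconnect(self):
        for _ in range(self.max_retries):
            if self.closing.is_set():
                return "aborted"      # steps d/e: close() broke the loop
            if self.try_connect():
                self.connected = True
                return "connected"
        return "exhausted"            # retries used up, target still down

    def close(self):
        # step d: just set the flag; do not wait for the retry loop
        self.closing.set()

c = ReconnectingClient()
c.close()
print(c.reconnect())  # prints "aborted" -- no exception is thrown
```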
    
    2. When the target worker is down, data being sent to other target workers 
should not be blocked.
    The approach we currently use is to drop messages when the connection to 
the target worker is not available.
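The drop-on-unavailable policy can be sketched as follows (assumed names, not Storm's real sender API): a send to a down target is dropped immediately, so sends to healthy targets never block behind it.

```python
class DroppingSender:
    """Sketch of the policy described above: drop messages for unavailable
    targets instead of blocking, relying on the framework's replay/failover
    mechanism for delivery guarantees."""

    def __init__(self):
        self.connections = {}  # target -> "ready" or "down"
        self.dropped = 0
        self.sent = []

    def send(self, target, msg):
        if self.connections.get(target) != "ready":
            self.dropped += 1  # drop rather than block the caller
            return False
        self.sent.append((target, msg))
        return True
```

A caller sending to a mix of live and dead targets proceeds without ever blocking; only the counters reveal that some messages were discarded.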
    
    3. There is a side effect of the solution in 2: during topology setup, 
would we drop messages while connections to some target workers are still 
being established?
    To handle this, we now wait for the connections to be ready before 
bringing up the spouts/bolts during worker startup. So, during worker start, 
the spouts/bolts are not activated until the connections to all target workers 
are established.
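The startup barrier described above can be sketched as a poll-until-ready helper (a hypothetical illustration, not Storm's worker code): activation is deferred until every outbound connection reports ready, or a timeout expires.

```python
import time

def wait_for_connections(conn_status, timeout=5.0, poll=0.01):
    """Block until every connection reports ready, or the timeout expires.

    conn_status: dict mapping target -> callable returning True once the
    connection to that target is established.
    Returns True if all connections became ready within the timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if all(is_ready() for is_ready in conn_status.values()):
            return True  # safe to activate spouts/bolts now
        time.sleep(poll)
    return False
```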



> Add Option to Config Message handling strategy when connection timeout
> ----------------------------------------------------------------------
>
>                 Key: STORM-329
>                 URL: https://issues.apache.org/jira/browse/STORM-329
>             Project: Apache Storm
>          Issue Type: Improvement
>    Affects Versions: 0.9.2-incubating
>            Reporter: Sean Zhong
>            Priority: Minor
>              Labels: Netty
>             Fix For: 0.9.2-incubating
>
>         Attachments: storm-329.patch
>
>
> This is to address a [concern brought 
> up|https://github.com/apache/incubator-storm/pull/103#issuecomment-43632986] 
> during the work at STORM-297:
> {quote}
> [~revans2] wrote: Your logic makes sense to me on why these calls are 
> blocking. My biggest concern around the blocking is in the case of a worker 
> crashing. If a single worker crashes this can block the entire topology from 
> executing until that worker comes back up. In some cases I can see that being 
> something that you would want. In other cases I can see speed being the 
> primary concern and some users would like to get partial data fast, rather 
> than accurate data later.
> Could we make it configurable on a follow up JIRA where we can have a max 
> limit to the buffering that is allowed, before we block, or throw data away 
> (which is what zeromq does)?
> {quote}
> If some worker crashes suddenly, how should we handle the messages that were 
> supposed to be delivered to it?
> 1. Should we buffer all message infinitely?
> 2. Should we block the message sending until the connection is resumed?
> 3. Should we configure a buffer limit, try to buffer the messages first, and 
> block once the limit is reached?
> 4. Should we neither block nor buffer too much, but instead drop the 
> messages and rely on Storm's built-in failover mechanism? 
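Option 3 above (buffer up to a limit, then block) and the zeromq-style drop can share one bounded-buffer abstraction. The sketch below is hypothetical and not part of Storm's API; the `policy` parameter is an assumed name for the configurable behavior being proposed.

```python
from collections import deque

class BoundedBuffer:
    """Sketch of a configurable message-handling strategy: buffer up to
    `limit` messages for a down worker, then either drop further messages
    (zeromq-style) or signal that the caller would block."""

    def __init__(self, limit, policy="drop"):
        self.limit = limit
        self.policy = policy  # "drop" or "block"
        self.buf = deque()
        self.dropped = 0

    def offer(self, msg):
        if len(self.buf) < self.limit:
            self.buf.append(msg)
            return True
        if self.policy == "drop":
            self.dropped += 1  # discard and count, never block
            return False
        # "block" policy: in a real sender this would wait for reconnect;
        # here we raise to make the would-block condition visible
        raise BlockingIOError("buffer full; caller would block")
```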



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
