[ https://issues.apache.org/jira/browse/STORM-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193671#comment-14193671 ]

ASF GitHub Bot commented on STORM-329:
--------------------------------------

Github user clockfly commented on the pull request:

    https://github.com/apache/storm/pull/268#issuecomment-61393103
  
    High Availability test
    ===============
    
    test scenario: 4 machines A, B, C, D; 4 workers, 1 worker on each machine
    
    test case 1 (STORM-404): on machine A, kill the worker. A will create a new worker that takes over the same port.
    ------------
    expected result: reconnection will succeed.
    
    experiment result:
    the other workers start to reconnect and eventually succeed, because A starts a new worker on the same port.
    ```
    2014-11-02T09:31:24.988+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [84]
    2014-11-02T09:31:25.498+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [85]
    2014-11-02T09:31:25.498+0800 b.s.m.n.Client [INFO] connection established to a remote host Netty-Client-IDHV22-04/192.168.1.54:6703, [id: 0x54466bab, /192.168.1.51:51336 => IDHV22-04/192.168.1.54:6703]
    ```
    
    test case 2 (STORM-404): on machine A, kill the worker, then immediately start a process to occupy the port used by the worker, which forces storm to relocate the worker to a new port (or a new machine); a minimal sketch of the port-occupying step is included after the logs below.
    --------------
    expected result: the reconnection will fail, because storm relocates the worker to a new port.
    
    Actual result:
    First, after many reconnection attempts, the reconnection is aborted and no exception is thrown.
    ```
    2014-11-02T09:31:14.753+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [63]
    2014-11-02T09:31:18.065+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [70]
            at org.apache.storm.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71) ~[storm-core-0.9.3-rc2-SNAPSHOT.jar:0.9.3-rc2-SNAPSHOT]
            ...
    2014-11-02T09:45:36.209+0800 b.s.m.n.Client [INFO] Waiting for pending batchs to be sent with Netty-Client-IDHV22-04/192.168.1.54:6703..., timeout: 600000ms, pendings: 0
    2014-11-02T09:45:36.209+0800 b.s.m.n.Client [INFO] connection is closing, abort reconnecting...
    ```
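    
    For reference, how many reconnect attempts are made and how long the back-off between them lasts are governed by storm's netty transport settings; a minimal sketch of overriding them in the topology config (the keys come from storm's defaults.yaml, and the values here are only illustrative, not necessarily what this test used):
    ```
    import backtype.storm.Config;

    // Sketch only: these settings control how long the Netty client keeps retrying
    // a dead peer before the reconnection is aborted.
    Config conf = new Config();
    conf.put("storm.messaging.netty.max_retries", 30);
    conf.put("storm.messaging.netty.min_wait_ms", 100);
    conf.put("storm.messaging.netty.max_wait_ms", 1000);
    ```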
    
    Second, a new connection is made to the new worker (with a new port, or on another machine).
    
    (previously the worker was at IDHV22-04:6703; it was then relocated to IDHV22-03:6702)
    ```
    2014-11-02T09:45:36.206+0800 b.s.m.n.Client [INFO] New Netty Client, connect to IDHV22-03, 6702, config: , buffer_size: 5242880
    2014-11-02T09:45:36.207+0800 b.s.m.n.Client [INFO] connection established to a remote host Netty-Client-IDHV22-03/192.168.1.53:6702, [id: 0x538fdacb, /192.168.1.51:56047 => IDHV22-03/192.168.1.53:6702]
    ```
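    
    The port-occupying step mentioned above can be any process that binds the killed worker's port; here is a minimal sketch (the class name and the hard-coded port 6703 are only for illustration):
    ```
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    // Illustrative helper: bind the port the killed worker was using so the
    // supervisor cannot restart a worker on it, forcing storm to reassign the
    // worker to a new port or a new machine.
    public class PortHolder {
        public static void main(String[] args) throws Exception {
            int port = args.length > 0 ? Integer.parseInt(args[0]) : 6703;
            try (ServerSocket socket = new ServerSocket()) {
                socket.bind(new InetSocketAddress(port));
                System.out.println("Holding port " + port + "; Ctrl+C to release");
                Thread.sleep(Long.MAX_VALUE); // keep the port occupied
            }
        }
    }
    ```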
    
    test case 3: check the failed message count before and after the worker crash
    ----------------
    expected result: after the worker crash, there will be some message loss. After it stabilizes, the message loss will not increase.
    
    Actual result: meets expectation.
    
    
    test case 4: check the throughput change before and after the worker crash
    --------------
    expected result: there should be no performance drop.
    
    Actual result: 
    
    When storm starts a new worker on the same machine, there is no performance drop.
    Check the first gap in the following image.
    
    ![network bandwidth change before and after worker 
crash](https://issues.apache.org/jira/secure/attachment/12678758/worker-kill-recover3.jpg)
    
    When storm starts a new worker on a different machine, it may impact the parallelism. Check the second gap in the above picture. Before the worker crash, there are 4 workers on 4 machines. After the worker crash, there are 3 workers on 4 machines. The parallelism drops, so the throughput drops.
    
    test case 5 (STORM-510): when a target worker crashes, the sending of messages to other workers should not be blocked.
    --------------
    
    expected result: one connection should not block another in the case of a worker crash.
    
    Actual result: 
    In the code, the blocking logic has been removed, so one connection will not block another connection.
    However, during the transition period of the failure, there is a lot of message loss to the crashed worker, so the max.spout.pending flow control may kick in; the spout's sending speed slows down, and the overall max throughput is smaller.
    
    After the transition, it goes back to normal. In my test, the transition period is around 40 seconds.
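    
    For reference, max.spout.pending is configured per topology; a minimal sketch (the value 1000 is purely illustrative, not what this test used):
    ```
    import backtype.storm.Config;

    // Sketch only: cap the number of un-acked tuples per spout task, so that
    // message loss to a crashed worker throttles the spout instead of letting
    // pending tuples grow without bound.
    Config conf = new Config();
    conf.setMaxSpoutPending(1000); // same as topology.max.spout.pending: 1000
    ```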
    



> Add Option to Config Message handling strategy when connection timeout
> ----------------------------------------------------------------------
>
>                 Key: STORM-329
>                 URL: https://issues.apache.org/jira/browse/STORM-329
>             Project: Apache Storm
>          Issue Type: Improvement
>    Affects Versions: 0.9.2-incubating
>            Reporter: Sean Zhong
>            Priority: Minor
>              Labels: Netty
>             Fix For: 0.9.2-incubating
>
>         Attachments: storm-329.patch, worker-kill-recover3.jpg
>
>
> This is to address a [concern brought 
> up|https://github.com/apache/incubator-storm/pull/103#issuecomment-43632986] 
> during the work at STORM-297:
> {quote}
> [~revans2] wrote: Your logic makes sense to me on why these calls are 
> blocking. My biggest concern around the blocking is in the case of a worker 
> crashing. If a single worker crashes this can block the entire topology from 
> executing until that worker comes back up. In some cases I can see that being 
> something that you would want. In other cases I can see speed being the 
> primary concern and some users would like to get partial data fast, rather 
> than accurate data later.
> Could we make it configurable on a follow-up JIRA where we can have a max 
> limit to the buffering that is allowed, before we block, or throw data away 
> (which is what zeromq does)?
> {quote}
> If some worker crashes suddenly, how should we handle the messages that were 
> supposed to be delivered to that worker?
> 1. Should we buffer all messages indefinitely?
> 2. Should we block message sending until the connection is restored?
> 3. Should we configure a buffer limit, try to buffer the messages first, and 
> block once the limit is reached?
> 4. Should we neither block nor buffer too much, but instead drop the 
> messages and rely on storm's built-in failover mechanism? 
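
To make options 3 and 4 concrete, a minimal sketch of a bounded send buffer (plain Java, not storm's actual Netty client code; the capacity 1024 is arbitrary):

```
import java.util.concurrent.ArrayBlockingQueue;

// Sketch of options 3 and 4: buffer outgoing messages up to a limit, then either
// block the sender (option 3) or drop the message and rely on storm's built-in
// ack/replay mechanism to recover it (option 4).
class BoundedSendBuffer {
    private final ArrayBlockingQueue<byte[]> pending = new ArrayBlockingQueue<>(1024);

    void send(byte[] message, boolean dropWhenFull) throws InterruptedException {
        if (dropWhenFull) {
            pending.offer(message);   // option 4: drop silently when the buffer is full
        } else {
            pending.put(message);     // option 3: block until there is room
        }
    }
}
```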



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
