[
https://issues.apache.org/jira/browse/STORM-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305112#comment-14305112
]
ASF GitHub Bot commented on STORM-329:
--------------------------------------
Github user tedxia commented on the pull request:
https://github.com/apache/storm/pull/268#issuecomment-72860304
@miguno I tested this on my cluster just now.
The topology is simpler: "spout ----> bolt0 ----> bolt1", and each
component has only one executor.
I did not hit the situation you described: first I killed bolt1, and after 2
seconds bolt0 noticed that bolt1 had died and closed the connection. Then, after 31s,
bolt0 started connecting to another worker that contained bolt1.
After bolt0 connected to the new bolt1, the new bolt1 received tuples immediately; I verified this
through the acked count in the UI.
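For reference, a minimal sketch of a topology wired this way (the classes here are illustrative stand-ins, not the exact code running on my cluster):
```java
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.testing.TestWordSpout;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class ReconnectTestTopology {

    /** Trivial pass-through bolt used for both bolt0 and bolt1 in this sketch. */
    public static class ForwardBolt extends BaseBasicBolt {
        public void execute(Tuple input, BasicOutputCollector collector) {
            collector.emit(new Values(input.getString(0)));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // spout ----> bolt0 ----> bolt1, one executor per component
        builder.setSpout("spout", new TestWordSpout(), 1);
        builder.setBolt("bolt0", new ForwardBolt(), 1).shuffleGrouping("spout");
        builder.setBolt("bolt1", new ForwardBolt(), 1).shuffleGrouping("bolt0");

        Config conf = new Config();
        conf.setNumWorkers(3); // spread components across workers so bolt1's worker can be killed on its own
        StormSubmitter.submitTopology("reconnect-test", conf, builder.createTopology());
    }
}
```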
1. First I killed bolt1 at 21:48:07.
2. bolt0 detected that bolt1 had died at 21:48:08:
```
2015-02-04 21:48:08 b.s.m.n.StormClientErrorHandler [INFO] Connection failed Netty-Client-lg-hadoop-tst-st04.bj/10.2.201.70:42813
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_37]
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21) ~[na:1.6.0_37]
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198) ~[na:1.6.0_37]
    at sun.nio.ch.IOUtil.read(IOUtil.java:166) ~[na:1.6.0_37]
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:245) ~[na:1.6.0_37]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:322) ~[netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) ~[netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) ~[netty-3.2.2.Final.jar:na]
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.2.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [na:1.6.0_37]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [na:1.6.0_37]
    at java.lang.Thread.run(Thread.java:662) [na:1.6.0_37]
2015-02-04 21:48:08 b.s.m.n.Client [INFO] failed to send requests to lg-hadoop-tst-st04.bj/10.2.201.70:42813:
java.nio.channels.ClosedChannelException: null
    at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:629) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:605) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:356) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.2.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [na:1.6.0_37]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [na:1.6.0_37]
    at java.lang.Thread.run(Thread.java:662) [na:1.6.0_37]
```
Then bolt0 started reconnecting to bolt1 and stopped sending messages to it:
```
2015-02-04 21:48:08 b.s.m.n.Client [ERROR] The Connection channel currently is not available, dropping pending 1 messages...
2015-02-04 21:48:08 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-lg-hadoop-tst-st04.bj/10.2.201.70:42813... [0]
2015-02-04 21:48:08 b.s.m.n.Client [INFO] failed to send requests to lg-hadoop-tst-st04.bj/10.2.201.70:42813:
java.nio.channels.ClosedChannelException: null
    at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:629) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:605) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:356) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.2.2.Final.jar:na]
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.2.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [na:1.6.0_37]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [na:1.6.0_37]
    at java.lang.Thread.run(Thread.java:662) [na:1.6.0_37]
```
After reconnecting 30 times, bolt0 closed this connection:
```
2015-02-04 21:48:45 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-lg-hadoop-tst-st04.bj/10.2.201.70:42813... [29]
2015-02-04 21:48:48 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-lg-hadoop-tst-st04.bj/10.2.201.70:42813... [30]
2015-02-04 21:48:49 b.s.m.n.Client [INFO] Closing Netty Client Netty-Client-lg-hadoop-tst-st04.bj/10.2.201.70:42813
```
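For context, the roughly 30 reconnect attempts and the back-off seen above are driven by the Netty transport settings; a minimal sketch of the relevant knobs, assuming the stock 0.9.x defaults (the exact values on my cluster may differ):
```java
import backtype.storm.Config;

public class NettyTransportSettings {
    /** Returns a Config carrying the Netty client reconnect knobs (values shown are the usual 0.9.x defaults). */
    public static Config nettyReconnectConf() {
        Config conf = new Config();
        conf.put(Config.STORM_MESSAGING_NETTY_MAX_RETRIES, 30);       // client gives up after ~30 attempts, as in the log
        conf.put(Config.STORM_MESSAGING_NETTY_MIN_SLEEP_MS, 100);     // initial back-off between attempts
        conf.put(Config.STORM_MESSAGING_NETTY_MAX_SLEEP_MS, 1000);    // back-off cap
        conf.put(Config.STORM_MESSAGING_NETTY_BUFFER_SIZE, 5242880);  // matches "buffer_size: 5242880" in the log below
        return conf;
    }
}
```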
3. bolt0 started connecting to the new bolt1 at 21:48:49:
```
2015-02-04 21:48:49 b.s.m.n.Client [INFO] New Netty Client, connect to lg-hadoop-tst-st01.bj, 42811, config: , buffer_size: 5242880
2015-02-04 21:48:49 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-lg-hadoop-tst-st01.bj/10.2.201.65:42811... [0]
2015-02-04 21:48:49 b.s.m.n.Client [INFO] connection established to a remote host Netty-Client-lg-hadoop-tst-st01.bj/10.2.201.65:42811, [id: 0x65616864, /10.2.201.68:58243 => lg-hadoop-tst-st01.bj/10.2.201.65:42811]
```
@miguno If you give me your Storm code after it is merged, I will be glad to
test again.
> Add Option to Config Message handling strategy when connection timeout
> ----------------------------------------------------------------------
>
> Key: STORM-329
> URL: https://issues.apache.org/jira/browse/STORM-329
> Project: Apache Storm
> Issue Type: Improvement
> Affects Versions: 0.9.2-incubating
> Reporter: Sean Zhong
> Priority: Minor
> Labels: Netty
> Attachments: storm-329.patch, worker-kill-recover3.jpg
>
>
> This is to address a [concern brought
> up|https://github.com/apache/incubator-storm/pull/103#issuecomment-43632986]
> during the work on STORM-297:
> {quote}
> [~revans2] wrote: Your logic makes sense to me on why these calls are
> blocking. My biggest concern around the blocking is in the case of a worker
> crashing. If a single worker crashes, this can block the entire topology from
> executing until that worker comes back up. In some cases I can see that being
> something that you would want. In other cases I can see speed being the
> primary concern, and some users would like to get partial data fast rather
> than accurate data later.
> Could we make it configurable in a follow-up JIRA, where we can have a max
> limit to the buffering that is allowed before we block, or throw data away
> (which is what ZeroMQ does)?
> {quote}
> If a worker crashes suddenly, how should we handle the messages that were
> supposed to be delivered to that worker?
> 1. Should we buffer all messages indefinitely?
> 2. Should we block message sending until the connection is restored?
> 3. Should we configure a buffer limit, try to buffer the messages first, and
> block once the limit is reached?
> 4. Should we neither block nor buffer too much, but instead drop the
> messages and rely on Storm's built-in failover mechanism?
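For illustration only, a minimal sketch of what option 3 above (buffer up to a limit, then block) could look like; the class and method names are hypothetical and this is not the STORM-329 patch:
```java
import java.util.Collection;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Hypothetical illustration of option 3: buffer outgoing messages up to a
 * configurable limit while the remote worker is unreachable, then block the sender.
 */
public class BoundedSendBuffer {
    private final BlockingQueue<byte[]> pending;

    public BoundedSendBuffer(int maxBufferedMessages) {
        this.pending = new ArrayBlockingQueue<byte[]>(maxBufferedMessages);
    }

    /** Called while the connection is down; blocks once the buffer is full. */
    public void bufferOrBlock(byte[] message) throws InterruptedException {
        pending.put(message); // blocks when maxBufferedMessages is reached
    }

    /** Called after the connection is re-established to flush what was buffered. */
    public void drainTo(Collection<byte[]> sink) {
        pending.drainTo(sink);
    }
}
```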
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)