[
https://issues.apache.org/jira/browse/STORM-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318945#comment-14318945
]
ASF GitHub Bot commented on STORM-329:
--------------------------------------
Github user miguno commented on the pull request:
https://github.com/apache/storm/pull/429#issuecomment-74147926
If you need at-least-once processing you must use an acking topology, which
allows Storm to replay lost messages. If you instead go with an unacking
topology (= no guaranteed message processing), then you may run into data loss.
There are pros and cons to each variant; in our case, for example, we use both
depending on the use case.
Also: The semantics described above have been in Storm right from the
beginning. None of these have been changed by this pull request.
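To illustrate the acking semantics mentioned above, here is a minimal plain-Java sketch (no Storm dependency; the class and method names are hypothetical, not Storm's API) of the bookkeeping an acking spout relies on: every emitted message stays "pending" until it is acked, and a fail re-emits it for replay.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of at-least-once bookkeeping in an acking spout.
// A message stays pending until acked; fail() re-emits it for replay.
class AtLeastOnceSpout {
    private final Map<Long, String> pending = new LinkedHashMap<>(); // msgId -> payload
    private long nextId = 0;

    // Emit a message and remember it until the topology acks it.
    long emit(String payload) {
        long id = nextId++;
        pending.put(id, payload);
        return id;
    }

    // Downstream processing succeeded: forget the message.
    void ack(long msgId) {
        pending.remove(msgId);
    }

    // Downstream processing failed or timed out: replay the message.
    long fail(long msgId) {
        String payload = pending.remove(msgId);
        return emit(payload); // re-emit under a fresh id
    }

    int pendingCount() {
        return pending.size();
    }
}
```

An unacking topology is the degenerate case where emit() forgets the payload immediately, so a lost message can never be replayed.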
> On 12.02.2015, at 20:01, Daniel Schonfeld <[email protected]>
wrote:
>
> Doesn't dropping the messages coming from a spout that doesn't care about
ack/fail negate the 'at least once' attempt of Storm? I mean, doesn't that kind
of force you to make all your spouts ack/fail aware, where before you could
have gotten away without it?
>
> In other words: there is a chance that if the worker that died is the one
containing the spout, and the first bolt is located on another worker, then
technically at-least-once wasn't even attempted; the message simply fell to the
floor right away.
>
> —
> Reply to this email directly or view it on GitHub.
>
> Add Option to Config Message handling strategy when connection timeout
> ----------------------------------------------------------------------
>
> Key: STORM-329
> URL: https://issues.apache.org/jira/browse/STORM-329
> Project: Apache Storm
> Issue Type: Improvement
> Affects Versions: 0.9.2-incubating
> Reporter: Sean Zhong
> Priority: Minor
> Labels: Netty
> Attachments: storm-329.patch, worker-kill-recover3.jpg
>
>
> This is to address a [concern brought
> up|https://github.com/apache/incubator-storm/pull/103#issuecomment-43632986]
> during the work at STORM-297:
> {quote}
> [~revans2] wrote: Your logic makes sense to me on why these calls are
> blocking. My biggest concern around the blocking is in the case of a worker
> crashing. If a single worker crashes, this can block the entire topology from
> executing until that worker comes back up. In some cases I can see that being
> something that you would want. In other cases I can see speed being the
> primary concern, and some users would like to get partial data fast, rather
> than accurate data later.
> Could we make it configurable on a follow up JIRA where we can have a max
> limit to the buffering that is allowed, before we block, or throw data away
> (which is what zeromq does)?
> {quote}
> If a worker crashes suddenly, how should we handle the messages that were
> supposed to be delivered to that worker?
> 1. Should we buffer all messages indefinitely?
> 2. Should we block message sending until the connection is resumed?
> 3. Should we configure a buffer limit, try to buffer the messages first, and
> block once the limit is reached?
> 4. Should we neither block nor buffer too much, but instead drop the
> messages and rely on Storm's built-in failover mechanism?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)