I think the spout still checks once every topology.message.timeout.secs,
which means a tuple will time out between topology.message.timeout.secs and
2*topology.message.timeout.secs after being emitted.
The spout times out tuples by putting emitted message ids in a rotating map
with 2 buckets. A newly emitted message id is failed on the spout once the
map has rotated twice (see Storm's RotatingMap class).
The map rotates when the spout receives a tick tuple, which it does once
every topology.message.timeout.secs.
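Roughly, the mechanism looks like this (my own simplified sketch in Python, not Storm's actual Java code; class and method names here are illustrative):

```python
from collections import deque

class RotatingPendingMap:
    """Simplified 2-bucket rotating map, modeling the idea behind the
    spout's pending-tuple tracking (not the real implementation)."""

    def __init__(self, num_buckets=2):
        # Newest bucket at index 0, oldest at the end.
        self.buckets = deque({} for _ in range(num_buckets))

    def put(self, msg_id, tup):
        # Newly emitted tuples always go into the newest bucket.
        self.buckets[0][msg_id] = tup

    def ack(self, msg_id):
        # An ack removes the tuple from whichever bucket holds it.
        for bucket in self.buckets:
            if msg_id in bucket:
                del bucket[msg_id]
                return True
        return False

    def rotate(self):
        # Called on each tick tuple: drop the oldest bucket, fail
        # everything still in it, and add a fresh newest bucket.
        expired = self.buckets.pop()
        self.buckets.appendleft({})
        return expired  # these message ids get failed on the spout

pending = RotatingPendingMap()
pending.put("m1", "tuple-1")
pending.rotate()            # first rotation: m1 moves to the old bucket
pending.put("m2", "tuple-2")
expired = pending.rotate()  # second rotation since m1 was added: m1 fails
print(sorted(expired))      # ['m1']; m2 is still pending
```

With a rotation per tick and two buckets, an unacked tuple survives at least one full rotation interval and at most two, which is where the 1x-2x timeout window comes from.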
I think an average latency of 1.5x topology.message.timeout.secs is
expected. If tuples are added to the pending map evenly over time, a new
tuple lands on average 0.5x topology.message.timeout.secs into the current
bucket's life as "bucket number 1", so it waits the remaining 0.5x until
the first rotation and another 1x topology.message.timeout.secs until the
second rotation fails it: 1.5x in total.
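To sanity-check that 1.5x figure, here is a quick simulation (my own sketch, assuming the model above: unacked tuples emitted uniformly over time, a rotation every timeout interval, failure on the second rotation after emit):

```python
import math

T = 10.0     # topology.message.timeout.secs
N = 100_000  # unacked tuples, emitted uniformly over one tick interval

# Rotations happen at t = T, 2T, 3T, ...; a tuple emitted at time e is
# failed on the second rotation after e.
latencies = []
for i in range(N):
    emit = (i / N) * T
    first_rotation = math.floor(emit / T + 1) * T  # first rotation after emit
    fail_time = first_rotation + T                 # failed one rotation later
    latencies.append(fail_time - emit)

print(min(latencies))                    # just above 1*T
print(max(latencies))                    # up to 2*T
print(sum(latencies) / len(latencies))   # about 1.5*T
```

The spread between 1x and 2x matches the first paragraph, and the mean comes out at ~1.5x the timeout, which lines up with a ~15000ms fail() latency for a 10s timeout.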
2017-08-10 15:36 GMT+02:00 Bobby Evans <ev...@yahoo-inc.com>:
> What version of storm are you using? In older versions of storm the
> timeout check was done once every topology.message.timeout.secs. So that
> means nothing will timeout sooner than topology.message.timeout.secs, but
> could in the worst case be almost 2x that. If I remember correctly, in
> newer versions of storm we adjusted it to check more frequently, but I
> don't know the JIRA off the top of my head.
> - Bobby
> On Thursday, August 10, 2017, 8:06:51 AM CDT, preethini v <
> preethin...@gmail.com> wrote:
> I have a situation where the bolts ack, but the acker tasks fail (which is
> expected as per my logic).
> I am measuring the latency of the topology using timestamps in ack() and
> fail() methods.
> *ack() - latency ~ 100ms*
> *fail() - latency ~ 15000ms*
> I have set *topology.message.timeout.secs to 10*.
> Which means fail() should be called about 10s (10000ms) after emit, but
> the observed 15000ms leaves 15000 - 10000 = 5000ms unaccounted for (which
> is still a large value).
> *1. What are the reasons for such high latency before calling fail() ?*
> *2. What other time factors contribute to latency apart from timeout? Any