Hi, Kuzmenko:

250 is a small number: when the bolt cannot process tuples in time, the 
spout will stop emitting. But the Kafka consumer coordinator has a timeout 
parameter, typically 30s; if the spout has not fetched messages from Kafka 
within 30s, the spout, as a consumer, will be kicked out by the consumer 
coordinator, which assumes the consumer is dead. So even after the bolt 
processes the pending tuples, the spout cannot get messages anymore. You can 
confirm this by checking the worker log.
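If the slow bolt cannot be sped up, one workaround is to give the consumer more headroom before the coordinator evicts it. A rough sketch, assuming the standard Kafka consumer config keys (the broker list and group id below are placeholders, not values from this thread):

```java
import java.util.HashMap;
import java.util.Map;

// Consumer properties passed into the spout config. session.timeout.ms is
// the window in which the coordinator must see the consumer as alive;
// raising it from the ~30s default gives slow bolts more time before the
// spout is kicked out of the group.
Map<String, Object> kafkaProps = new HashMap<>();
kafkaProps.put("bootstrap.servers", "localhost:9092"); // placeholder broker list
kafkaProps.put("group.id", "my-topology-group");       // placeholder group id
kafkaProps.put("session.timeout.ms", 120000);          // e.g. 2 minutes instead of 30s
```

Whether this is enough depends on the Kafka client version in use; checking the worker log for coordinator rebalance messages, as noted above, is the way to confirm the eviction is actually happening.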



Josh
 
From: Igor Kuzmenko
Date: 2017-01-24 17:57
To: user
Subject: Re: Kafka spout stops emitting messages
Thanks for reply, Josh.
My maxUncommittedOffsets was 250, and increasing it helped me, but I 
still don't understand why the spout completely stopped emitting tuples. You 
said that eventually the spout will produce new tuples after the old ones 
are acked, but in my case it didn't.




On Tue, Jan 24, 2017 at 4:24 AM, [email protected] 
<[email protected]> wrote:
Hi, Kuzmenko:

Please pay attention to the value passed to setMaxUncommittedOffsets: if this 
number is too small, the spout may stop emitting until the pending tuples are 
acked by the downstream bolt. You can increase it to a larger number.
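For reference, the setting lives on the spout config builder. A minimal sketch, assuming the storm-kafka-client 1.0.x builder API mentioned later in this thread (kafkaProps, kafkaSpoutStreams, tuplesBuilder, and retryService are assumed to be set up elsewhere, and 10000 is just an illustrative value):

```java
// Sketch only: raise the cap on offsets that may be emitted but not yet
// committed. If the cap is too low, a burst of un-acked tuples stops the
// spout from polling Kafka at all.
KafkaSpoutConfig<String, String> spoutConfig =
    new KafkaSpoutConfig.Builder<String, String>(
            kafkaProps, kafkaSpoutStreams, tuplesBuilder, retryService)
        .setMaxUncommittedOffsets(10000) // larger than the 250 used here
        .build();
```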



Josh

From: Igor Kuzmenko
Date: 2017-01-24 02:28
To: user
Subject: Kafka spout stops emitting messages
Hello, I'm trying to upgrade my topology from the old Kafka spout (the 
storm-kafka project) to the new one (storm-kafka-client), version 1.0.1. I've 
configured the new spout to work with my topology. After deployment it 
processes and acks a few hundred tuples and then stops. The Kafka topic 
definitely has new messages, and in the Storm UI I can see the Kafka spout 
lag increasing. What could be the problem?
