If the Storm UI doesn't show the failures originating from specific bolts,
but instead only lists them as fails on the spout itself, you're looking
at timed-out tuples.  I'd try lowering your max spout pending and/or
increasing your message timeout value.  I'm not entirely sure how those
play in if you aren't anchoring tuples, however.
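
For reference, here's a minimal sketch of that tuning.  The class name, the
wiring, and the values (500 pending, 60 second timeout) are placeholders and
not recommendations for your particular topology:

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class TuningExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... builder.setSpout(...) / builder.setBolt(...) as in your topology ...

        Config conf = new Config();
        // Cap how many un-acked tuples each spout task may have in flight at once.
        conf.setMaxSpoutPending(500);
        // Give tuples more time to complete before Storm fails them back to the
        // spout (topology.message.timeout.secs, default 30 seconds).
        conf.setMessageTimeoutSecs(60);

        StormSubmitter.submitTopology("example-topology", conf, builder.createTopology());
    }
}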



On Wed, Oct 25, 2017 at 2:32 AM, Ambud Sharma <[email protected]>
wrote:

> Without anchoring, at-least-once semantics are not honored, i.e. if an
> event is lost, the Kafka spout doesn't replay it.
>
> On Oct 1, 2017 6:12 AM, "Yovav Waichman" <[email protected]>
> wrote:
>
>> Hi,
>>
>>
>>
>> We have been using Apache Storm for a couple of years, and everything was
>> fine until now.
>>
>> For our spout we are using “storm-kafka-0.9.4.jar”.
>>
>>
>>
>> Lately, we have seen our number of “Failed” events increase dramatically,
>> and currently almost 20% of our total events are marked as Failed.
>>
>>
>>
>> We tried investigating our topology logs, but we came up empty-handed.
>> Checking our DB logs also gave us no indication of heavy load on our
>> system.
>>
>> Moreover, our spout complete latency is 25.996 ms, which we thought ruled
>> out any timeouts.
>>
>>
>>
>> Lowering our max spout pending value produced a negative result.
>>
>> Since we are not using anchoring, at some point we considered adding it,
>> but we saw that the KafkaSpout handles failures by replaying them, so we
>> were not sure whether it was needed.
>>
>>
>>
>> It would be helpful if you could point us to where in the Storm logs we
>> can find the reason for these failures, whether it's an uncaught exception
>> or perhaps a timeout, since we are a bit blind at the moment.
>>
>>
>>
>> We would appreciate any help with that.
>>
>>
>>
>> Thanks in advance,
>>
>> Yovav
>>
>
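
Regarding the anchoring point above, here's a minimal sketch of an anchored
bolt, assuming the backtype.storm API that ships alongside storm-kafka 0.9.x
(the field name and the toUpperCase transformation are just placeholders).
An explicitly failed tuple is reported against the bolt that failed it, and
the Kafka spout can then replay the message instead of dropping it:

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class AnchoredBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            // Emitting with the input tuple as the anchor ties the new tuple
            // into the same tuple tree, so downstream failures reach the spout.
            collector.emit(input, new Values(input.getString(0).toUpperCase()));
            collector.ack(input);
        } catch (Exception e) {
            // An explicit fail shows up against this bolt in the UI and lets
            // the spout replay the message.
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}

Without the input tuple as the first argument to emit(), the emitted tuple is
unanchored and its failure never reaches the spout, which is the at-least-once
caveat Ambud describes.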
