Wondering if there is a message size issue that is blocking the data
transfer, either from kafka to the spout, or from the spout to the bolts via
any of the below params:
config.put(Config.TOPOLOGY_RECEIVER_BUFFER_SIZE,32);
config.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE,64);
config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE,16384);
config.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE,16384);
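Also, if it really is a message size problem on the kafka side, it may be
worth checking the spout's fetch/buffer sizes against your largest message.
A rough sketch of what I mean, assuming the stock storm-kafka
storm.kafka.SpoutConfig (the brokerHosts/topic/zkRoot/id values below are
just placeholders, and the field names may differ in your spout version):

SpoutConfig spoutConfig = new SpoutConfig(
        brokerHosts,           // your existing BrokerHosts/ZkHosts instance
        "my-topic",            // placeholder topic name
        "/kafka-offsets",      // placeholder zk root for offset storage
        "my-spout-id");        // placeholder consumer id
// both sizes should cover the largest message the broker can return,
// otherwise a fetch can come back empty and the spout looks stuck
spoutConfig.fetchSizeBytes = 2 * 1024 * 1024;
spoutConfig.bufferSizeBytes = 2 * 1024 * 1024;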
I have seen this in my dev environment, and I could do a hack to clear it
up, but that is not applicable in production.
Maybe someone who has been running this in prod for a while can help.
Regards
Sai
On Tue, Oct 21, 2014 at 2:08 PM, Vladi Feigin <[email protected]> wrote:
> One of the two workers doesn't read the data from kafka at all. In this worker
> all downstream bolts show 0 for emitted / transferred in the UI.
> This happens after a few hours of successful running. We don't use acks in
> this topology.
> Vladi
>
> On Tue, Oct 21, 2014 at 11:55 PM, saiprasad mishra <
> [email protected]> wrote:
>
>> Is the topology not reading from kafka at all, and not marking the offset
>> at all?
>> Recently I ran into a similar issue, which was caused by buggy code in my
>> topology: the ack was happening twice instead of once, once in the try
>> block and once in the finally block inside the execute method. This kept
>> the messages in the pending state, and the spout stopped emitting because
>> the pending messages were never completed. Wondering if you have a similar
>> issue in your code (rough sketch of the pattern below).
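>> Sketch of the buggy pattern (illustrative only; process() is a made-up
>> placeholder for the business logic):
>>
>> public void execute(Tuple tuple) {
>>     try {
>>         process(tuple);
>>         collector.ack(tuple);   // first ack
>>     } finally {
>>         collector.ack(tuple);   // second ack on the same tuple - the bug
>>     }
>> }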
>>
>> On Tue, Oct 21, 2014 at 1:50 PM, Vladi Feigin <[email protected]> wrote:
>>
>>> Hi All,
>>>
>>> We're experiencing very strange topology behavior: after a few days,
>>> sometimes hours (it looks like during peak load), the spouts get stuck.
>>> The data stops streaming and we lose a lot of data.
>>> We read the data from kafka (using the kafka spout). Storm version is 0.8.2.
>>> Has anyone seen something similar? What could be the problem?
>>>
>>> Appreciate any help you can provide!
>>> Thank you in advance
>>> Vladi
>>>
>>>
>>
>