Adding the snippets from the spout and bolts. fullMessage is the SQS message.
Spout
================
spoutOutputCollector.emit(
        new Values(fullMessage, fullMessage.getMessageId()),
        fullMessage.getReceiptHandle());
DB Bolt
======
outputCollector.emit(tuple, new Values(fullMessage));
outputCollector.ack(tuple);
S3 Bolt
========
outputCollector.emit(tuple, new Values(fullMessage));
outputCollector.ack(tuple);
SQS Delete Bolt
============
outputCollector.ack(tuple);
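
To make the failure mode concrete, here is a self-contained sketch (no Storm dependency; class and method names are my own, not Storm's API) of the bookkeeping the spout-side acker effectively does: every emitted message id (the SQS receipt handle above) is tracked until the tuple tree is fully acked, and any id still pending after the topology timeout is reported as a spout failure even though no bolt ever logged an error.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of Storm's pending-tuple timeout behavior.
public class PendingTracker {
    private final Map<String, Long> pending = new HashMap<>();
    private final long timeoutMillis;

    public PendingTracker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called when the spout emits a tuple with this message id
    // (the SQS receipt handle in the snippets above).
    public void emitted(String messageId, long nowMillis) {
        pending.put(messageId, nowMillis);
    }

    // Called when the last bolt in the anchored chain acks the tuple,
    // completing the tuple tree.
    public void acked(String messageId) {
        pending.remove(messageId);
    }

    // An id still pending past the timeout counts as a spout failure,
    // with no corresponding failure in any bolt.
    public boolean hasTimedOut(String messageId, long nowMillis) {
        Long emittedAt = pending.get(messageId);
        return emittedAt != null && nowMillis - emittedAt > timeoutMillis;
    }
}
```

With the default 30-second timeout, a tuple that sits in a bolt's queue for longer than that under heavy load times out here; capping in-flight tuples with topology.max.spout.pending is the usual way to keep queues short enough to avoid this.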
On Sun, Aug 7, 2016 at 1:06 PM, pradeep s <[email protected]>
wrote:
> Time taken in the bolts is very low; the total time across all bolts
> comes to less than 5 seconds.
> Do I need to override the max spout pending parameter?
> If I process a small load, I don't see any issue.
>
> On Sat, Aug 6, 2016 at 11:20 AM, Andrew Montalenti <[email protected]>
> wrote:
>
>> Most commonly this is because your downstream components are failing to
>> ack your tuples. If a tuple goes unacked for the topology timeout period,
>> then it will fail at the spout.
>>
>> To debug this, make sure you get an ack for every tuple emitted from the
>> spout at every bolt processing stage. Also, you may want to double-check
>> your tuple anchoring -- this can sometimes be caused by improperly
>> anchored or unanchored tuples as well.
>>
>> --
>> Andrew Montalenti | CTO, Parse.ly
>>
>> On Sat, Aug 6, 2016 at 1:22 PM, pradeep s <[email protected]>
>> wrote:
>>
>>> Hi,
>>> I have a topology that reads messages from Amazon SQS and has three
>>> bolts. One bolt writes to an RDS relational DB. Its output is sent to
>>> an S3 write bolt. After that, an SQS delete bolt deletes the message
>>> from SQS.
>>> When I test with a bigger load, I see spout failures in the Storm UI,
>>> but there are no failures in any of the bolts, and no failures in the
>>> log files.
>>> Any suggestions on the reason for the spout failures and how to debug
>>> this? The topology timeout is set at the default 30 seconds.
>>> Thanks
>>> Pradeep S
>>>
>>
>>
>