I have set up a topology to load a very large volume of data. I recently
loaded about 60K records and found that there are some failed acks on a
few spouts but none on the bolts. Storm completed the run and appears
stable. I initially started with a smaller amount of data, about 500
records, which loaded successfully, and then increased to 60K, which is
where I saw the failed acks.

Questions:
1. Does that mean the spout was not able to read some messages from
Kafka? Since there are no failed acks on the bolts per the UI, whatever
messages were received appear to have been successfully processed by the
bolts. (A simplified sketch of how I wire the topology is below, after
the questions.)
2. How do I interpret the failed-ack numbers, e.g. acked: 315500 and
failed: 2980? Does this mean that 2980 records failed to be processed?
If that is the case, how do I avoid it, because I would be losing 2980
records. (See the spout sketch below.)
3. I also see that a few of the records failed to be inserted into the
Cassandra database. What is the best way to reprocess that data? It is
quite difficult to do through the batch process I am currently running.
(See the bolt sketch below.)
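For context, here is roughly how I wire the topology, simplified and
with placeholder ZK/topic names rather than my real config. I am on the
old storm-kafka API (backtype.storm / storm.kafka). My understanding is
that the message timeout is what turns slow tuples into failed acks; the
CassandraWriterBolt is sketched further down under question 3.

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class LoadTopology {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper host and topic, not my real values.
        BrokerHosts hosts = new ZkHosts("zk1:2181");
        SpoutConfig spoutConfig =
                new SpoutConfig(hosts, "load-topic", "/kafka-spout", "load-spout");

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 4);
        builder.setBolt("cassandra-bolt", new CassandraWriterBolt(), 8)
               .shuffleGrouping("kafka-spout");

        Config conf = new Config();
        // Tuples not fully acked within this window are reported as "failed".
        conf.setMessageTimeoutSecs(60);
        StormSubmitter.submitTopology("load-topology", conf, builder.createTopology());
    }
}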
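On question 2, my understanding is that "failed" counts fail/timeout
callbacks, not necessarily lost records, as long as the spout re-emits
on fail(). Here is a minimal sketch of that pattern, purely
illustrative (it is not my actual spout, and readNextRecord() is a stub;
the real KafkaSpout tracks offsets internally):

import java.util.HashMap;
import java.util.Map;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class ReplayingSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final Map<Object, String> pending = new HashMap<Object, String>();
    private long nextId = 0;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        String record = readNextRecord(); // stub standing in for the real source
        if (record != null) {
            Object msgId = nextId++;
            pending.put(msgId, record);
            collector.emit(new Values(record), msgId); // anchored emit with a message id
        }
    }

    @Override
    public void ack(Object msgId) {
        pending.remove(msgId); // shows up as "acked" in the UI
    }

    @Override
    public void fail(Object msgId) {
        // Shows up as "failed" in the UI; re-emitting means the record
        // is retried rather than lost.
        String record = pending.get(msgId);
        if (record != null) {
            collector.emit(new Values(record), msgId);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("record"));
    }

    private String readNextRecord() { return null; } // stub for the sketch
}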
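On question 3, what I am considering is failing the tuple whenever the
Cassandra insert throws, so Storm replays it instead of my batch job
having to re-run. A simplified sketch of the bolt follows;
insertIntoCassandra() is a placeholder for my actual driver call:

import java.util.Map;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class CassandraWriterBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            insertIntoCassandra(tuple); // placeholder for the real session.execute(...)
            collector.ack(tuple);
        } catch (Exception e) {
            // Failing instead of acking makes the spout's fail() fire,
            // so the record is replayed rather than silently dropped.
            collector.fail(tuple);
        }
    }

    private void insertIntoCassandra(Tuple tuple) {
        // Cassandra driver call elided in this sketch.
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt, no output stream.
    }
}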

LMK, thanks.

--
Kushan Maskey
817.403.7500
