Thanks Srinath for your response. I have not looked into configuring the buffers. If you can point me in the direction where I can get more information on it, that would be helpful. AFAIK the records from the bulk load arrive at a mostly constant rate. When I am just doing one message at a time it works fine. Also, about the failed ack number 2980: does that really mean that many records failed to ack?
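For anyone following along, these are the kinds of settings I understand Srinath may be referring to. A minimal storm.yaml sketch (the values here are illustrative starting points, not recommendations; topology.max.spout.pending in particular caps in-flight tuples per spout task and is the usual first knob when spouts outrun bolts):

```yaml
# Cap on unacked tuples per spout task; throttles emission when bolts fall behind
topology.max.spout.pending: 5000

# Per-executor incoming queue size (number of tuples, must be a power of 2)
topology.executor.receive.buffer.size: 16384

# Per-executor outgoing queue size
topology.executor.send.buffer.size: 16384

# Per-worker transfer queue size (batches of tuples bound for other workers)
topology.transfer.buffer.size: 32
```

The same keys can also be set programmatically on the Config object when submitting the topology, which keeps them per-topology instead of cluster-wide.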
--
Kushan Maskey
817.403.7500

On Mon, Aug 25, 2014 at 7:46 PM, Srinath C <[email protected]> wrote:

> I would suspect that at some point the rate at which the spouts emitted
> exceeded the rate at which the bolts could process. Maybe you could look at
> configuring the buffers (if you haven't yet done that). Do your records get
> processed at a constant rate?
>
>
> On Tue, Aug 26, 2014 at 4:12 AM, Kushan Maskey <[email protected]> wrote:
>
>> I have set up a topology to load a very large volume of data. Recently I
>> loaded about 60K records and found that there are some failed acks on a
>> few spouts, but none on the bolts. Storm completed running and seems
>> stable. Initially I started with a smaller amount of data, about 500
>> records, successfully, and then increased up to 60K, where I saw the
>> failed acks.
>>
>> Questions:
>> 1. Does that mean that the spout was not able to read some messages from
>> Kafka? Since there are no failed acks on the bolts per the UI, whatever
>> messages were received have been successfully processed by the bolts.
>> 2. How do I interpret failed ack numbers like acked: 315500 and
>> failed: 2980? Does this mean that 2980 records failed to be processed? If
>> that is the case, how do I avoid this from happening, because I will be
>> losing 2980 records.
>> 3. I also see that a few of the records failed to be inserted into the
>> Cassandra database. What is the best way to reprocess that data, as it is
>> quite difficult to do through the batch process that I am currently
>> running?
>>
>> LMK, thanks.
>>
>> --
>> Kushan Maskey
>> 817.403.7500
