Dear Storm experts,

I am using Storm 0.9.2-incubating in local mode.
I have run into a strange problem. My topology is:

    KafkaSpout -> SplitBolt -> StoreBolt

I wrote a test program that emits a message every millisecond. The system works fine with small input, but after I send more than 3000 messages through the KafkaSpout, the bolts seem to stop receiving the values emitted by the spout, and the log becomes nothing but:

    268150 [Thread-25-spout] INFO backtype.storm.daemon.task - Emitting: spout default [14, at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:914)]
    268150 [Thread-25-spout] INFO backtype.storm.daemon.task - Emitting: spout __ack_init [-3381955863165195850 3298380998172808616 10]
    2014-10-11 10:52:22,974 DEBUG [kafka.consumer.PartitionTopicInfo] - reset consume offset of 14:0: fetched offset = 69009: consumed offset = 69012 to 69012 (consumer)--> at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
    268151 [Thread-25-spout] INFO backtype.storm.daemon.task - Emitting: spout default [14, at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)]
    268151 [Thread-25-spout] INFO backtype.storm.daemon.task - Emitting: spout __ack_init [2346958040565628056 -8239789527920435743 10]

Only __ack_init tuples come from the KafkaSpout; there are no __ack_ack tuples from __acker, split, or store. There are no error messages about the status of the bolts, either: the logs simply stop showing "__ack_ack" at a certain point, with no error displayed.

One thing I forgot to mention last time: after I restart the topology, the messages from the previous session do get processed, i.e. the messages that had received an __ack_init from the spout.

My emit code in the KafkaSpout is:

    mCollector.emit(new Values(topicId, value));

Has anyone encountered the same problem, or does anyone have any thoughts on what might be wrong?

Thanks,
Jerry
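P.S. My current guess is that un-acked tuples are piling up until some in-flight cap is hit, at which point the spout stops emitting. The sketch below is my own plain-Java illustration of that idea (not Storm internals; the class and names are mine, and the cap stands in for something like topology.max.spout.pending):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: a "spout" that caps the number of in-flight
// (un-acked) tuples. While acks arrive, emission proceeds; once acks stop,
// emission stalls as soon as the cap is reached -- which matches what I see.
public class PendingCapDemo {
    static final int MAX_PENDING = 3; // hypothetical cap

    static class Spout {
        final Deque<Integer> pending = new ArrayDeque<>();
        int emitted = 0;

        // Emit only while the number of un-acked tuples is under the cap.
        boolean tryEmit(int msgId) {
            if (pending.size() >= MAX_PENDING) return false; // stalled
            pending.add(msgId);
            emitted++;
            return true;
        }

        // An ack frees one in-flight slot.
        void ack(int msgId) {
            pending.remove(Integer.valueOf(msgId));
        }
    }

    public static void main(String[] args) {
        Spout spout = new Spout();
        // Phase 1: downstream acks normally -> every emit succeeds.
        for (int i = 0; i < 10; i++) {
            spout.tryEmit(i);
            spout.ack(i);
        }
        // Phase 2: acks stop (e.g. a bolt blocks) -> the spout stalls
        // once MAX_PENDING tuples are in flight.
        int stalledAt = -1;
        for (int i = 10; i < 20; i++) {
            if (!spout.tryEmit(i)) { stalledAt = i; break; }
        }
        System.out.println("emitted=" + spout.emitted + " stalledAt=" + stalledAt);
    }
}
```

If this picture is right, the real question becomes why SplitBolt or StoreBolt stops acking after ~3000 messages.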
