Re: kafka “stops working” after a large message is enqueued

2016-02-05 Thread Ewen Cheslack-Postava
The default max message size is 1 MB. You'll probably need to increase a few settings -- the max message size on the broker, either per topic (max.message.bytes) or broker-wide (message.max.bytes), max.partition.fetch.bytes on the new consumer, max.request.size on the producer, etc. You need to make sure all of the producer, broker, and consumer settings agree so the message can make it all the way through.
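For illustration, a minimal sketch of the client-side settings, assuming the 0.9 Java clients and a hypothetical 80 MB ceiling; the broker side (message.max.bytes, plus replica.fetch.max.bytes on replicated topics) still has to be raised separately in the broker config:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class LargeMessageConfig {
        // Hypothetical ceiling; use a value >= your largest message.
        static final int MAX_BYTES = 80 * 1024 * 1024;

        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092");
            p.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            p.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            p.put("max.request.size", Integer.toString(MAX_BYTES)); // largest request the producer may send
            p.put("buffer.memory", Integer.toString(MAX_BYTES));    // the whole record is buffered in memory

            Properties c = new Properties();
            c.put("bootstrap.servers", "localhost:9092");
            c.put("group.id", "large-message-demo");
            c.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            c.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            // Must be >= the largest message or the new consumer stalls on it.
            c.put("max.partition.fetch.bytes", Integer.toString(MAX_BYTES));

            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(p);
            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(c);
            // ... produce and consume as usual, then close both clients ...
            producer.close();
            consumer.close();
        }
    }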

Re: kafka “stops working” after a large message is enqueued

2016-02-03 Thread Tech Bolek
Deleted the topic and recreated it (with max bytes set), but that did not help. What helped, though, was upping the Java heap size. I monitored the consumer with jstat and noticed two full garbage collection attempts right after publishing the large message. After that the consumer appeared dormant. Upping the heap size brought it back.
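For what it's worth, a small sketch of the kind of sanity check that would have caught this; the 2x headroom factor is just an assumed rule of thumb, not anything Kafka prescribes:

    public class HeapCheck {
        public static void main(String[] args) {
            long maxHeap = Runtime.getRuntime().maxMemory(); // roughly the -Xmx value
            long fetchBytes = 80L * 1024 * 1024;             // assumed max.partition.fetch.bytes
            System.out.printf("max heap: %d MB, fetch buffer: %d MB%n",
                    maxHeap >> 20, fetchBytes >> 20);
            if (maxHeap < 2 * fetchBytes) {
                // A large fetch plus the deserialized record can trigger
                // back-to-back full GCs like the ones jstat showed here.
                System.out.println("WARNING: heap headroom looks too small");
            }
        }
    }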

Re: kafka “stops working” after a large message is enqueued

2016-02-02 Thread Joe Lawson
Make sure the topic is created after the max message bytes setting is in place. On Feb 2, 2016 9:04 PM, "Tech Bolek" wrote: > I'm running kafka_2.11-0.9.0.0 and a java-based producer/consumer. With > messages ~70 KB everything works fine. However, after the producer enqueues > a larger, 70 MB message, kafka appears to stop delivering the messages to > the consumer.
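Concretely, a per-topic override can be applied at creation time with the CLI that ships with 0.9; a sketch with a hypothetical topic name and limit:

    # Hypothetical topic name and size; apply the override at creation time.
    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
      --topic big-messages --partitions 1 --replication-factor 1 \
      --config max.message.bytes=80000000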

kafka “stops working” after a large message is enqueued

2016-02-02 Thread Tech Bolek
I'm running kafka_2.11-0.9.0.0 and a java-based producer/consumer. With messages ~70 KB everything works fine. However, after the producer enqueues a larger, 70 MB message, kafka appears to stop delivering the messages to the consumer. I.e. not only is the large message not delivered, but subsequent messages are no longer delivered either.
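For context, the consumer here is essentially the stock 0.9 poll loop; with the default max.partition.fetch.bytes (~1 MB), poll() returns nothing once the 70 MB record sits at the head of the partition, which matches the "dormant" behavior described in the replies above. A minimal sketch with a hypothetical topic name:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class StuckConsumerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "demo");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // Default max.partition.fetch.bytes is ~1 MB: a 70 MB record at the
            // head of the partition means poll() returns nothing from then on.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("big-messages"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> r : records)
                        System.out.printf("offset=%d size=%d%n", r.offset(), r.value().length());
                }
            }
        }
    }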