Well, that depends on how much memory is available to your Kafka
consumer on the machine where it is running.
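
For reference, here is a minimal sketch of the consumer-side settings
that bound that memory use. It assumes the 0.7-era high-level consumer
API; the property names ("fetch.size", "groupid"), the ZooKeeper
address, and the group name are assumptions for illustration, so check
them against your version:

  import java.util.Properties;
  import kafka.consumer.Consumer;
  import kafka.consumer.ConsumerConfig;
  import kafka.javaapi.consumer.ConsumerConnector;

  Properties props = new Properties();
  props.put("zk.connect", "localhost:2181");  // assumed ZooKeeper address
  props.put("groupid", "log-consumers");      // hypothetical group name
  // Upper bound on the bytes fetched per request; each fetched chunk is
  // held in the consumer JVM's heap, so this (times the number of
  // partitions being consumed) must fit in available memory.
  props.put("fetch.size", String.valueOf(2 * 1024 * 1024));

  ConsumerConfig config = new ConsumerConfig(props);
  ConsumerConnector connector =
      Consumer.createJavaConsumerConnector(config);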

Thanks,
Neha

On Wed, Dec 21, 2011 at 1:23 PM, S Ahmed <sahmed1...@gmail.com> wrote:
> What would be an upper bound, then? I.e., 100K should be OK, but what
> shouldn't be? :)
>
> On Wed, Dec 21, 2011 at 4:16 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
>
>> >> Was Kafka designed for a specific message size range?
>>
>> The Kafka consumer reads each message from the socket into memory in
>> one piece. If a message is large enough to trigger an
>> OutOfMemoryError, the consumer cannot return it, or any messages
>> behind it, from the socket byte buffer. This could be fixed by giving
>> the Kafka consumer a 'streaming' API that reads such large messages
>> in a piecemeal fashion, but that is tricky and we don't have the
>> feature yet.
>>
>> To keep your Kafka consumer from getting into a bad state because of
>> an oversized message, set "max.message.size" on your producer to the
>> largest message size you expect. Any message larger than that is
>> rejected before it enters the Kafka cluster and hence never reaches a
>> Kafka consumer.
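>>
>> Here is a minimal sketch of that producer-side setting, assuming the
>> 0.7-era javaapi producer; the exact class and property names, the
>> ZooKeeper address, and the topic name are my assumptions here, so
>> verify them against your version:
>>
>>   import java.util.Properties;
>>   import kafka.javaapi.producer.Producer;
>>   import kafka.javaapi.producer.ProducerData;
>>   import kafka.producer.ProducerConfig;
>>
>>   Properties props = new Properties();
>>   props.put("zk.connect", "localhost:2181");  // assumed ZooKeeper address
>>   props.put("serializer.class", "kafka.serializer.StringEncoder");
>>   // Largest message the producer will accept; anything bigger is
>>   // rejected on the producer side and never reaches the cluster.
>>   props.put("max.message.size", String.valueOf(1024 * 1024));
>>
>>   Producer<String, String> producer =
>>       new Producer<String, String>(new ProducerConfig(props));
>>   producer.send(new ProducerData<String, String>("logs", "a log line"));
>>   producer.close();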
>>
>> >> Seeing as it is used to aggregate log messages, is it safe to say
>> >> message sizes of 2-100K are reasonable and won't cause any issues?
>>
>> 100K message sizes should work fine.
>>
>> Thanks,
>> Neha
>>
>> On Wed, Dec 21, 2011 at 12:15 PM, S Ahmed <sahmed1...@gmail.com> wrote:
>> > Was Kafka designed for a specific message size range?
>> >
>> > Seeing as it is used to aggregate log messages, is it safe to say message
>> > sizes of 2-100K are reasonable and won't cause any issues?
>>
