Vipul Singh created KAFKA-4762:
----------------------------------

             Summary: Consumer throwing RecordTooLargeException even when 
messages are not that large
                 Key: KAFKA-4762
                 URL: https://issues.apache.org/jira/browse/KAFKA-4762
             Project: Kafka
          Issue Type: Bug
            Reporter: Vipul Singh


We were recently hit by a strange error. 
Before going any further, some context on our service setup: we have a 
producer that produces messages no larger than 256 KB (we have an 
explicit check for this on the producer side), and on the consumer side we have 
a fetch limit of 512 KB (max.partition.fetch.bytes is set to 524288 bytes).
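For reference, the relevant part of the consumer configuration looks roughly like this (a sketch; max.partition.fetch.bytes is a standard Kafka consumer property, while the broker address and group id below are hypothetical placeholders):

{quote}
# consumer.properties (sketch)
bootstrap.servers=broker1:9092
group.id=our-consumer-group
# Per-partition fetch cap: 524288 bytes = 512 KB. A message the broker
# cannot fit under this cap can never be returned to the consumer,
# which is what the RecordTooLargeException below complains about.
max.partition.fetch.bytes=524288
{quote}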

Recently our client started to see this error:

{quote}
org.apache.kafka.common.errors.RecordTooLargeException: There are some messages 
at [Partition=Offset]: {topic_name-0=9925080405} whose size is larger than the 
fetch size 524288 and hence cannot be ever returned. Increase the fetch size, 
or decrease the maximum message size the broker will allow.
{quote}

We tried consuming the messages with another consumer, without any 
max.partition.fetch.bytes limit, and it consumed them fine. The messages were 
small and did not appear to be larger than 256 KB.

We took a log dump, and the message sizes in the log looked fine.

Has anyone seen something similar? Any pointers on how to troubleshoot this further?

Please note: to work around this issue, we deployed a new consumer without the 
max.partition.fetch.bytes limit, and it worked fine.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
