[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15696670#comment-15696670
 ] 

Srinivas Dhruvakumar commented on KAFKA-4430:
---------------------------------------------

I inspected the payloads and looked at the producer code to see what it was 
sending. One more thing I tested: when I set compression to none in the 
MirrorMaker producer settings, the error was no longer reproduced.
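For reference, the change described above presumably corresponds to the
following line in the MirrorMaker producer properties file (the property name
is the standard Kafka producer setting; the file name is an assumption):

```properties
# producer.properties passed to MirrorMaker via --producer.config
# Disabling compression is the workaround described in this comment;
# the default for this setting is already "none".
compression.type=none
```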

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4430
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4430
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.9.0.1
>         Environment: Production 
>            Reporter: Srinivas Dhruvakumar
>              Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka side. MirrorMaker producer settings: batch.size is set to 
> 500 KB, max.request.size is set to 1 MB, acks is 0, and compression is 
> gzip. 
> However, on the Aggregate Kafka I get the following exception: 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to 
> send messages smaller than 1 MB. Are the messages getting dropped? Under 
> what circumstances does this error occur?
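A sketch of the MirrorMaker producer configuration described in the issue, with
byte values assumed from the prose (500 KB and 1 MB written out as bytes). Note
that with a compression codec, a 0.9 broker treats the whole compressed batch
as a single wrapper message and checks that wrapper against max.message.bytes,
so a batch near batch.size can trip the limit even when every individual record
is well under 1 MB; this is consistent with the error disappearing once
compression is set to none.

```properties
# MirrorMaker producer settings as described in the report (values assumed):
batch.size=524288          # 500 KB batches
max.request.size=1048576   # 1 MB request cap
acks=0                     # fire-and-forget, matching "ack=0" in the log line
compression.type=gzip      # the whole compressed batch becomes one wrapper message

# Broker side (DC and AGG clusters), per the report:
# max.message.bytes=1048576
```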



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
