[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685848#comment-15685848
 ] 

huxi commented on KAFKA-4430:
-----------------------------

Check the broker config 'message.max.bytes' on the Aggregate Kafka cluster. The 
default value is 1000012 bytes, which is slightly less than 1 MiB. Try 
increasing this value to see if that resolves the issue.
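As a sketch, the suggestion above corresponds to a broker-side override like the
following (the 2 MiB value is an arbitrary example, not a recommendation):

```properties
# server.properties on the Aggregate cluster brokers
# Default is 1000012 bytes (just under 1 MiB); raise it above the largest
# (compressed) batch the MirrorMaker producer can send.
message.max.bytes=2097152

# Keep replication consistent: followers must be able to fetch messages
# at least as large as message.max.bytes.
replica.fetch.max.bytes=2097152
```

Note the per-topic override of the same limit is the topic config
'max.message.bytes', which can be set without touching the broker default.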

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4430
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4430
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.9.0.1
>         Environment: Production 
>            Reporter: Srinivas Dhruvakumar
>              Labels: newbie
>
> I have a setup as below 
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here are the settings. I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings: batch.size is set to 
> 500 KB, max.request.size is set to 1 MB, acks is 0, and compression is 
> gzip. 
> However, on the Aggregate Kafka cluster I get the following exception: 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to 
> send messages smaller than 1 MB. Are the messages getting dropped? Under 
> what circumstances does this error occur?
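For reference, the MirrorMaker producer settings described in the report would
correspond roughly to the following producer properties (property names per the
new-producer API; the values are those stated above):

```properties
# MirrorMaker producer.properties (sketch of the reported settings)
batch.size=500000          # ~500 KB batches
max.request.size=1000000   # 1 MB cap on a single produce request
acks=0                     # fire-and-forget, so client sees no broker error
compression.type=gzip
```

Note that with acks=0 the producer does not wait for a broker response, so a
size rejection surfaces only in the broker log, not on the client side.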



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
