[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2017-02-10 Thread Jozef Koval (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861174#comment-15861174 ]

Jozef Koval commented on KAFKA-4430:


I think this issue has been resolved.

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>  Labels: newbie
>
> I have the following setup:
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> These are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka clusters. Mirrormaker producer settings: batch.size is 500 KB, 
> max.request.size is 1 MB, acks=0, and compression is gzip.
> However, on the Aggregate Kafka I get the following exception: 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured Mirrormaker to 
> send messages smaller than 1 MB. Are messages being dropped? Under what 
> circumstances does this error occur?
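To make the reported configuration concrete, here is a minimal sketch in plain Python. The dictionary keys mirror the Kafka config names from the issue description and are purely illustrative, not a client API:

```python
# Illustrative sketch of the configuration described in this issue.
# Plain dictionaries mirroring Kafka config names, not a client API.
mirrormaker_producer = {
    "batch.size": 500_000,          # 500 KB
    "max.request.size": 1_000_000,  # 1 MB
    "acks": "0",
    "compression.type": "gzip",
}
agg_broker = {"message.max.bytes": 1_000_000}  # 1 MB on the AGG cluster

# The reporter's expectation: nothing the producer sends should exceed
# what the broker accepts.
assert mirrormaker_producer["max.request.size"] <= agg_broker["message.max.bytes"]
print("producer cap fits broker cap")
```

On paper these limits line up, which is why the exception on the AGG side is surprising; the rest of the thread works out why it happens anyway.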



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)



2016-11-25 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15696670#comment-15696670 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

I inspected the payloads and looked at the producer code to see what it was 
sending. One more thing I tested: I set compression to none in the Mirrormaker 
producer settings, and the error was no longer reproduced.







2016-11-23 Thread huxi (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15692402#comment-15692402 ]

huxi commented on KAFKA-4430:
-

Great find. Why was the message compressed twice, and how did you find that out?







2016-11-23 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15691773#comment-15691773 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

I figured out the issue. The payload was already gzipped, so it was getting 
compressed twice, and the behavior was erratic. Thanks.
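As a sanity check of the double-compression explanation, a small Python sketch shows that gzipping data that is already compressed (and therefore effectively incompressible) only grows it, so a payload near the 1 MB limit can cross it on the second pass:

```python
import gzip
import os

# Random bytes stand in for an already-compressed (incompressible) payload.
payload = os.urandom(900_000)   # ~900 KB, under the 1 MB limit
once = gzip.compress(payload)   # first gzip pass
twice = gzip.compress(once)     # second, accidental gzip pass

print(len(payload), len(once), len(twice))
# Each pass over incompressible data adds gzip framing overhead instead of
# shrinking anything, so the doubly compressed payload is the largest.
assert len(payload) < len(once) < len(twice)
```

This matches the erratic behavior seen in the thread: whether a given message tips over the broker limit depends on how close its first-pass size already was to 1 MB.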







2016-11-23 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15690723#comment-15690723 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

I am a tad confused: why would the message size be bigger than 1 MB on the 
Kafka AGG if the Mirrormaker batch.size is 500 KB and max.request.size is 
1 MB? max.request.size checks the serialized message size, and I set the 
batch size with compression taken into account. In the worst case, with a 
compression ratio of 1, the size would still be under 1 MB.







2016-11-22 Thread huxi (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15689102#comment-15689102 ]

huxi commented on KAFKA-4430:
-

A message larger than batch.size but smaller than max.request.size is 
acceptable as long as it is also smaller than message.max.bytes (broker 
config) or max.message.bytes (topic config). With your current configuration, 
you should not see any errors after setting the topic-level max.message.bytes 
to 2 MB.
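The size checks described here can be sketched in plain Python with the values from this thread (variable names mirror the config names and are illustrative):

```python
# Config values from this thread (illustrative):
batch_size = 500_000           # producer batch.size
max_request_size = 1_000_000   # producer max.request.size
max_message_bytes = 2_000_000  # topic-level max.message.bytes after the change

# A single message bigger than batch.size is still sent (in its own batch)
# and accepted, because only max.request.size and the broker/topic limit
# constrain an individual message.
message_size = 800_000
accepted = (message_size <= max_request_size
            and message_size <= max_message_bytes)
print(accepted)  # True
```

In other words, batch.size shapes batching behavior but is not a hard per-message limit; the broker/topic limit is what throws MessageSizeTooLargeException.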







2016-11-22 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15688708#comment-15688708 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

Previous configuration: Kafka AGG -> message.max.bytes 1 MB; Topic1 -> 
max.message.bytes 1 MB. Mirrormaker -> batch.size 500 KB, max.request.size 
1 MB, compression gzip, acks=0.
Result: seeing errors only on topic1 on Kafka AGG. The rest of the topics 
(topic0, topic2, ..., topicN) are working fine.

Current configuration: Kafka AGG -> message.max.bytes 1 MB; Topic1 -> 
max.message.bytes 2 MB. Mirrormaker -> batch.size 500 KB, max.request.size 
1 MB, compression gzip, acks=0.
Result: Topic1 is no longer producing any errors on Kafka AGG. Messages on 
Topic1 are <= 100 KB in size.

This looks like a bug. Why would Mirrormaker send a message greater than 
max.request.size or batch.size?








2016-11-22 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15688640#comment-15688640 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

Thanks, I will try the above steps.







2016-11-22 Thread huxi (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15688610#comment-15688610 ]

huxi commented on KAFKA-4430:
-

It seems no debug statements can be enabled, but you could fire up JConsole 
and check the 'BytesRejectedPerSec' metric, which records the total bytes of 
rejected messages for a given topic. Also, could you try setting 
'message.max.bytes' to a much larger value, say 2 MB, to see if the problem 
still exists?







2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685998#comment-15685998 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

I am still a noob. How do I confirm the above? Are there any debug statements 
I can enable? Thanks, and sorry for the inconvenience.







2016-11-21 Thread huxi (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685911#comment-15685911 ]

huxi commented on KAFKA-4430:
-

Maybe you could confirm this: complete serialized size of the message + 12 <= 
message.max.bytes. (The 12 bytes are the per-message log overhead: an 8-byte 
offset plus a 4-byte size field.)
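That check can be sketched directly; the helper name is illustrative, and the 12-byte constant is the per-message log overhead mentioned above:

```python
LOG_OVERHEAD = 12  # per-message log overhead: 8-byte offset + 4-byte size field

def fits(serialized_size: int, message_max_bytes: int = 1_000_000) -> bool:
    """Does a serialized message, plus log overhead, fit under the broker limit?"""
    return serialized_size + LOG_OVERHEAD <= message_max_bytes

print(fits(999_988))  # True: 999_988 + 12 == 1_000_000, exactly at the limit
print(fits(999_989))  # False: one byte over once overhead is added
```

The point is that a message serialized to just under message.max.bytes can still be rejected once the overhead is counted.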








2016-11-21 Thread Srinivas Dhruvakumar (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685885#comment-15685885 ]

Srinivas Dhruvakumar commented on KAFKA-4430:
-

Sorry, I forgot to mention: I have set message.max.bytes on the Aggregate 
Kafka cluster to 1 MB. I was able to narrow it down to 
ProducerResponseCallback() in KafkaApis.scala, where it is logged at INFO 
level. So I was wondering whether this is an actual error and whether it is 
dropping messages.







2016-11-21 Thread huxi (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15685848#comment-15685848 ]

huxi commented on KAFKA-4430:
-

Check the broker config 'message.max.bytes' for the Aggregate Kafka cluster. 
The default value is 1000012 bytes, which is just under 1 MiB. Try increasing 
this value to see if it helps.



