Hi Swapnil,

Thanks for the quick response.
So if I am using a SimpleConsumer, I will need to create the fetch request like the following, right?

// Fetch size of 5 MB
int fetchSize = 5 * 1024 * 1024;
FetchRequest fetchRequest = new FetchRequest("test", 0, offset, fetchSize);

Thanks,
Puneet Mehta

On Monday, September 24, 2012 at 4:06 PM, Swapnil Ghike wrote:

> Hi Puneet,
>
> Yes, you will need to bump up the maxMessageSize in server.KafkaConfig and
> fetchSize in consumer.ConsumerConfig.
>
> server.KafkaConfig.maxMessageSize can be the same as
> producer.ProducerConfig.maxMessageSize.
>
> You can set consumer.ConsumerConfig.fetchSize to a value greater than or
> equal to producer.ProducerConfig.maxMessageSize.
>
> Thanks,
> Swapnil
>
> On 9/24/12 3:51 PM, "Puneet Mehta" <mehta.p...@gmail.com> wrote:
>
> > Hi all,
> >
> > We are using Kafka 0.7.1.
> >
> > I am seeing this error in the producer:
> >
> > kafka.common.MessageSizeTooLargeException
> >     at kafka.producer.SyncProducer$$anonfun$kafka$producer$SyncProducer$$verifyMessageSize$1.apply(SyncProducer.scala:141)
> >     at kafka.producer.SyncProducer$$anonfun$kafka$producer$SyncProducer$$verifyMessageSize$1.apply(SyncProducer.scala:139)
> >     at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
> >     at kafka.message.MessageSet.foreach(MessageSet.scala:87)
> >     at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$verifyMessageSize(SyncProducer.scala:139)
> >     at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
> >     at kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(ProducerPool.scala:116)
> >     at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:102)
> >     at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:102)
> >     at kafka.producer.ProducerPool.send(ProducerPool.scala:102)
> >     at kafka.producer.Producer.zkSend(Producer.scala:143)
> >     at kafka.producer.Producer.send(Producer.scala:105)
> >     at kafka.javaapi.producer.Producer.send(Producer.scala:104)
> >
> > We are currently using a max message size of 1000000 bytes.
> >
> > I am planning to bump this up to, say, 5000000 bytes.
> >
> > I am just wondering: do I need to change any other properties in the
> > producer/broker/consumer that may be impacted by this bump?
> >
> > Also, I came across this thread, which relates to the fetch size relative
> > to the max message size: https://issues.apache.org/jira/browse/KAFKA-247
> >
> > Could any of you advise me on the places that may be impacted and need to
> > change accordingly?
> >
> > Thanks,
> > Puneet Mehta
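
For anyone following the thread, below is a minimal sketch of how the producer, consumer, and broker settings might be kept aligned on 0.7.x. The property names ("max.message.size", "fetch.size", "zk.connect", "groupid", "serializer.class") and the 5 MB value are assumptions based on the 0.7 configuration classes, not something confirmed above, so verify them against your version's KafkaConfig, ProducerConfig, and ConsumerConfig.

import java.util.Properties;

// Sketch only: property names assumed from Kafka 0.7.x; verify against
// kafka.server.KafkaConfig, kafka.producer.ProducerConfig and
// kafka.consumer.ConsumerConfig for your exact version.
public class MessageSizeSettings {
    public static void main(String[] args) {
        int maxMessageSize = 5 * 1024 * 1024; // 5 MB

        // Producer side: raise the maximum message size the producer will
        // accept; these props would be passed to kafka.producer.ProducerConfig.
        Properties producerProps = new Properties();
        producerProps.put("zk.connect", "localhost:2181");
        producerProps.put("serializer.class", "kafka.serializer.DefaultEncoder");
        producerProps.put("max.message.size", Integer.toString(maxMessageSize));

        // Consumer side: the fetch size must be >= the producer's max message
        // size, or oversized messages can never be fetched (see KAFKA-247);
        // these props would be passed to kafka.consumer.ConsumerConfig.
        Properties consumerProps = new Properties();
        consumerProps.put("zk.connect", "localhost:2181");
        consumerProps.put("groupid", "test-group");
        consumerProps.put("fetch.size", Integer.toString(maxMessageSize));

        // Broker side: set the matching limit in server.properties, e.g.
        //   max.message.size=5242880
    }
}

The same 5 MB figure is used in all three places here only to keep the example simple; the broker and consumer limits just need to be at least as large as the producer's maximum message size.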