I actually meant to say that you typically don't need to bump up the
queued-chunk setting - you can profile your consumer to see whether
significant time is actually being spent waiting to dequeue from the
chunk queues.
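
If it helps, here is a minimal sketch of such a probe in Java (the
ZooKeeper address, group id, and topic name are placeholders): it just
measures how long the iterator blocks in next(), which is the dequeue
from the chunk queue. If the average wait is near zero, the queue is
keeping up and a larger queue won't help.

  import java.util.Collections;
  import java.util.List;
  import java.util.Map;
  import java.util.Properties;

  import kafka.consumer.Consumer;
  import kafka.consumer.ConsumerConfig;
  import kafka.consumer.ConsumerIterator;
  import kafka.consumer.KafkaStream;
  import kafka.javaapi.consumer.ConsumerConnector;
  import kafka.message.MessageAndMetadata;

  public class DequeueWaitProbe {
    public static void main(String[] args) {
      Properties props = new Properties();
      props.put("zookeeper.connect", "localhost:2181"); // placeholder
      props.put("group.id", "probe-group");             // placeholder

      ConsumerConnector connector =
          Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
      Map<String, List<KafkaStream<byte[], byte[]>>> streams =
          connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
      ConsumerIterator<byte[], byte[]> it =
          streams.get("my-topic").get(0).iterator();

      long waitNanos = 0;
      long count = 0;
      while (true) {
        long start = System.nanoTime();
        // next() blocks while the chunk queue is empty, i.e. while the
        // fetcher threads have not yet delivered a chunk.
        MessageAndMetadata<byte[], byte[]> msg = it.next();
        waitNanos += System.nanoTime() - start;
        handle(msg.message());
        if (++count % 100000 == 0) {
          System.out.printf("avg dequeue wait: %d ns/msg%n", waitNanos / count);
        }
      }
    }

    private static void handle(byte[] message) {
      // stand-in for your actual processing
    }
  }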

If you happen to have a consumer consuming from a remote data center,
you should consider bumping up the socket buffer size
(socket.receive.buffer.bytes):
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Howtoimprovethethroughputofaremoteconsumer?
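
For example, something along these lines (a sketch, not tested; the
2MB value is just an illustration - size it toward the bandwidth-delay
product of your link):

  import java.util.Properties;
  import kafka.consumer.ConsumerConfig;

  public class RemoteConsumerConfig {
    static ConsumerConfig build() {
      Properties props = new Properties();
      props.put("zookeeper.connect", "zk.remote-dc:2181"); // placeholder
      props.put("group.id", "remote-consumer");            // placeholder
      // Default is 64KB; for a high-latency link, size this toward the
      // bandwidth-delay product (e.g., ~2MB covers a 100ms RTT at ~160Mbps).
      props.put("socket.receive.buffer.bytes", "2097152");
      return new ConsumerConfig(props);
    }
  }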

Auto-commit on/off does not affect the throughput of a consumer.

If message-handling overhead is significant (e.g., if you need to make
a call to some remote service), you can hand the messages off to a
thread pool for processing.
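
A rough sketch of that hand-off (the pool size and the remote call are
placeholders; note that with auto-commit on, offsets can be committed
before the pool has finished processing a message):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  import kafka.consumer.ConsumerIterator;
  import kafka.consumer.KafkaStream;
  import kafka.message.MessageAndMetadata;

  public class HandoffConsumer {
    private static final ExecutorService pool = Executors.newFixedThreadPool(8);

    static void drain(KafkaStream<byte[], byte[]> stream) {
      ConsumerIterator<byte[], byte[]> it = stream.iterator();
      while (it.hasNext()) {
        final MessageAndMetadata<byte[], byte[]> msg = it.next();
        // The consumer thread only dequeues and submits; the slow remote
        // call runs on the pool so the chunk queues keep draining.
        pool.submit(new Runnable() {
          public void run() {
            callRemoteService(msg.message()); // placeholder
          }
        });
      }
    }

    private static void callRemoteService(byte[] message) {
      // stand-in for the expensive per-message work
    }
  }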

Joel


On Tue, Nov 04, 2014 at 10:55:26AM -0800, Bhavesh Mistry wrote:
> Thanks for the info.  I will have to tune the memory.  What else do
> you recommend for the high-level consumer to get optimal performance
> and drain as quickly as possible with auto-commit on?
> 
> Thanks,
> 
> Bhavesh
> 
> On Tue, Nov 4, 2014 at 9:59 AM, Joel Koshy <jjkosh...@gmail.com> wrote:
> 
> > We used to default to 10, but two should be sufficient: there is
> > little reason to buffer more than that. If you increase it to 2000
> > you will most likely run into memory issues. E.g., if your fetch
> > size is 1MB you would enqueue up to 2000 chunks of 1MB each - i.e.,
> > up to 2GB - in each queue.
> >
> > On Tue, Nov 04, 2014 at 09:05:44AM -0800, Bhavesh Mistry wrote:
> > > Hi Kafka Dev Team,
> > >
> > > It seems that the maximum buffer size is set to 2 by default.  What
> > > is the impact of changing this to 2000 or so?  Would this improve
> > > consumer thread performance?  More events would be buffered in
> > > memory.  Or is there any other recommendation for tuning high-level
> > > consumers?
> > >
> > > Here is code from Kafka Trunk Branch:
> > >
> > >   val MaxQueuedChunks = 2
> > >   /** max number of message chunks buffered for consumption,
> > >       each chunk can be up to fetch.message.max.bytes */
> > >   val queuedMaxMessages = props.getInt("queued.max.message.chunks",
> > >                                        MaxQueuedChunks)
> > >
> > >
> > >
> > > Thanks,
> > >
> > > Bhavesh
> >
