-----Original Message-----
From: Ben Stopford [mailto:b...@confluent.io]
Sent: Friday, June 24, 2016 8:41 AM
To: users@kafka.apache.org
Subject: Re: Setting max fetch size for the console consumer
It’s actually more than one setting:
http://stackoverflow.com/questions/21020347/kafka-sending
> How do I set the maximum fetch size for the console consumer?
>
> I'm getting this error when doing some testing with large messages:
>
> kafka.common.MessageSizeTooLargeException: Found a message larger than the
> maximum fetch size of this consumer on topic replicated_twice p
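For the old (pre-0.9, ZooKeeper-based) consumer that the console consumer used at the time, one approach is a consumer properties file. This is only a sketch: the 5 MB value, topic name, and file name are placeholders, and flag names should be checked against your Kafka version's docs.

```shell
# Sketch: write a consumer config with a larger fetch size (5242880 = 5 MB,
# a placeholder) and point the console consumer at it.
cat > consumer.properties <<'EOF'
fetch.message.max.bytes=5242880
EOF
# kafka-console-consumer.sh --zookeeper localhost:2181 \
#   --topic replicated_twice --consumer.config consumer.properties
```

As the Stack Overflow link above suggests, this is usually raised together with the broker-side message.max.bytes and replica.fetch.max.bytes settings.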
How do I set the maximum fetch size for the console consumer?
I'm getting this error when doing some testing with large messages:
kafka.common.MessageSizeTooLargeException: Found a message larger than the
maximum fetch size of this consumer on topic replicated_twice partition 28 at
fetch
IIRC, just add it to your flume configs, eg. for a source:
tier1.sources.src1.kafka.fetch.message.max.bytes=
Thanks,
--tim
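Spelled out as a sketch, with the agent/source names (tier1, src1) taken from the line above and 5 MB as a placeholder value:

```shell
# Sketch: append the Kafka fetch-size override to the Flume agent config.
# Flume passes "kafka."-prefixed source properties through to its consumer.
cat >> flume.conf <<'EOF'
tier1.sources.src1.kafka.fetch.message.max.bytes=5242880
EOF
```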
On Tue, Jan 12, 2016 at 7:25 AM, manish jaiswal <manishsr...@gmail.com> wrote:
> I'm trying to read messages larger than 1 MB from Kafka using Flume,
> and I'm getting
Hello Apache Kafka community,
Couldn't broker return a special error code in FetchResponse for a given
partition(s) where it detects that there was something to return/read from
partition but actual FetchResponse contains no messages for that partition
since fetch size in FetchRequest
If you are using the simple consumer, the fetch response includes the high
watermark - so if the HW is ahead of your fetch offset but there are no
iterable messages, then you will know that your fetch size needs to increase.
Thanks,
Joel
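Joel's high-watermark check can be sketched as a toy shell test; the numbers are invented stand-ins for fields you would read from a FetchResponse:

```shell
# Toy illustration: if the partition's high watermark is ahead of our fetch
# offset but the fetch returned no iterable messages, the next message is
# larger than our fetch size.
hw=120            # high watermark from the FetchResponse (invented)
fetch_offset=100  # offset we fetched from (invented)
returned_msgs=0   # iterable messages in the response
if [ "$hw" -gt "$fetch_offset" ] && [ "$returned_msgs" -eq 0 ]; then
  echo "fetch size too small"   # prints with these values
fi
```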
On Thu, Jul 02, 2015 at 05:32:20PM +0200, Stevo Slavić wrote:
Hello Apache
range). Is it possible to set
the fetch size to a lower number than the max message size and gracefully
handle larger messages (as a trapped exception for example) in order to improve
our throughput?
Thank you in advance for your help
CJ Woolard
Hello -- I'll try to look at the code, but I'm seeing something here
and I want to be *sure* I'm correct.
Say a batch sitting in a 0.72 partition is, say, 5MB in size. An
instance of a high-level consumer has a configured fetch size of
300KB. This actually becomes the maxSize value, right?
Yes, it is the maxSize parameter in FetchRequest. And the consumer won't
stall unless a single message (compressed or not) is larger than the fetch
size, in this case 300KB. It doesn't matter how big a batch is.
Thanks,
Neha
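Neha's point can be illustrated with toy numbers (all invented): the 5 MB batch is irrelevant; only the largest single message is compared against the 300 KB maxSize:

```shell
max_size=307200      # 300 KB fetch (maxSize) from the question above
batch_size=5242880   # 5 MB batch sitting in the partition
largest_msg=250000   # largest single message in that batch (invented)
if [ "$largest_msg" -le "$max_size" ]; then
  echo "consumer makes progress"   # batch size doesn't matter
else
  echo "consumer stalls"           # only a too-big single message stalls it
fi
```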
On Fri, May 31, 2013 at 11:25 PM, Philip O'Toole phi...@loggly.com