When polling from Kafka I am logging the number of records fetched.
I have 200 partitions for the topic and 12 consumers, each with 4 threads,
so around 4-5 partitions per consumer, and if a consumer is slow the
number of messages it consumes increases to around 7000 per poll.
Here is the
Going by the name of that property (max.partition.fetch.bytes), I'm
guessing it's the maximum fetch size in bytes per partition of a topic. Are you
sure the data you are receiving in that consumer doesn't belong to multiple
partitions, and hence can exceed the value that's set per
partition? By the
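To make the per-partition point concrete, here is a back-of-the-envelope sketch (not from the thread): if max.partition.fetch.bytes caps each partition's contribution, then a single poll() spanning several assigned partitions can return up to the partition count times that setting.

```java
// Rough upper bound on the data one poll() can return when
// max.partition.fetch.bytes is a per-partition cap: the ceiling
// scales with how many partitions the consumer is assigned.
public class FetchBound {
    static long maxBytesPerPoll(int assignedPartitions, long maxPartitionFetchBytes) {
        return assignedPartitions * maxPartitionFetchBytes;
    }

    public static void main(String[] args) {
        // ~5 partitions per consumer thread (200 partitions / 48 threads),
        // with the 8192-byte setting from the original question
        System.out.println(maxBytesPerPoll(5, 8192)); // 40960 bytes in one poll
    }
}
```

So even a small per-partition value can yield a noticeably larger batch once multiple partitions are assigned to the same consumer.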
Thanks a lot Jens for the reply.
One thing is still unclear: is this happening only when we set
max.partition.fetch.bytes to a higher value? Because I am setting it
quite low, at only 8192, since I can control the size of the
data coming into Kafka, so even after setting this value
Hi,
This is a known issue. The 0.10 release will fix this. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records
for some background.
Cheers,
Jens
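For readers who have upgraded: the KIP-41 change referenced above adds a max.poll.records consumer setting in 0.10, which bounds the record count per poll() directly. A minimal sketch of using it (the value 500 and the other entries are illustrative assumptions, not from the thread):

```java
import java.util.Properties;

public class MaxPollRecordsConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: replace with real brokers
        props.put("group.id", "perf-test");
        // Available from 0.10 (KIP-41): caps the number of records
        // a single poll() may return, independent of fetch byte limits.
        props.put("max.poll.records", "500");
        System.out.println(props.getProperty("max.poll.records"));
    }
}
```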
On Wed, May 4, 2016 at 19:32 Abhinav Solan wrote:
> Hi,
>
> I am using kafka-0.9.0.1 and
Hi,
I am using kafka-0.9.0.1 and have configured the Kafka consumer to fetch
at most 8192 bytes per partition by setting max.partition.fetch.bytes.
Here are the properties I am using
props.put("bootstrap.servers", servers);
props.put("group.id", "perf-test");
props.put("offset.storage", "kafka");
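Filling in the pieces a 0.9 consumer also needs around these properties, a minimal self-contained sketch might look as follows (the server address, topic name, and String deserializers are assumptions; only the group.id and max.partition.fetch.bytes values come from the question):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PerfTestConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: real broker list
        props.put("group.id", "perf-test");
        // Per-partition fetch cap from the original question
        props.put("max.partition.fetch.bytes", "8192");
        // Deserializers are required by the new consumer; String is assumed here
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("perf-topic")); // hypothetical topic
            // Log the per-poll record count, as described at the top of the thread
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("fetched " + records.count() + " records");
        }
    }
}
```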