Hello,
I have a Kafka Streams application that consumes from two topics and
internally aggregates, transforms, and joins data. Recently I was doing
a rolling restart of the production Kafka brokers, and the following errors
appeared in my Kafka Streams application:
2018-04-12 10:39:13,097
For whatever it's worth, and from memory:
In previous client versions (this may have been fixed in 0.11), we had 3
consumers in the same consumer group; when a topic partition reassignment
happened, 2 of the consumers had partitions but the other one did not get
any. So you could be in that scenario, but
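A quick way to check which member actually owns which partitions after a
rebalance is the consumer-groups tool. A minimal sketch, assuming a broker
on localhost:9092 and a made-up group id 'my-streams-app' (for a Streams
application the group id is its application.id):

    # Lists each member of the group and the partitions assigned to it.
    # (--new-consumer is needed on older client versions.)
    bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 \
        --describe --group my-streams-app

If one member shows up with no partitions while the others do, you are in
the scenario described above.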
Hi,
After configuring Kafka and ZooKeeper, the kafka.consumer metrics are not
being displayed.
I am not able to see the following metric:
kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)
I am using the following version.
*kafka:*
Can you take a look at KAFKA-6156 and see if your cluster had the same
issue?
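In the meantime, one way to check whether the MBean is registered at all is
Kafka's bundled JmxTool; a sketch, assuming the broker JVM was started with
JMX enabled (e.g. JMX_PORT=9999) and a reasonably recent JmxTool (older
versions choke on wildcard object names):

    # Polls matching MBeans and prints their attributes as CSV.
    bin/kafka-run-class.sh kafka.tools.JmxTool \
        --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
        --object-name 'kafka.server:type=FetcherLagMetrics,name=ConsumerLag,*'

If nothing matches, the metric was never registered on that JVM, which is a
different problem than a reporting/display issue.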
Thanks
Hi Emmett,
The ListOffsets API tells you about the log segments belonging to (the given
partitions of) a topic. I think it's best to explain how it behaves with an
example.
I have a topic called 'test2' with three partitions (0..2). I produced 2665
messages to its partition 0. I set up the topic so that it rol
Wow, thanks so much for the detailed explanation. I've verified that I can
replicate your results, and adapted it for inclusion in pykafka's
documentation, since this API is a constant source of confusion for users.
https://github.com/Parsely/pykafka/commit/340f8e16c8b0d9830b6fd889f3be7570015ba3eb
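For the record, here is what the same lookup looks like from the plain Java
consumer; a minimal sketch, assuming a broker on localhost:9092 and the
'test2' topic from above. Note that beginningOffsets/endOffsets issue
ListOffsets v1+ requests, which return a single offset per partition,
whereas protocol v0 returned a list of offsets, one per segment boundary,
which is where much of the confusion comes from:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ListOffsetsDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("test2", 0);
                // Both calls are answered via ListOffsets requests to the partition leader.
                Map<TopicPartition, Long> earliest =
                    consumer.beginningOffsets(Collections.singletonList(tp));
                Map<TopicPartition, Long> latest =
                    consumer.endOffsets(Collections.singletonList(tp));
                // With the 2665 messages produced above and no retention having fired,
                // this prints earliest=0 latest=2665.
                System.out.println("earliest=" + earliest.get(tp)
                    + " latest=" + latest.get(tp));
            }
        }
    }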
Hi all,
I am using Kafka 0.10.
log.retention.bytes = 5000
log.retention.check.interval.ms = 6000
log.retention.hours = 24
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.j
For log.retention.bytes:
A size-based retention policy for logs. Segments are pruned from the log
unless the remaining segments would drop below log.retention.bytes.
This config is per partition.
For log.segment.bytes:
The maximum size of a log segment file. When this size is reached a new log
segment will be created.
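To make that concrete, the per-topic equivalents are retention.bytes and
segment.bytes (note: no "log." prefix at the topic level), settable with
kafka-configs.sh. A sketch, with a made-up topic name and sizes:

    # Each partition of 'test' now rolls a new segment about every 1000 bytes
    # and prunes closed segments once the partition exceeds ~5000 bytes.
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
        --entity-type topics --entity-name test \
        --add-config segment.bytes=1000,retention.bytes=5000

Since retention removes whole segments, segment.bytes effectively sets the
granularity at which retention.bytes can be enforced.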
Hi Amit,
This is from the broker config section of the very good documentation on the
Kafka web site: https://kafka.apache.org/0100/documentation.html#brokerconfigs
log.segment.bytes: The maximum size of a single log file (default 1GB)
log.retention.bytes: The maximum size of the log before deleting it
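One interaction worth spelling out, since it trips people up with settings
like the ones quoted above (a worked example, not from the original thread):
size-based retention only removes whole closed segments, and the active
segment is not pruned until it rolls. So with

    log.retention.bytes = 5000
    log.segment.bytes   = 1073741824   (the 1GB default)
    log.roll.hours      = 168

a partition's single active segment can grow toward 1GB (or for up to a
week, per log.roll.hours) before it is closed, and only then can the
5000-byte retention limit delete it. To see retention act at anywhere near
5000 bytes, log.segment.bytes (or the roll interval) has to be reduced
accordingly.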