You can run top -H to see which thread PID(s) are contributing to this,
then convert those PIDs to hex and look them up in a thread dump to
identify the culprits. Might lead to some additional clues.
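For example, a rough sketch (the broker PID 12345 and thread id 12340 are
placeholders; jstack ships with the JDK):

  # per-thread CPU usage for the broker process
  top -H -p 12345

  # convert the hot thread's id to hex; thread dumps list it as nid=0x...
  printf '%x\n' 12340

  # take a thread dump and search for that nid
  jstack 12345 | grep -A 20 'nid=0x3034'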
Regards
Sab
On Sat, 28 Sep 2019, 1:38 am Antony A wrote:
> Partition Leaders are pretty evenly matched between
Hi,
I'm running Kafka Streams v2.1.0 with a windowing function and 3 threads
per node. The input traffic is about 120K messages/sec. Once deployed,
after a couple of minutes, some threads get a TimeoutException and go to
the DEAD state.
2019-09-27 13:04:34,449 ERROR [client-StreamThread-2]
Partition leaders are pretty evenly matched between the brokers, around 500
each. It is the Kafka broker (java) process running at 550% on a 6 core VM.
The other brokers are running at 250% on 4 core VMs.
On Fri, Sep 27, 2019 at 1:44 PM Harper Henn wrote:
> Is partition leadership spread evenly
Is partition leadership spread evenly among the nodes in your cluster?
Since only the leaders of a partition will service reads and writes, one
broker could be using more CPU than the others if it was the leader for
more partitions.
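If it helps, one rough way to check the spread (a sketch, assuming Kafka
1.0.x tooling, where kafka-topics.sh still takes --zookeeper, and a
ZooKeeper at localhost:2181):

  # count how many partition leaders each broker id currently holds
  kafka-topics.sh --describe --zookeeper localhost:2181 \
    | grep -o 'Leader: [0-9]*' | sort | uniq -c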
Have you tried using a utility like top or htop? What processes
Hi,
I am running Kafka 1.0.1 on a 7 broker cluster. On one of the brokers the
CPU usage is always pegged at around 98% utilization. If anyone has had
similar issues, could you please comment?
Thanks,
AA
Enabling "read_committed" only ensures that a consumer does not return
uncommitted data.
However, on failure, a consumer might still read committed messages
multiple times (if you commit offsets after processing). If you commit
offsets before you process messages, and a failure happens before
You can achieve exactly-once on a consumer by enabling read_committed and
manually committing the offset as soon as you receive a message. That way
you know that at the next poll you won't get the old messages again.
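A minimal sketch of that pattern (bootstrap servers, group id and topic
name are placeholders); note that committing before processing trades
duplicates for possible loss, as described above:

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  public class ReadCommittedConsumer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092");
          props.put("group.id", "my-group");
          props.put("enable.auto.commit", "false");       // commit manually
          props.put("isolation.level", "read_committed"); // skip uncommitted transactional data
          props.put("key.deserializer",
              "org.apache.kafka.common.serialization.StringDeserializer");
          props.put("value.deserializer",
              "org.apache.kafka.common.serialization.StringDeserializer");

          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList("my-topic"));
              while (true) {
                  ConsumerRecords<String, String> records =
                      consumer.poll(Duration.ofMillis(500));
                  // Commit as soon as the records arrive: they will not be
                  // redelivered, but a crash mid-processing loses them.
                  consumer.commitSync();
                  for (ConsumerRecord<String, String> record : records) {
                      System.out.println(record.value());
                  }
              }
          }
      }
  }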
On Fri, Sep 27, 2019, 6:24 AM christopher palm wrote:
> I had a similar question, and
I had a similar question, and just watched the video on the confluent.io
site about this.
From what I understand, idempotence and transactions are there to solve
duplicate writes and exactly-once processing, respectively.
Are you stating below that this only works if we produce
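For reference, a minimal sketch of the transactional producer API mentioned
above (the transactional.id and topic name are placeholders; setting
transactional.id also turns on idempotence):

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class TransactionalProducer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092");
          props.put("transactional.id", "my-tx-producer");
          props.put("key.serializer",
              "org.apache.kafka.common.serialization.StringSerializer");
          props.put("value.serializer",
              "org.apache.kafka.common.serialization.StringSerializer");

          try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
              producer.initTransactions();
              producer.beginTransaction();
              producer.send(new ProducerRecord<>("my-topic", "key", "value"));
              // Either every send in the transaction becomes visible to
              // read_committed consumers, or none of them do.
              producer.commitTransaction();
          }
      }
  }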
How did you arrive at the 10 GB JVM heap value? I'm running Kafka on 16 GB
RAM instances with ~4000 partitions each and only assigning 5 GB to the JVM,
of which Kafka only seems to be using ~2 GB at any given time.
Also, I've set vm.max_map_count to 262144 -- didn't use any formula to
estimate that,
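In case it is useful, a quick way to compare a broker's current mapping
count against that limit (the PID is a placeholder):

  # the per-process mmap limit
  sysctl vm.max_map_count

  # how many memory mappings the broker currently holds
  wc -l /proc/12345/maps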
Hello Kafka user group
I am running a Kafka cluster with 3 brokers and have been experiencing
frequent OutOfMemory errors, each time with a similar stack trace:
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:938)
at
I'm using Kafka in a system where data safety must be top notch. To set up
a Kafka topic I need to guarantee that no message will be lost.
What I found is:
- if I use acks=1, we can't guarantee that messages will not be lost,
- if I use acks=all, I will have good data safety but
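For what it's worth, the usual no-loss recipe looks roughly like this (a
sketch; the broker-side values are illustrative):

  Properties props = new Properties();
  props.put("acks", "all");                // wait for all in-sync replicas
  props.put("enable.idempotence", "true"); // retries cannot create duplicates
  // Pair this on the topic side with replication.factor=3 and
  // min.insync.replicas=2 so acks=all still tolerates one broker loss.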