from unclean shutdown.
We generally set it to the number of CPUs in the system, because we want a
fast recovery.
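[Editor's note: the setting Todd describes is presumably the broker's recovery-thread count, `num.recovery.threads.per.data.dir` in `server.properties` (defaults to 1). A minimal sketch of the change, the value 8 being just an example for an 8-CPU box:]

```properties
# Number of threads per data directory used for log recovery at startup
# and flushing at shutdown; Todd's advice: set it to the CPU count.
num.recovery.threads.per.data.dir=8
```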
-Todd
On Mon, Aug 10, 2015 at 8:57 AM, Alexey Sverdelov
<alexey.sverde...@googlemail.com> wrote:
Hi all,
I have a 3 node Kafka cluster. There are ten topics, every topic has 600
partitions with RF3.
So, after cluster restart I can see the following log message like INFO
Recovering unflushed segment 0 in log... and the complete recovery of 3
nodes takes about 2+ hours.
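[Editor's note: for scale, the numbers quoted above imply the following per-broker recovery load. Simple arithmetic, assuming replicas are spread evenly across the 3 brokers:]

```python
# Back-of-the-envelope arithmetic from the numbers in the message above:
# 10 topics x 600 partitions x replication factor 3, spread over 3 brokers.
topics = 10
partitions_per_topic = 600
replication_factor = 3
brokers = 3

total_replicas = topics * partitions_per_topic * replication_factor
replicas_per_broker = total_replicas // brokers  # assumes an even spread

print(total_replicas)       # 18000 partition replicas cluster-wide
print(replicas_per_broker)  # 6000 logs for each broker to recover
```

With 6000 logs per broker and a single recovery thread (the default), a multi-hour recovery is unsurprising.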
I don't know why it
Hi everyone,
we run load tests against our web application (about 50K req/sec) and every
time a kafka broker dies (also controlled shutdown), the producer tries to
connect with the dead broker for about 10-15 minutes. For this time the
application monitoring shows a constant error rate (about of
s up to 2 first. Also, what is your topic's
> configuration?
> -Erik
>
> On 8/28/15, 8:36 AM, "Alexey Sverdelov" <alexey.sverde...@googlemail.com>
> wrote:
>
> >Hi everyone,
> >
> >we run load tests against our web application (about 50K req/sec)
Hi,
I'm facing an issue with the high-level Kafka consumer (0.8.2.0) - after
consuming some amount of data one of our consumers stops. After restart it
consumes some messages and stops again with no error/exception or warning.
After some investigation I found that the "ConsumerFetcherThread" for my
kafka.message.Message.ensureValid(Message.scala:166)
~[org.apache.kafka.kafka_2.11-0.8.2.0.jar:na]
==
Any ideas how I can simply ignore such messages altogether?
Thanks.
On Thu, Oct 1, 2015 at 1:05 PM, Alexey Sverdelov <
alexey.sverde...@googlemail.com> wrote:
> Hi,
>
> I'm facing an issue with high level
Hi Marina,
this is how I "fixed" this problem:
http://stackoverflow.com/questions/32904383/apache-kafka-with-high-level-consumer-skip-corrupted-messages/32945841
This is a workaround and I hope it will be fixed in one of the next Kafka
releases.
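[Editor's note: the linked workaround boils down to a catch-and-skip loop: validate each record, and skip the ones that fail validation instead of letting the consumer thread die. A generic, self-contained Python sketch of the pattern; all names here (`decode_message`, `consume`) are illustrative stand-ins, not Kafka's API:]

```python
import zlib

def decode_message(raw: bytes, expected_crc: int) -> bytes:
    """Stand-in for message validation; raises on CRC mismatch,
    the way Kafka's Message.ensureValid does for a corrupted message."""
    if zlib.crc32(raw) != expected_crc:
        raise ValueError("CRC mismatch: corrupted message")
    return raw

def consume(records):
    """Yield only the records that pass validation; skip the rest."""
    for raw, crc in records:
        try:
            yield decode_message(raw, crc)
        except ValueError:
            # Skip the corrupted record and keep consuming instead of
            # letting the exception kill the consumer.
            continue

good = b"payload"
records = [(good, zlib.crc32(good)), (b"corrupt", 0), (good, zlib.crc32(good))]
print(list(consume(records)))  # the corrupted middle record is skipped
```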
Have a nice day,
Alexey
On Fri, Oct 2, 2015 at 2:57