I think /brokers/ids/ is registered as an ephemeral znode, so when the broker
goes down, the znode will be removed.
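One way to observe this (assuming ZooKeeper is reachable on localhost:2181; the broker id 0 below is a placeholder) is the zookeeper-shell tool that ships with Kafka:

```shell
bin/zookeeper-shell.sh localhost:2181
# At the shell prompt:
#   ls /brokers/ids       -> lists the ids of currently live brokers
#   get /brokers/ids/0    -> shows one broker's registration data
# Entries disappear when a broker's ZooKeeper session ends,
# because these registration znodes are ephemeral.
```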
------------------ Original Message ------------------
From: "harish lohar"
Date: June 7, 2018 11:41
To: "users"
Subject: kafka Broker Znode TTL
Does the Kafka broker have co
There's no easy way to kick a running broker out of the cluster.
If you block that broker's ability to connect to ZooKeeper, then after the
configured timeouts (6 seconds by default, I think) you might effectively
get that. iptables rules on the ZK hosts, or the brokers, or
whatever hook you have f
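A sketch of the iptables approach mentioned above, run on the broker host (assuming the ZK ensemble listens on the default client port 2181; adjust for your setup):

```shell
# Drop all outbound traffic from this broker to the ZooKeeper client port.
# Once the broker's ZooKeeper session times out (zookeeper.session.timeout.ms,
# 6000 ms by default), its ephemeral znodes are removed and it effectively
# leaves the cluster.
sudo iptables -A OUTPUT -p tcp --dport 2181 -j DROP

# To restore connectivity later, delete the same rule:
sudo iptables -D OUTPUT -p tcp --dport 2181 -j DROP
```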
All,
I'd like to start the discussion for adding an overloaded method to
StreamsBuilder taking a java.util.Properties instance.
The KIP is located here :
https://cwiki.apache.org/confluence/display/KAFKA/KIP-312%3A+Add+Overloaded+StreamsBuilder+Build+Method+to+Accept+java.util.Properties
I look
Hi Jacob,
That could be one reason, but what about just a kernel failure, or any other
cause? My question was not about determining the best environment to run in,
but whether it would be possible to fail fast should this type of issue
pop up.
Regards.
On June 8, 2018 7:43:11 PM Jacob Sheck wr
What do you mean by "The issue appears when one of the brokers starts
being impacted
by environmental issues within the server it's running into (for whatever
reason)"?
You should consider Kafka to be a first-tier service; it shouldn't be
deployed on shared resources. There are a lot of opinions
Hi,
I was wondering whether there is a proper way, or a best practice, to fail a
broker fast when it's unresponsive (say, because the server it's running on has
issues). Let me describe the scenario I'm currently facing.
This is a 4-broker cluster using Kafka 1.1 with 5 ZK nodes, everything
running on co
Hi,
I very often have the need to find out the start and end offsets of
a Kafka topic.
Usually I go to the server, to the place where the topic contents (segments)
are stored, and look at the file names. I do it this way to avoid using console
consumers, since they make use of Offs
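For what it's worth, the GetOffsetShell tool that ships with Kafka can report these offsets without consuming anything (the broker address and topic name below are placeholders):

```shell
# Earliest (start) offsets, printed one line per partition
# in the form topic:partition:offset
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -2

# Latest (end) offsets
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -1
```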
Thanks Francis,
I think it's because of the ad-hoc request nature of what we want. It's not
something we always want on. I'll take a look at KSQL
Thanks,
Ben
-----Original Message-----
From: Francis Siefken [mailto:fran...@axual.io]
Sent: 08 June 2018 09:21
To: users@kafka.apache.org
Subject: R
Hi Ben, you mentioned 'We can't connect them via connect', but Connect
with its filestream export and import was my first thought. If you
want to export/import time segments you could use KSQL; would this not
fulfil your requirement?
Francis
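For reference, a minimal sketch of the Connect filestream route (standalone mode; the topic name, output path, and connector name below are placeholders, and the worker config is the sample one shipped in config/):

```shell
# Connector config for the stock FileStreamSink connector that ships with Kafka.
cat > /tmp/file-sink.properties <<'EOF'
name=local-file-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=my-topic
file=/tmp/my-topic.export.txt
EOF

# Run it with the standalone worker config from the Kafka distribution.
bin/connect-standalone.sh config/connect-standalone.properties /tmp/file-sink.properties
```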
On Thu, Jun 7, 2018 at 7:47 PM, Young, Ben
wrote:
> H