Re: How to reduce kafka's rebalance time ?

2018-08-16 Thread 堅強de泡沫
I modified some consumer configuration items, mainly the fetch frequency,
heartbeat, and session timeout parameters. Since then, the long-rebalance
problem has not reappeared in the test environment for quite a while. The
relevant configuration is as follows:

fetch-max-wait: 1s
heartbeat-interval: 1s
session.timeout.ms: 1
metadata.max.age.ms: 6000
max.poll.records: 100
max.poll.interval.ms: 500
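
For readers not using Spring-style property names, a rough sketch of the
equivalent raw Kafka consumer settings follows (the values below are
illustrative placeholders, not a recommendation):

# illustrative values only
fetch.max.wait.ms=1000
# heartbeat.interval.ms is normally well below session.timeout.ms
# (roughly one third is a common rule of thumb)
heartbeat.interval.ms=1000
session.timeout.ms=10000
metadata.max.age.ms=60000
max.poll.records=100
# max.poll.interval.ms must be long enough to process one full batch of max.poll.records
max.poll.interval.ms=300000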




------------------ Original message ------------------
From: "Shantanu Deshmukh";
Date: Thursday, August 16, 2018, 2:25
To: "users";
Subject: Re: How to reduce kafka's rebalance time ?



I am also facing the same issue. Whenever I am restarting my consumers it
is taking upto 10 minutes to start consumption. Also some of the consumers
randomly rebalance and it again takes same amount of time to complete
rebalance.
I haven't been able to figure out any solution for this issue, nor have I
received any help from here.

On Thu, Aug 16, 2018 at 9:56 AM 堅強de泡沫  wrote:

> hello:
> How to reduce kafka's rebalance time ?
> It takes a lot of time to rebalance each time. Why?

Re: Please help: Zookeeper not coming up after power down

2018-08-16 Thread Dan Simoes
Ensure the ids/nodes are correct in zoo.cfg and that ZooKeeper is running on each
node. Also, have there been any changes to which ports are open? If it's AWS,
check the security groups. Node 1 cannot talk to the other two nodes.
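
For reference, a minimal sketch of a 3-node ensemble configuration (hostnames,
paths and ports are placeholders):

# zoo.cfg - identical on every node
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888

# On node 1, /var/lib/zookeeper/myid must contain only:
# 1

Each node's myid must match its server.N line, and ports 2888 (quorum) and 3888
(leader election) must be reachable between all three hosts.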

> On Aug 16, 2018, at 6:02 PM, Raghav  wrote:
> 
> Hi
> 
> Our 3-node ZooKeeper ensemble got powered down, and upon powering it up
> ZooKeeper could not get quorum and kept throwing the errors below. As a result
> our Kafka cluster was unusable. What is the best way to revive a ZK cluster in
> such situations? Please suggest.
> 
> 
> 2018-08-17_00:59:18.87009 2018-08-17 00:59:18,869 [myid:1] - WARN
> [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot
> open channel to 2 at election address /1.1.1.143:3888
> 2018-08-17_00:59:18.87011 java.net.ConnectException: Connection refused
> 2018-08-17_00:59:18.87011   at
> java.net.PlainSocketImpl.socketConnect(Native Method)
> 2018-08-17_00:59:18.87011   at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
> 2018-08-17_00:59:18.87012   at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> 2018-08-17_00:59:18.87012   at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> 2018-08-17_00:59:18.87013   at
> java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 2018-08-17_00:59:18.87013   at java.net.Socket.connect(Socket.java:589)
> 2018-08-17_00:59:18.87013   at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
> 2018-08-17_00:59:18.87014   at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
> 2018-08-17_00:59:18.87014   at
> org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
> 2018-08-17_00:59:18.87014   at
> org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)
> 2018-08-17_00:59:18.87034 2018-08-17 00:59:18,870 [myid:1] - INFO
> [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@184] -
> Resolved hostname: 1.1.1.143 to address: /1.1.1.143
> 2018-08-17_00:59:18.87095 2018-08-17 00:59:18,870 [myid:1] - WARN
> [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot
> open channel to 3 at election address /1.1.1.144:3888
> 2018-08-17_00:59:18.87097 java.net.ConnectException: Connection refused
> 2018-08-17_00:59:18.87097   at
> java.net.PlainSocketImpl.socketConnect(Native Method)
> 2018-08-17_00:59:18.87097   at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
> 2018-08-17_00:59:18.87098   at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> 2018-08-17_00:59:18.87098   at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> 2018-08-17_00:59:18.87098   at
> java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 2018-08-17_00:59:18.87098   at java.net.Socket.connect(Socket.java:589)
> 2018-08-17_00:59:18.87099   at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
> 2018-08-17_00:59:18.87099   at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
> 2018-08-17_00:59:18.87099   at
> org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
> 2018-08-17_00:59:18.87099   at
> org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)
> 
> Thanks.
> 
> R


Please help: Zookeeper not coming up after power down

2018-08-16 Thread Raghav
Hi

Our 3-node ZooKeeper ensemble got powered down, and upon powering it up
ZooKeeper could not get quorum and kept throwing the errors below. As a result
our Kafka cluster was unusable. What is the best way to revive a ZK cluster in
such situations? Please suggest.


2018-08-17_00:59:18.87009 2018-08-17 00:59:18,869 [myid:1] - WARN
 [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot
open channel to 2 at election address /1.1.1.143:3888
2018-08-17_00:59:18.87011 java.net.ConnectException: Connection refused
2018-08-17_00:59:18.87011   at
java.net.PlainSocketImpl.socketConnect(Native Method)
2018-08-17_00:59:18.87011   at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
2018-08-17_00:59:18.87012   at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
2018-08-17_00:59:18.87012   at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
2018-08-17_00:59:18.87013   at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
2018-08-17_00:59:18.87013   at java.net.Socket.connect(Socket.java:589)
2018-08-17_00:59:18.87013   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
2018-08-17_00:59:18.87014   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
2018-08-17_00:59:18.87014   at
org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
2018-08-17_00:59:18.87014   at
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)
2018-08-17_00:59:18.87034 2018-08-17 00:59:18,870 [myid:1] - INFO
 [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@184] -
Resolved hostname: 1.1.1.143 to address: /1.1.1.143
2018-08-17_00:59:18.87095 2018-08-17 00:59:18,870 [myid:1] - WARN
 [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot
open channel to 3 at election address /1.1.1.144:3888
2018-08-17_00:59:18.87097 java.net.ConnectException: Connection refused
2018-08-17_00:59:18.87097   at
java.net.PlainSocketImpl.socketConnect(Native Method)
2018-08-17_00:59:18.87097   at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
2018-08-17_00:59:18.87098   at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
2018-08-17_00:59:18.87098   at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
2018-08-17_00:59:18.87098   at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
2018-08-17_00:59:18.87098   at java.net.Socket.connect(Socket.java:589)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)

Thanks.

R


Re: How to reduce kafka's rebalance time ?

2018-08-16 Thread Steve Tian
My $0.02:
1. Read the documentation.
2. Help people understand your problem better: try to describe your
problem in a gist and see if you can provide details such as which
version of Kafka you're using on the client/server side, and the
consumer/producer/server code/configuration that can reproduce your
problem.
3. Help yourself understand your problem better: is the problem still
reproducible with a different version/cluster/configuration/processing
logic? For example, you found there is a 10-minute delay; which
configuration relates to 10 minutes? Does the delay change when you change
your configuration? How long does it take to process the records returned
from a single poll?
4. Learn from your logging/metrics: if you can reproduce the problem, see
whether there is anything unusual in the logs/metrics. Try to enable/add
logging in your consumer, consumer rebalance listener (see the sketch
below), and interceptor. Try to close your consumer gracefully/abruptly,
try to kill your Java consumer process, or even try to pause your Java
consumer process to simulate a GC pause. You should find something in the
logs and JMX metrics; try to reason about it. You will need this for
production troubleshooting/monitoring.
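
As a sketch of the rebalance-listener logging suggested in point 4 (the class
name and log output are made up for illustration):

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Logs every rebalance with a timestamp so long revocation/assignment gaps show up.
public class LoggingRebalanceListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        System.out.println(System.currentTimeMillis() + " partitions revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println(System.currentTimeMillis() + " partitions assigned: " + partitions);
    }
}

// Registered when subscribing, e.g.:
// consumer.subscribe(Collections.singletonList("my-topic"), new LoggingRebalanceListener());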

Cheers,
Steve


On Thu, Aug 16, 2018, 7:48 PM Shantanu Deshmukh 
wrote:

> Hi Manna,
>
> I meant no offense. Simply meant to say that haven't found solution to my
> problem from here.
> Apologies, if my sentence was off the line.
>
> On Thu, Aug 16, 2018 at 4:05 PM M. Manna  wrote:
>
> > You have been recommended to upgrade to a newer version of Kafka, or tune
> > timeout params. Adhering to a older version is more of the users’
> decision.
> > Perhaps, we should simply put older versions as “End of Life”.
> >
> > As part of open source initiative, you are always welcome to debug and
> > demonstrate how your use case is different, and raise a KIP.
> >
> > Not sure what you mean by “*no have I received any help from here.” *
> >
> > We are always actively trying to contribute as much as we can, and
> > sometimes the answers may not be according to your expectations or
> > timeline. Hence, the open source initiative.
> >
> > Hope this makes sense.
> >
> > Regards,
> >
> > Regards,
> > On Thu, 16 Aug 2018 at 06:55, Shantanu Deshmukh 
> > wrote:
> >
> > > I am also facing the same issue. Whenever I am restarting my consumers
> it
> > > is taking upto 10 minutes to start consumption. Also some of the
> > consumers
> > > randomly rebalance and it again takes same amount of time to complete
> > > rebalance.
> > > I haven't been able to figure out any solution for this issue, nor
> have I
> > > received any help from here.
> > >
> > > On Thu, Aug 16, 2018 at 9:56 AM 堅強de泡沫  wrote:
> > >
> > > > hello:
> > > > How to reduce kafka's rebalance time ?
> > > > It takes a lot of time to rebalance each time. Why?
> > >
> >
>


Interest in automatic topology/rack awareness?

2018-08-16 Thread Michael Gasch
Hi,

Today, rack awareness for brokers (broker.rack=my-rack-id) is configured either
manually or via a script during broker deployment.

Other systems, like Hadoop or Kubernetes, support auto-detection of the
topology, typically exposed through well-known labels such as rack, zone, or
region.

Many Kafka deployments I support on the VMware vSphere hypervisor require rack
awareness to increase availability for the brokers in case of host (hypervisor)
failure. For this to work, several actions (and a lot of communication) across
different teams have to happen, which is error-prone.

Would there be interest in functionality to auto-detect the underlying
topology, e.g. via a pluggable mechanism (interface) or a dedicated
implementation for vSphere? A Kafka broker could make a call like "GetZones()"
(or "GetRacks()") and the cloud-provider-specific implementation would respond
with the rack/zone the broker is currently running in.
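
Purely as an illustration of what such a pluggable mechanism could look like
(nothing like this exists in Kafka today; all names are made up):

// Hypothetical SPI - names are illustrative only.
public interface TopologyProvider {
    // Returns the rack/zone identifier for the broker this process runs on,
    // or null if the underlying platform cannot determine it.
    String getRack();
}

// A broker could then resolve broker.rack at startup when it is not set explicitly:
// String rack = topologyProvider.getRack();
// if (rack != null) { config.setProperty("broker.rack", rack); }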

Thx for any feedback,
Michael


Re: Very long consumer rebalances

2018-08-16 Thread Shantanu Deshmukh
I saw a few topics with the segment.ms and retention.ms properties set. Can that
be causing any issue? I remember this is the only change I made to the cluster
in the last couple of months, after which the problem started.
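
To rule that out, the per-topic overrides can be inspected and, if needed,
removed with kafka-configs.sh; a sketch, assuming ZooKeeper on localhost:2181
and a topic named bulk-email:

bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name bulk-email --describe

bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name bulk-email --alter --delete-config segment.ms,retention.ms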

On Fri, Aug 10, 2018 at 2:55 PM M. Manna  wrote:

> if you can upgrade, I would say upgrading to 0.10.2.x would be better for
> you (or even higher, 2.0.0). Otherwise you have to play around with
> max.poll.records and session.timeout.ms.
>
> As the doc says (or newer versions), the adjustment should be such that
> request.timeout.ms >= max.poll.interval.ms. Also, heartbeat.interval.ms
> should be curbed at (rule of thumb) 30% of session.timeout.ms.
>
> Lastly, all these have to be within the bounds of
> group.min.session.timeout.ms and group.max.session.timeout.ms.
>
> You can check all these, tune them as necessary, and retry. Some of these
> configs may or may not be applicable at runtime. so a rolling restart may
> be required before all changes take place.
>
> On 9 August 2018 at 13:48, Shantanu Deshmukh 
> wrote:
>
> > Hi,
> >
> > Yes, my consumer application works as follows:
> >
> >    1. It reads from a properties file how many workers are required to
> >    process each topic.
> >    2. As many threads are spawned as there are workers mentioned in the
> >    properties file, and the topic name is passed to each thread. A
> >    FixedThreadPool implementation is used.
> >    3. Each worker thread initializes one consumer object and subscribes to
> >    the given topic. The consumer group is simply <topic>-consumer. So if my
> >    topic is bulk-email, then the consumer group for all those threads is
> >    bulk-email-consumer.
> >    4. Once this is done, consumer.poll(100) keeps running inside an
> >    infinite while loop. So this application is a daemon; it shuts down only
> >    when the server shuts down or on a kill command.
> >
> > I have configured session.timeout.ms in the consumer properties. I haven't
> > done anything about a ZooKeeper timeout. Is that still required, given that
> > the consumer accesses only the brokers?
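
A bare-bones sketch of the worker pattern described above (class and topic
names are placeholders; baseProps is assumed to already carry
bootstrap.servers and the deserializers):

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TopicWorkerPool {
    // Spawns 'workers' threads, each with its own consumer in group "<topic>-consumer".
    public static void start(String topic, int workers, Properties baseProps) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                Properties props = (Properties) baseProps.clone();
                props.put("group.id", topic + "-consumer"); // e.g. bulk-email-consumer
                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(Collections.singletonList(topic));
                    while (true) {
                        ConsumerRecords<String, String> records = consumer.poll(100);
                        records.forEach(r -> { /* process the record */ });
                    }
                }
            });
        }
    }
}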
> >
> > On Thu, Aug 9, 2018 at 3:03 PM M. Manna  wrote:
> >
> > > In the simplest way, how have you implemented your consumer?
> > >
> > > 1) Does your consumers join a designated group, process messages, and
> > then
> > > closes all connection? Or does it stay open perpetually until server
> > > shutdown?
> > > 2) Have you configured the session timeouts for client and zookeeper
> > > accordingly?
> > >
> > > Regards,
> > >
> > > On 9 August 2018 at 08:00, Shantanu Deshmukh 
> > > wrote:
> > >
> > > > I am facing too many problems these days. Now one of our consumer
> > > > groups is rebalancing every now and then, and a rebalance takes very
> > > > long, more than 5-10 minutes. Even after re-balancing I see that only
> > > > half of the consumers are active/receive an assignment. It's all going
> > > > haywire.
> > > >
> > > > I am seeing these logs in the Kafka consumer logs. Can anyone help me
> > > > understand what is going on here? It is a very long piece of log, but
> > > > someone please help me. I have been desperately looking for a solution
> > > > for more than 2 months now, but to no avail.
> > > >
> > > > [2018-08-09 11:39:51] :: DEBUG ::
> > > > AbstractCoordinator$HeartbeatResponseHandler:694 - Received
> successful
> > > > heartbeat response for group bulk-email-consumer
> > > > [2018-08-09 11:39:53] :: DEBUG ::
> > > > ConsumerCoordinator$OffsetCommitResponseHandler:640 - Group
> > > > bulk-email-consumer committed offset 25465113 for partition
> > bulk-email-8
> > > > [2018-08-09 11:39:53] :: DEBUG :: ConsumerCoordinator$4:539 -
> Completed
> > > > autocommit of offsets
> {bulk-email-8=OffsetAndMetadata{offset=25465113,
> > > > metadata=''}} for group bulk-email-consumer
> > > > [2018-08-09 11:39:53] :: DEBUG ::
> > > > ConsumerCoordinator$OffsetCommitResponseHandler:640 - Group
> > > > bulk-email-consumer committed offset 25463566 for partition
> > bulk-email-6
> > > > [2018-08-09 11:39:53] :: DEBUG :: ConsumerCoordinator$4:539 -
> Completed
> > > > autocommit of offsets
> {bulk-email-6=OffsetAndMetadata{offset=25463566,
> > > > metadata=''}} for group bulk-email-consumer
> > > > [2018-08-09 11:39:53] :: DEBUG ::
> > > > ConsumerCoordinator$OffsetCommitResponseHandler:640 - Group
> > > > bulk-email-consumer committed offset 2588 for partition
> > bulk-email-9
> > > > [2018-08-09 11:39:53] :: DEBUG :: ConsumerCoordinator$4:539 -
> Completed
> > > > autocommit of offsets
> {bulk-email-9=OffsetAndMetadata{offset=2588,
> > > > metadata=''}} for group bulk-email-consumer
> > > > [2018-08-09 11:39:54] :: DEBUG ::
> > > > AbstractCoordinator$HeartbeatResponseHandler:694 - Received
> successful
> > > > heartbeat response for group bulk-email-consumer
> > > > [2018-08-09 11:39:54] :: DEBUG ::
> > > > AbstractCoordinator$HeartbeatResponseHandler:694 - Received
> successful
> > > > heartbeat response for group bulk-email-consumer
> > > > [2018-08-09 11:39:54] :: DEBUG ::
> > > 

Re: How to reduce kafka's rebalance time ?

2018-08-16 Thread Shantanu Deshmukh
Hi Manna,

I meant no offense. I simply meant to say that I haven't found a solution to my
problem here.
Apologies if my sentence was out of line.

On Thu, Aug 16, 2018 at 4:05 PM M. Manna  wrote:

> You have been recommended to upgrade to a newer version of Kafka, or tune
> timeout params. Adhering to a older version is more of the users’ decision.
> Perhaps, we should simply put older versions as “End of Life”.
>
> As part of open source initiative, you are always welcome to debug and
> demonstrate how your use case is different, and raise a KIP.
>
> Not sure what you mean by “*no have I received any help from here.” *
>
> We are always actively trying to contribute as much as we can, and
> sometimes the answers may not be according to your expectations or
> timeline. Hence, the open source initiative.
>
> Hope this makes sense.
>
> Regards,
>
> Regards,
> On Thu, 16 Aug 2018 at 06:55, Shantanu Deshmukh 
> wrote:
>
> > I am also facing the same issue. Whenever I am restarting my consumers it
> > is taking upto 10 minutes to start consumption. Also some of the
> consumers
> > randomly rebalance and it again takes same amount of time to complete
> > rebalance.
> > I haven't been able to figure out any solution for this issue, nor have I
> > received any help from here.
> >
> > On Thu, Aug 16, 2018 at 9:56 AM 堅強de泡沫  wrote:
> >
> > > hello:
> > > How to reduce kafka's rebalance time ?
> > > It takes a lot of time to rebalance each time. Why?
> >
>


Re: How to reduce kafka's rebalance time ?

2018-08-16 Thread M. Manna
You have been recommended to upgrade to a newer version of Kafka, or to tune
the timeout params. Adhering to an older version is more the users' decision.
Perhaps we should simply mark older versions as “End of Life”.

As part of the open source initiative, you are always welcome to debug and
demonstrate how your use case is different, and raise a KIP.

Not sure what you mean by “nor have I received any help from here.”

We are always actively trying to contribute as much as we can, and
sometimes the answers may not match your expectations or timeline. Hence, the
open source initiative.

Hope this makes sense.

Regards,
On Thu, 16 Aug 2018 at 06:55, Shantanu Deshmukh 
wrote:

> I am also facing the same issue. Whenever I am restarting my consumers it
> is taking upto 10 minutes to start consumption. Also some of the consumers
> randomly rebalance and it again takes same amount of time to complete
> rebalance.
> I haven't been able to figure out any solution for this issue, nor have I
> received any help from here.
>
> On Thu, Aug 16, 2018 at 9:56 AM 堅強de泡沫  wrote:
>
> > hello:
> > How to reduce kafka's rebalance time ?
> > It takes a lot of time to rebalance each time. Why?
>


Re: unable to reconfigure ssl truststore dynamically on broker

2018-08-16 Thread Manikumar
Yes, we have an open JIRA for this:
https://issues.apache.org/jira/browse/KAFKA-4493
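
For anyone hitting the OutOfMemoryError shown below: it usually means the tool
connected to an SSL listener without security.protocol set, so the TLS
handshake bytes get misread as a huge message size. A minimal
adminclient.properties for --command-config against an SSL listener might look
like this (paths and password are placeholders):

security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
# If the broker requires client authentication, also:
# ssl.keystore.location=/path/to/client.keystore.jks
# ssl.keystore.password=changeit
# ssl.key.password=changeit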

On Wed, Aug 15, 2018 at 10:12 PM John Calcote 
wrote:

> Thanks Manikumar - that's very helpful. I never thought to treat the
> AdminClient like the broker or clients and look for a configuration options
> set on that page.
>
> I should point out to those monitoring that have some control over the code
> (perhaps yourself even) - it seems wrong to have a program that crashes
> when running on the default configuration. Just a thought.
>
> Kind regards,
> John
>
> On Tue, Aug 14, 2018 at 11:18 AM Manikumar 
> wrote:
>
> > Hi,
> >
> > There is no specific doc. The "--command-config" option takes admin client
> > configs.
> > AdminClient configs are listed here:
> > http://kafka.apache.org/documentation/#adminclientconfigs
> >
> >
> > On Tue, Aug 14, 2018 at 10:35 PM John Calcote 
> > wrote:
> >
> > > Manikumar,
> > >
> > > Thank you. You are right - the security.protocol is NOT being passed to
> > > adminclient.properties file. Where exactly is the doc for that file? I
> > > searched for hours and finally had to guess at which options should be
> in
> > > there.
> > >
> > > J
> > >
> > > On Tue, Aug 14, 2018, 8:54 AM Manikumar 
> > wrote:
> > >
> > > > looks like port is wrong or security.protocol config is not passed to
> > > > adminclient.properties file
> > > >
> > > > On Tue, Aug 14, 2018 at 7:23 PM John Calcote  >
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'm using the latest kafka broker - 2.0.0 with scala 2.12. I have a
> > > > > complete SSL configuration working, but I add clients occasionally
> > and
> > > > want
> > > > > to be able to tell the broker to reload it's ssl truststore (as new
> > > certs
> > > > > have been added to it).
> > > > >
> > > > > I've followed the doc here:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration
> > > > >
> > > > > and this is what I get:
> > > > >
> > > > > $ kafka_2.12-2.0.0/bin/kafka-configs.sh --bootstrap-server
> > > > > data-cluster:9092 --command-config
> > > > > ./kafka_2.12-2.0.0/adminclient.properties --alter --add-config
> > > > > ssl.truststore.password=password --entity-name 0 --entity-type
> > brokers
> > > > > [2018-08-14 07:48:57,223] ERROR Uncaught exception in thread
> > > > > 'kafka-admin-client-thread | adminclient-1':
> > > > > (org.apache.kafka.common.utils.KafkaThread)
> > > > > java.lang.OutOfMemoryError: Java heap space
> > > > > at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> > > > > at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
> > > > > at
> > > > >
> > >
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
> > > > > at
> > > > >
> > org.apache.kafka.common.network.Selector.attemptRead(Selector.java:560)
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:496)
> > > > > at
> > > > org.apache.kafka.common.network.Selector.poll(Selector.java:425)
> > > > > at
> > > > > org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1116)
> > > > > at java.lang.Thread.run(Thread.java:748)
> > > > > Error while executing config command with args '--bootstrap-server
> > > > > data-cluster:9092 --command-config
> > > > > ./kafka_2.12-2.0.0/adminclient.properties --alter --add-config
> > > > > ssl.truststore.password=password --entity-name 0 --entity-type
> > brokers'
> > > > > java.util.concurrent.TimeoutException
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
> > > > > at
> > > > kafka.admin.ConfigCommand$.brokerConfig(ConfigCommand.scala:346)
> > > > > at
> > > > >
> kafka.admin.ConfigCommand$.alterBrokerConfig(ConfigCommand.scala:304)
> > > > > at
> > > > >
> > kafka.admin.ConfigCommand$.processBrokerConfig(ConfigCommand.scala:290)
> > > > > at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:83)
> > > > > at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> > > > > $
> > > > >
> > > > > There's nothing on the net about this, so I can only assume the
> issue
>