Jakub Scholz created KAFKA-16075:
------------------------------------

             Summary: TLS configuration not validated in KRaft controller-only nodes
                 Key: KAFKA-16075
                 URL: https://issues.apache.org/jira/browse/KAFKA-16075
             Project: Kafka
          Issue Type: Bug
          Components: kraft
    Affects Versions: 3.6.1
            Reporter: Jakub Scholz


When a Kafka broker node (either a broker in a ZooKeeper-based cluster or a node with the broker role in a KRaft cluster) has an incorrect TLS configuration, such as an unsupported TLS cipher suite, it throws a {{ConfigException}} and shuts down:
{code:java}
2024-01-02 13:50:24,895 ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$) [main]
org.apache.kafka.common.config.ConfigException: Invalid value java.lang.IllegalArgumentException: Unsupported CipherSuite: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
        at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:102)
        at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:73)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
        at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
        at kafka.network.Processor.<init>(SocketServer.scala:973)
        at kafka.network.Acceptor.newProcessor(SocketServer.scala:879)
        at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:849)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
        at kafka.network.Acceptor.addProcessors(SocketServer.scala:848)
        at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:523)
        at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:251)
        at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:175)
        at kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:175)
        at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
        at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
        at kafka.network.SocketServer.<init>(SocketServer.scala:175)
        at kafka.server.BrokerServer.startup(BrokerServer.scala:242)
        at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:96)
        at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:96)
        at scala.Option.foreach(Option.scala:437)
        at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:96)
        at kafka.Kafka$.main(Kafka.scala:113)
        at kafka.Kafka.main(Kafka.scala) {code}
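For context, the startup failure ultimately originates in the JDK: {{SSLEngine.setEnabledCipherSuites}} rejects cipher suite names the runtime does not recognize, and the broker surfaces that as a fatal {{ConfigException}} from {{SslFactory.configure}}. A minimal, self-contained sketch (plain JSSE, not Kafka code) that reproduces the underlying {{IllegalArgumentException}}:
{code:java}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class CipherSuiteRejection {

    public static void main(String[] args) throws Exception {
        // Build an SSLEngine from the default context, similar to what Kafka's
        // DefaultSslEngineFactory does from the configured key/trust stores.
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        try {
            // The JDK rejects cipher suite names it does not know about; this is
            // the IllegalArgumentException the broker wraps into the fatal
            // ConfigException shown above.
            engine.setEnabledCipherSuites(
                    new String[]{"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"});
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
{code}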
But on KRaft controller-only nodes, this validation does not seem to happen and the node keeps running, repeatedly logging this warning:
{code:java}
2024-01-02 13:53:10,186 WARN [RaftManager id=1] Error connecting to node my-cluster-controllers-0.my-cluster-kafka-brokers.myproject.svc.cluster.local:9090 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient) [kafka-1-raft-outbound-request-thread]
java.io.IOException: Channel could not be created for socket java.nio.channels.SocketChannel[closed]
        at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:348)
        at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
        at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
        at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:1032)
        at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
        at org.apache.kafka.server.util.InterBrokerSendThread.sendRequests(InterBrokerSendThread.java:145)
        at org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:108)
        at org.apache.kafka.server.util.InterBrokerSendThread.doWork(InterBrokerSendThread.java:136)
        at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: Unsupported CipherSuite: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        at org.apache.kafka.common.network.SslChannelBuilder.buildChannel(SslChannelBuilder.java:111)
        at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:338)
        ... 8 more
Caused by: java.lang.IllegalArgumentException: Unsupported CipherSuite: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        at java.base/sun.security.ssl.CipherSuite.validValuesOf(CipherSuite.java:978)
        at java.base/sun.security.ssl.SSLEngineImpl.setEnabledCipherSuites(SSLEngineImpl.java:864)
        at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createSslEngine(DefaultSslEngineFactory.java:188)
        at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createClientSslEngine(DefaultSslEngineFactory.java:93)
        at org.apache.kafka.common.security.ssl.SslFactory.createSslEngine(SslFactory.java:203)
        at org.apache.kafka.common.security.ssl.SslFactory.createSslEngine(SslFactory.java:189)
        at org.apache.kafka.common.network.SslChannelBuilder.buildTransportLayer(SslChannelBuilder.java:122)
        at org.apache.kafka.common.network.SslChannelBuilder.buildChannel(SslChannelBuilder.java:105)
        ... 9 more {code}
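Judging from the two stack traces, the difference appears to be that the broker builds its server-side channel builders eagerly (the {{SocketServer}} startup path calls {{SslFactory.configure}}), while the controller's Raft client only creates SSL engines lazily per outbound connection, so the same {{IllegalArgumentException}} is wrapped into a per-connection WARN instead of failing startup. Below is a hedged sketch of the kind of eager pre-flight check a controller-only node could run; the class and method names are hypothetical and not existing Kafka code, only {{ConfigException}} is the real Kafka class:
{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.net.ssl.SSLContext;
import org.apache.kafka.common.config.ConfigException;

public class ControllerSslPreflight {

    // Hypothetical helper: fail fast when a configured cipher suite is unknown
    // to the JVM, mirroring the ConfigException the broker raises at startup.
    static void validateCipherSuites(List<String> configured) throws Exception {
        Set<String> supported = new HashSet<>(Arrays.asList(
                SSLContext.getDefault().getSupportedSSLParameters().getCipherSuites()));
        for (String suite : configured) {
            if (!supported.contains(suite)) {
                throw new ConfigException("ssl.cipher.suites", suite, "Unsupported CipherSuite");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // The suite from this report is not a valid JSSE name, so this throws.
        validateCipherSuites(List.of("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"));
    }
}
{code}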
Is there a reason why this behavior differs and KRaft controller-only nodes do not perform the same validation?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
