rondagostino commented on a change in pull request #11503:
URL: https://github.com/apache/kafka/pull/11503#discussion_r752324936



##########
File path: core/src/main/scala/kafka/server/KafkaConfig.scala
##########
@@ -2011,12 +2012,68 @@ class KafkaConfig private(doLog: Boolean, val props: java.util.Map[_, _], dynami
       "offsets.commit.required.acks must be greater or equal -1 and less or equal to offsets.topic.replication.factor")
     require(BrokerCompressionCodec.isValid(compressionType), "compression.type : " + compressionType + " is not valid." +
       " Valid options are " + BrokerCompressionCodec.brokerCompressionOptions.mkString(","))
-    require(!processRoles.contains(ControllerRole) || controllerListeners.nonEmpty,
-      s"${KafkaConfig.ControllerListenerNamesProp} cannot be empty if the server has the controller role")
-
     val advertisedListenerNames = advertisedListeners.map(_.listenerName).toSet
+    if (usesSelfManagedQuorum) {
+      require(controlPlaneListenerName.isEmpty,
+        s"${KafkaConfig.ControlPlaneListenerNameProp} is not supported in KRaft mode.")
+      val sourceOfAdvertisedListeners = if (getString(KafkaConfig.AdvertisedListenersProp) != null)
+        s"${KafkaConfig.AdvertisedListenersProp}"
+      else
+        s"${KafkaConfig.ListenersProp}"
+      if (!processRoles.contains(BrokerRole)) {
+        // advertised listeners must be empty when not also running the broker role
+        require(advertisedListeners.isEmpty,
+          sourceOfAdvertisedListeners +
+            s" must only contain KRaft controller listeners from ${KafkaConfig.ControllerListenerNamesProp} when ${KafkaConfig.ProcessRolesProp}=controller")
+      } else {
+        // when running broker role advertised listeners cannot contain controller listeners
+        require(!advertisedListenerNames.exists(aln => controllerListenerNames.contains(aln.value())),
+          sourceOfAdvertisedListeners +
+            s" must not contain KRaft controller listeners from ${KafkaConfig.ControllerListenerNamesProp} when ${KafkaConfig.ProcessRolesProp} contains the broker role")
+      }
+      if (processRoles.contains(ControllerRole)) {
+        // has controller role (and optionally broker role as well)
+        // controller.listener.names must be non-empty
+        // every one must appear in listeners
+        // the port appearing in controller.quorum.voters for this node must match the port of the first controller listener
+        // (we allow other nodes' voter ports to differ to support running multiple controllers on the same host)
+        require(controllerListeners.nonEmpty,
+          s"${KafkaConfig.ControllerListenerNamesProp} must contain at least one value appearing in the '${KafkaConfig.ListenersProp}' configuration when running the KRaft controller role")
+        val listenerNameValues = listeners.map(_.listenerName.value).toSet
+        require(controllerListenerNames.forall(cln => listenerNameValues.contains(cln)),
+          s"${KafkaConfig.ControllerListenerNamesProp} must only contain values appearing in the '${KafkaConfig.ListenersProp}' configuration when running the KRaft controller role")
+        val addressSpecForThisNode = RaftConfig.parseVoterConnections(quorumVoters).get(nodeId)
+        addressSpecForThisNode match {
+          case inetAddressSpec: RaftConfig.InetAddressSpec => {
+            val quorumVotersPort = inetAddressSpec.address.getPort
+            require(controllerListeners.head.port == quorumVotersPort,
+              s"Port in ${KafkaConfig.QuorumVotersProp} for this controller node (${KafkaConfig.NodeIdProp}=$nodeId, port=$quorumVotersPort) does not match the port for the first controller listener in ${KafkaConfig.ControllerListenerNamesProp} (${controllerListeners.head.listenerName.value()}, port=${controllerListeners.head.port})")
+          }
+          case _ =>
+        }
+      } else {
+        // only broker role
+        // controller.listener.names must be non-empty
+        // none of them can appear in listeners
+        // warn that only the first one is used if there is more than one
+        require(controllerListenerNames.exists(_.nonEmpty),
+          s"${KafkaConfig.ControllerListenerNamesProp} must contain at least one value when running KRaft with just the broker role")
+        if (controllerListenerNames.size > 1) {
+          warn(s"${KafkaConfig.ControllerListenerNamesProp} has multiple entries; only the first will be used since ${KafkaConfig.ProcessRolesProp}=broker: $controllerListenerNames")
+        }
+        require(controllerListeners.isEmpty,
+          s"${KafkaConfig.ControllerListenerNamesProp} must not contain a value appearing in the '${KafkaConfig.ListenersProp}' configuration when running KRaft with just the broker role")
+      }
+    } else {
+      // controller listener names must be empty when not in KRaft mode
+      require(!controllerListenerNames.exists(_.nonEmpty), s"${KafkaConfig.ControllerListenerNamesProp} must be empty when not running in KRaft mode: ${controllerListenerNames.asJava.toString}")

Review comment:
       Requiring controller listener names to be empty when not in KRaft mode is a new constraint that could cause existing ZK-based clusters to fail to restart.
   The problem is that `controllerListeners()`, `dataPlaneListeners()`, and `advertisedListeners()` all take the controller listeners into account when deciding what to return.
   I believe this is technically incorrect in the ZooKeeper case, because the documentation for `controller.listener.names` explicitly says "The ZK-based controller will not use this configuration."
   I wasn't sure what to do here. We could fix those three methods to only consider `controller.listener.names` when in KRaft mode (roughly like the sketch below), but it seemed more straightforward and less error-prone going forward to require that the config be empty in the ZooKeeper case -- then we can consult it wherever we want, because it has no effect unless we are running KRaft.
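   For illustration only, here is a minimal, self-contained sketch of that alternative. It is not the actual `KafkaConfig` code; the `Endpoint` case class and method parameters are simplified stand-ins for the real internals:

```scala
// Hypothetical sketch only -- simplified stand-ins for KafkaConfig internals.
object ListenerGatingSketch {
  final case class Endpoint(listenerName: String, port: Int)

  // Under ZK (usesSelfManagedQuorum == false), a configured controller.listener.names
  // value is simply ignored, matching the documented "The ZK-based controller will not
  // use this configuration" behavior.
  def controllerListeners(listeners: Seq[Endpoint],
                          controllerListenerNames: Seq[String],
                          usesSelfManagedQuorum: Boolean): Seq[Endpoint] =
    if (usesSelfManagedQuorum)
      listeners.filter(e => controllerListenerNames.contains(e.listenerName))
    else
      Seq.empty

  // Data-plane listeners are then everything that is not a controller listener.
  def dataPlaneListeners(listeners: Seq[Endpoint],
                         controllerListenerNames: Seq[String],
                         usesSelfManagedQuorum: Boolean): Seq[Endpoint] =
    listeners.diff(controllerListeners(listeners, controllerListenerNames, usesSelfManagedQuorum))
}
```

   The trade-off is that every caller then depends on the mode check being baked into those methods, whereas the require-empty approach makes an invalid ZK config fail fast at startup.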
   
   I also was not sure whether this requires a KIP (either a new one or a change to an existing one).
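
   To make the compatibility concern above concrete: something along these lines (illustrative only; it assumes `KafkaConfig.fromProps` and the usual prop constants, and the exact exception message may differ) starts fine today on a ZK-based broker but would be rejected at construction time once the new require is in place:

```scala
import java.util.Properties
import kafka.server.KafkaConfig

object ZkCompatIllustration {
  def main(args: Array[String]): Unit = {
    // A ZK-based config that (perhaps accidentally) sets controller.listener.names.
    // Today the ZK controller ignores the value; with the new require above,
    // constructing KafkaConfig fails with an IllegalArgumentException instead.
    val props = new Properties()
    props.put(KafkaConfig.ZkConnectProp, "localhost:2181")
    props.put(KafkaConfig.BrokerIdProp, "0")
    props.put(KafkaConfig.ControllerListenerNamesProp, "CONTROLLER")
    val config = KafkaConfig.fromProps(props) // expected to throw under the new constraint
    println(config.brokerId)
  }
}
```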




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

