hachikuji commented on code in PR #12108:
URL: https://github.com/apache/kafka/pull/12108#discussion_r867248444


##########
core/src/main/scala/kafka/server/ConfigAdminManager.scala:
##########
@@ -499,7 +499,8 @@ object ConfigAdminManager {
             
.orElse(Option(ConfigDef.convertToString(configKeys(configPropName).defaultValue, ConfigDef.Type.LIST)))
             .getOrElse("")
             .split(",").toList
-          val newValueList = oldValueList ::: alterConfigOp.configEntry.value.split(",").toList
+          val appendingValueList = alterConfigOp.configEntry.value.split(",").toList.filter(value => !oldValueList.contains(value))
+          val newValueList = oldValueList ::: appendingValueList

Review Comment:
   Do you know if there are any cases where the ordering of the list is 
significant?
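
   The ordering behavior of the append-with-dedup logic above can be checked with a quick standalone sketch (the object name and config values here are made up for illustration): existing entries keep their positions, and new entries are appended in request order.

   ```scala
   // Standalone sketch of the append-with-dedup logic from the diff, using
   // made-up values: duplicates in the appended string are dropped, surviving
   // entries are appended after the existing list in their original order.
   object AppendOrderingCheck {
     def appendDistinct(oldValueList: List[String], appended: String): List[String] = {
       val appendingValueList = appended.split(",").toList.filter(value => !oldValueList.contains(value))
       oldValueList ::: appendingValueList
     }

     def main(args: Array[String]): Unit = {
       // "b" and "a" already exist and are skipped; "c" and "d" land at the tail.
       println(appendDistinct(List("a", "b"), "b,c,a,d").mkString(","))  // a,b,c,d
     }
   }
   ```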



##########
core/src/test/scala/unit/kafka/utils/TestUtils.scala:
##########
@@ -1127,6 +1127,23 @@ object TestUtils extends Logging {
      throw new IllegalStateException(s"Cannot get topic: $topic, partition: $partition in server metadata cache"))
   }
 
+  /**
+   * Wait until the KRaft brokers' metadata has caught up to the controller
+   */
+  def waitForKRaftBrokerMetadataCatchupController(

Review Comment:
   I think we can make the name more concise. How about 
`ensureConsistentKRaftMetadata` or something like that?



##########
core/src/test/scala/unit/kafka/utils/TestUtils.scala:
##########
@@ -1127,6 +1127,23 @@ object TestUtils extends Logging {
      throw new IllegalStateException(s"Cannot get topic: $topic, partition: $partition in server metadata cache"))
   }
 
+  /**
+   * Wait until the KRaft brokers' metadata has caught up to the controller
+   */
+  def waitForKRaftBrokerMetadataCatchupController(
+      brokers: Seq[KafkaBroker],
+      controllerServer: ControllerServer,
+      msg: String = "Timeout waiting for controller metadata to propagate to brokers"
+  ): Unit = {
+    TestUtils.waitUntilTrue(
+      () => {
+        brokers.forall { broker =>
+          val metadataOffset = broker.metadataCache.asInstanceOf[KRaftMetadataCache].currentImage().highestOffsetAndEpoch().offset

Review Comment:
   Wonder if we should look at the last published offset?



##########
core/src/test/scala/unit/kafka/utils/TestUtils.scala:
##########
@@ -1127,6 +1127,23 @@ object TestUtils extends Logging {
      throw new IllegalStateException(s"Cannot get topic: $topic, partition: $partition in server metadata cache"))
   }
 
+  /**
+   * Wait until the KRaft brokers' metadata has caught up to the controller
+   */
+  def waitForKRaftBrokerMetadataCatchupController(
+      brokers: Seq[KafkaBroker],
+      controllerServer: ControllerServer,
+      msg: String = "Timeout waiting for controller metadata to propagate to brokers"
+  ): Unit = {
+    TestUtils.waitUntilTrue(
+      () => {
+        brokers.forall { broker =>
+          val metadataOffset = broker.metadataCache.asInstanceOf[KRaftMetadataCache].currentImage().highestOffsetAndEpoch().offset
+          metadataOffset >= controllerServer.raftManager.replicatedLog.endOffset().offset - 1

Review Comment:
   Would it make sense to fix the end offset at the start so that all brokers 
are catching up to the same point?
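
   The suggestion can be sketched with stand-in types: read the controller's end offset once, before polling, so every broker is measured against the same fixed target rather than a moving one. `Broker`, `allCaughtUp`, and `highestMetadataOffset` here are hypothetical names, not the actual Kafka classes in the diff.

   ```scala
   // Hedged sketch: fix the target offset up front instead of re-reading the
   // controller's end offset on every poll, so all brokers catch up to the
   // same point even while the controller keeps appending.
   object FixedOffsetWait {
     // Hypothetical stand-in for KafkaBroker + KRaftMetadataCache.
     trait Broker { def highestMetadataOffset: Long }

     def allCaughtUp(brokers: Seq[Broker], controllerEndOffset: Long): Boolean = {
       // Captured once; in the real helper this would be computed before the
       // waitUntilTrue loop and the polled predicate would close over it.
       val targetOffset = controllerEndOffset - 1
       brokers.forall(_.highestMetadataOffset >= targetOffset)
     }
   }
   ```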



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
