lucasbru commented on code in PR #20325:
URL: https://github.com/apache/kafka/pull/20325#discussion_r2324653887


##########
core/src/main/scala/kafka/server/AutoTopicCreationManager.scala:
##########
@@ -50,21 +52,122 @@ trait AutoTopicCreationManager {
 
   def createStreamsInternalTopics(
     topics: Map[String, CreatableTopic],
-    requestContext: RequestContext
+    requestContext: RequestContext,
+    timeoutMs: Long
   ): Unit
 
+  def getStreamsInternalTopicCreationErrors(
+    topicNames: Set[String],
+    currentTimeMs: Long
+  ): Map[String, String]
+
+  def close(): Unit = {}
+
+}
+
+/**
+ * Thread-safe cache that stores topic creation errors with per-entry expiration.
+ * - Expiration: maintained by a min-heap (priority queue) on expiration time
+ * - Capacity: enforced by insertion-order removal (keeps the most recently inserted entries)
+ */
+private[server] class ExpiringErrorCache(maxSize: Int, time: Time) {
+
+  private case class Entry(topicName: String, errorMessage: String, expirationTimeMs: Long)
+
+  private val byTopic = new java.util.HashMap[String, Entry]()
+  private val expiryQueue = new java.util.PriorityQueue[Entry](11, new java.util.Comparator[Entry] {
+    override def compare(a: Entry, b: Entry): Int = java.lang.Long.compare(a.expirationTimeMs, b.expirationTimeMs)
+  })
+  private val lock = new ReentrantLock()

Review Comment:
   Can we make `byTopic` a ConcurrentHashMap and use this lock as a write lock only?
   That is, make the read path free of lock contention? That would mean we can only expire entries on the put path, which should be fine. However, we may then read expired entries when getting from the map, so in `get` you need to check whether the returned entry is expired before returning it.
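   For illustration, a minimal sketch of that shape, assuming hypothetical `put`/`get` method names (the cache's actual accessors aren't visible in this hunk), and evicting by expiration order rather than strict insertion order when over capacity:

   ```scala
   import java.util.concurrent.ConcurrentHashMap
   import java.util.concurrent.locks.ReentrantLock
   import java.util.{Comparator, PriorityQueue}

   import org.apache.kafka.common.utils.Time

   // Sketch only: reads go straight to a ConcurrentHashMap and filter out expired
   // entries; the lock is taken on the write path only, where expiration and
   // capacity enforcement happen.
   private[server] class ExpiringErrorCache(maxSize: Int, time: Time) {

     private case class Entry(topicName: String, errorMessage: String, expirationTimeMs: Long)

     private val byTopic = new ConcurrentHashMap[String, Entry]()
     private val expiryQueue = new PriorityQueue[Entry](11, new Comparator[Entry] {
       override def compare(a: Entry, b: Entry): Int =
         java.lang.Long.compare(a.expirationTimeMs, b.expirationTimeMs)
     })
     private val lock = new ReentrantLock()

     // Hypothetical write path: insert, then expire and enforce capacity under the lock.
     def put(topicName: String, errorMessage: String, ttlMs: Long): Unit = {
       lock.lock()
       try {
         val now = time.milliseconds()
         val entry = Entry(topicName, errorMessage, now + ttlMs)
         byTopic.put(topicName, entry)
         expiryQueue.add(entry)

         // Expire on the put path only.
         while (!expiryQueue.isEmpty && expiryQueue.peek().expirationTimeMs <= now) {
           val expired = expiryQueue.poll()
           // Remove the mapping only if it still points at this stale entry.
           byTopic.remove(expired.topicName, expired)
         }

         // Evict the entries closest to expiry while over capacity.
         while (byTopic.size() > maxSize && !expiryQueue.isEmpty) {
           val evicted = expiryQueue.poll()
           byTopic.remove(evicted.topicName, evicted)
         }
       } finally {
         lock.unlock()
       }
     }

     // Hypothetical lock-free read path: check expiration before returning, since
     // stale entries may linger in the map between puts.
     def get(topicName: String, currentTimeMs: Long): Option[String] = {
       Option(byTopic.get(topicName))
         .filter(_.expirationTimeMs > currentTimeMs)
         .map(_.errorMessage)
     }
   }
   ```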



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
