ocadaruma commented on code in PR #16614:
URL: https://github.com/apache/kafka/pull/16614#discussion_r1683651048


##########
storage/src/main/java/org/apache/kafka/storage/internals/epoch/LeaderEpochFileCache.java:
##########
@@ -348,7 +348,8 @@ public void truncateFromEndAsyncFlush(long endOffset) {
                 // - We still flush the change in #assign synchronously, meaning that it's guaranteed that the checkpoint file always has no missing entries.
                 //   * Even when stale epochs are restored from the checkpoint file after the unclean shutdown, it will be handled by
                 //     another truncateFromEnd call on log loading procedure, so it won't be a problem
-                scheduler.scheduleOnce("leader-epoch-cache-flush-" + topicPartition, this::writeToFileForTruncation);
+                List<EpochEntry> entries = new ArrayList<>(epochs.values());
+                scheduler.scheduleOnce("leader-epoch-cache-flush-" + topicPartition, () -> checkpoint.writeForTruncation(entries));

Review Comment:
   @junrao IMO the follow-up doesn't completely beat #14242, because the flush is still executed in the background.
   Lock contention only arises when the method (which acquires the write lock) is called again while the async flush from a previous call is still in progress.
   
   Yeah, it could be an issue in some cases (e.g. when deleteRecords is called frequently and/or the kafka-schedulers are busy).


