chia7712 commented on code in PR #16614:
URL: https://github.com/apache/kafka/pull/16614#discussion_r1683662367


##########
storage/src/main/java/org/apache/kafka/storage/internals/epoch/LeaderEpochFileCache.java:
##########
@@ -348,7 +348,8 @@ public void truncateFromEndAsyncFlush(long endOffset) {
                 // - We still flush the change in #assign synchronously, meaning that it's guaranteed that the checkpoint file always has no missing entries.
                 //   * Even when stale epochs are restored from the checkpoint file after the unclean shutdown, it will be handled by
                 //     another truncateFromEnd call on log loading procedure, so it won't be a problem
-                scheduler.scheduleOnce("leader-epoch-cache-flush-" + topicPartition, this::writeToFileForTruncation);
+                List<EpochEntry> entries = new ArrayList<>(epochs.values());
+                scheduler.scheduleOnce("leader-epoch-cache-flush-" + topicPartition, () -> checkpoint.writeForTruncation(entries));
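
For readers skimming the diff: the key point of the change is a copy-then-flush pattern, where the epoch entries are snapshotted into a fresh list before the flush is scheduled, so the async task never touches the live `epochs` map. Below is a minimal, self-contained sketch of that pattern; the JDK `ScheduledExecutorService` stands in for Kafka's scheduler, the `Consumer` stands in for `checkpoint.writeForTruncation`, and `EpochCacheSketch` with plain `Long` entries is a simplified placeholder.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.function.Consumer;

// Sketch of the snapshot-then-async-flush pattern used in the diff above.
// ScheduledExecutorService stands in for Kafka's Scheduler; the Consumer
// stands in for checkpoint.writeForTruncation(entries).
final class EpochCacheSketch {
    private final TreeMap<Integer, Long> epochs = new TreeMap<>(); // epoch -> start offset
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Consumer<List<Long>> checkpointWriter;

    EpochCacheSketch(Consumer<List<Long>> checkpointWriter) {
        this.checkpointWriter = checkpointWriter;
    }

    void truncateFromEndAsyncFlush() {
        // Copy the current entries now; the scheduled task works only on this
        // snapshot, never on the mutable map it was copied from.
        List<Long> entries = new ArrayList<>(epochs.values());
        scheduler.execute(() -> checkpointWriter.accept(entries));
    }
}
```

The copy is cheap relative to the file flush, and it makes the scheduled task safe to run at any later time, even after further assignments or truncations.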

Review Comment:
   I filed JIRAs to address all of the comments:
   
   > rewrite testLogRecoveryMetrics by NoOpScheduler
   
   https://issues.apache.org/jira/browse/KAFKA-17166
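
   For reference, one way that test rewrite could look, purely as a sketch (the `NoOpScheduler` class and the trimmed-down `Scheduler` interface below are placeholders, not the actual server-common types):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Sketch only: a scheduler stub that records tasks instead of running them,
   // so a test like testLogRecoveryMetrics can assert on state before any
   // asynchronous checkpoint flush happens.
   interface Scheduler {
       void scheduleOnce(String name, Runnable task);
   }

   final class NoOpScheduler implements Scheduler {
       private final List<Runnable> deferred = new ArrayList<>();

       @Override
       public void scheduleOnce(String name, Runnable task) {
           deferred.add(task); // never executed automatically
       }

       // Test hook: run the captured tasks explicitly once the test is ready.
       void runAll() {
           deferred.forEach(Runnable::run);
           deferred.clear();
       }
   }
   ```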
   
   > Moving the flush call inside the read lock fixes this issue,
   
   https://issues.apache.org/jira/browse/KAFKA-17167
   
   > Yeah, could be an issue in some cases (e.g. deleteRecords is called frequently, and/or kafka-schedulers are busy) though.
   
   If this turns out to be a real issue, `LeaderEpochFileCache` should have its own scheduler instead of the shared one.
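
   Something along these lines, purely as a sketch (the class name is made up, and the JDK executor stands in for whichever scheduler type we would actually use; the point is a single-purpose, single-threaded scheduler that unrelated background work cannot starve):

   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.ThreadFactory;

   // Sketch of a dedicated scheduler owned by the epoch cache, so checkpoint
   // flushes are not queued behind other background tasks on a shared scheduler.
   final class EpochCacheFlushScheduler {
       private final ScheduledExecutorService executor;

       EpochCacheFlushScheduler(String topicPartition) {
           ThreadFactory factory = runnable -> {
               Thread t = new Thread(runnable, "leader-epoch-cache-flush-" + topicPartition);
               t.setDaemon(true);
               return t;
           };
           this.executor = Executors.newSingleThreadScheduledExecutor(factory);
       }

       void scheduleOnce(Runnable flushTask) {
           executor.execute(flushTask);
       }

       void shutdown() {
           executor.shutdown();
       }
   }
   ```

   That said, a dedicated thread is only justified if the shared scheduler is actually the bottleneck.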
   


