showuon commented on code in PR #14289:
URL: https://github.com/apache/kafka/pull/14289#discussion_r1493974832


##########
core/src/main/scala/kafka/server/ReplicaManager.scala:
##########
@@ -1408,6 +1408,16 @@ class ReplicaManager(val config: KafkaConfig,
         // progress in such cases and don't need to report a `RecordTooLargeException`
         new FetchDataInfo(givenFetchedDataInfo.fetchOffsetMetadata, MemoryRecords.EMPTY)
       } else {
+        // For last entries we assume that it is hot enough to still have all data in page cache.
+        // Most of fetch requests are fetching from the tail of the log, so this optimization should save
+        // call of additional sendfile(2) targeting /dev/null for populating page cache significantly.
+        if (!givenFetchedDataInfo.isLastSegment && givenFetchedDataInfo.records.isInstanceOf[FileRecords]) {
+          try {
+            givenFetchedDataInfo.records.asInstanceOf[FileRecords].prepareForRead()
+          } catch {
+            case e: Exception => warn("failed to prepare cache for read", e)

Review Comment:
   I'd question whether we need to log at WARN here: if something is wrong with `prepareForRead`, the WARN logs will keep recurring, but it won't impact the fetch at all, only a possible performance impact. Could we use INFO or maybe DEBUG level here? Also, could we add more detail to the log message, maybe:
   
   `Failed to prepare cache for read for performance improvement. This can be ignored if the fetch behavior works without any issue.`
   
   WDYT?
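
   The suggestion above boils down to: swallow the failure at a low log level, because the fetch path still works and only the page-cache warm-up is lost. A minimal standalone sketch of that shape (the `prepareForRead` stand-in and the `debugLog` callback here are hypothetical, not the real `FileRecords` API):
   
   ```scala
   // Sketch of the reviewer's suggestion: treat a prepareForRead() failure as a
   // best-effort optimization miss and log it at DEBUG, not WARN.
   object PrepareForReadSketch {
     // Stand-in for FileRecords.prepareForRead(); the real call issues a
     // sendfile(2) to /dev/null to populate the page cache.
     def prepareForRead(): Unit = throw new java.io.IOException("transfer failed")
   
     // debugLog is an injected DEBUG-level logger (hypothetical parameter for
     // illustration); the fetch continues regardless of the outcome.
     def safePrepare(debugLog: String => Unit): Unit = {
       try {
         prepareForRead()
       } catch {
         case e: Exception =>
           debugLog("Failed to prepare cache for read for performance improvement. " +
             s"This can be ignored if the fetch behavior works without any issue. Cause: ${e.getMessage}")
       }
     }
   }
   ```
   
   With this shape, a broken `prepareForRead` produces at most DEBUG noise while the fetch result is unaffected, which matches the "performance-only impact" argument above.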



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
