FrankYang0529 commented on code in PR #17199:
URL: https://github.com/apache/kafka/pull/17199#discussion_r1802621948


##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java:
##########
@@ -1740,14 +1742,19 @@ private void subscribeInternal(Collection<String> topics, Optional<ConsumerRebal
      * It is possible that {@link ErrorEvent an error}
      * could occur when processing the events. In such cases, the processor will take a reference to the first
      * error, continue to process the remaining events, and then throw the first error that occurred.
+     *
+     * Visible for testing.
      */
-    private boolean processBackgroundEvents() {
+    boolean processBackgroundEvents() {
         AtomicReference<KafkaException> firstError = new AtomicReference<>();
 
         LinkedList<BackgroundEvent> events = new LinkedList<>();
         backgroundEventQueue.drainTo(events);
+        kafkaConsumerMetrics.recordBackgroundEventQueueSize(backgroundEventQueue.size());
 
         for (BackgroundEvent event : events) {
+            kafkaConsumerMetrics.recordBackgroundEventQueueTime(time.milliseconds() - event.addedToQueueMs());
+            long startMs = time.milliseconds();

Review Comment:
   No, this metric measures how long a background event takes to be dequeued. In `BackgroundEventHandler#add`, we call `BackgroundEvent#setAddedToQueueMs`. When we start processing an event, we can use `current time - event.addedToQueueMs()` to know how long the event sat in the queue.


