asafm commented on code in PR #20859:
URL: https://github.com/apache/pulsar/pull/20859#discussion_r1275878865


##########
pip/pip-285.md:
##########
@@ -0,0 +1,73 @@
+# Background knowledge
+Existing monitoring items in Pulsar, such as `pulsar_subscription_back_log` and `pulsar_subscription_back_log_no_delayed`, provide valuable insight into the quantity of backlogged messages. However, there is no metric that directly measures the duration of the message backlog. Monitoring backlog duration is vital because it tells us how long messages have been sitting unconsumed in a subscription.
+
+# Motivation
+
+The motivation behind introducing the new monitoring item `pulsar_subscription_backlog_duration` is to effectively monitor the health of subscriptions within the Pulsar messaging system. This metric indicates whether there are messages that have not been successfully acknowledged (ACKed), which can signal consumer-side issues. Additionally, this monitoring item allows us to configure alerting, ensuring timely notifications to users and facilitating a proactive response to potential problems.
+
+Maintaining the health of subscriptions is of paramount importance for the 
smooth operation of a messaging system. As message delivery involves 
interaction between producers and consumers, backlogs or unacknowledged 
messages can lead to data delays or losses. By monitoring 
"pulsar_subscription_backlog_duration," we can gain real-time insights into the 
duration of message backlogs and promptly detect any potential processing 
issues within subscriptions.
+
+The configuration and alerting settings for this monitoring item play a 
crucial role in responding swiftly to issues. When 
"pulsar_subscription_backlog_duration" indicates an abnormal increase in 
duration or unusual message backlogs, system administrators receive immediate 
alert notifications. These alerts enable administrators to quickly identify 
problems and take necessary measures to prevent message losses or further 
delays.
+
+In conclusion, the introduction of the `pulsar_subscription_backlog_duration` monitoring item enables effective monitoring of subscription health, real-time issue detection, and prevention of message delivery delays and losses. Additionally, timely alerting empowers proactive responses, ensuring the reliability and efficiency of the messaging system. This is essential for providing high-quality message delivery, a good user experience, and data integrity.
+
+# Goals
+
+## In Scope
+
+* Add the new stat to `SubscriptionStatsImpl`
+* Expose it as a metric
+
+
+## Out of Scope
+
+* Implementing changes to the core functionality of the Pulsar messaging 
system itself.
+* `NonPersistentTopic` is not covered.
+* Delayed messages (`DelayMessage`) are not covered.
+
+# High Level Design
+
+* add a config `subscriptionBacklogDurationEnabled` to `broker.conf`
+  * note: because we need to read the message at the position right after the `markDelete` position, there is a performance cost when that entry is not in the cache, so the feature is guarded by this flag
+* add a `backlogDuration` field to `SubscriptionStatsImpl`
+* add a `backlogDuration` field to `AggregatedSubscriptionStats`
+* add a metric item named `pulsar_subscription_back_log_duration` (see the sketch after this list)
+
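+A minimal sketch of the shape these additions might take (illustrative only; field visibility, the default value, and the time unit are assumptions, not part of this proposal):

```java
// Illustrative sketch only - the names come from the list above, everything else is assumed.

// broker.conf: feature flag guarding the extra read, assumed to default to false
// subscriptionBacklogDurationEnabled=false

class SubscriptionStatsImpl {
    // Time elapsed (assumed milliseconds) since the publish time of the oldest
    // unacknowledged message in this subscription; 0 when there is no backlog.
    long backlogDuration;
}

class AggregatedSubscriptionStats {
    // Same value, carried into the per-subscription aggregation used by the metrics exporter.
    long backlogDuration;
}
```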
+# Detailed Design
+
+
+## Design & Implementation Details
+* when `PersistentSubscription` invokes `getStats`, read the entry at position (`markDelete` + 1), parse it into `MessageMetadata`, and use its `publish_time` as the `earliestUnAckMessagePublishTime` (sketched below)
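+A rough sketch of that computation, written synchronously for brevity (the `nextValidPosition` and `readEntry` helpers are hypothetical stand-ins for the managed-ledger/cursor read path; `Commands.parseMessageMetadata` and `MessageMetadata#getPublishTime` are existing Pulsar utilities):

```java
// Sketch: how long has the oldest unacknowledged message been waiting?
// Error handling and the async read callbacks are omitted.
// Assumed imports: org.apache.bookkeeper.mledger.{ManagedCursor, Position, Entry},
//                  org.apache.pulsar.common.api.proto.MessageMetadata,
//                  org.apache.pulsar.common.protocol.Commands
long computeBacklogDuration(ManagedCursor cursor) {
    Position markDelete = cursor.getMarkDeletedPosition();
    Position oldestUnacked = nextValidPosition(markDelete);   // (markDelete + 1), hypothetical helper
    Entry entry = readEntry(oldestUnacked);                    // may hit BookKeeper when not cached
    try {
        MessageMetadata metadata = Commands.parseMessageMetadata(entry.getDataBuffer());
        long earliestUnAckMessagePublishTime = metadata.getPublishTime();
        return System.currentTimeMillis() - earliestUnAckMessagePublishTime;
    } finally {
        entry.release();
    }
}
```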

Review Comment:
   First the idea is inspired by https://github.com/seglo/kafka-lag-exporter.
   We used it in Logz.io and it gave awesome results. The error was tiny.
   
   Can you please explain why it will cause a large error?
   I don't agree with you about the cache. Given a large backlog - which is exactly *when* you want the alarm to sound - the entry will not be in the cache, since you're too far behind.
   
   I think it doesn't make sense for a /metrics call to perform I/O. It should return super fast. Iterating over all existing subscriptions and doing a single I/O call for each would take way too long, given a large backlog in each.
   
   Think of a "normal" system: you close ledgers quite rapidly, which means the diff between the 1st message's publish time and the last won't be that big anyway. So the "error" of the estimation is not that big. 2nd, you can save 10 timestamps, which means you slice the ledger into 10 pieces (virtually of course), which also minimizes the error you're likely to have from estimation.
   
   Even in the case where you suddenly have 10 seconds of no message publishing and then you publish, the TTL for ledger closing will kick in and save you from having too large a gap between them.
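   
   To make the estimation idea concrete, here is a rough sketch (purely illustrative, not a Pulsar API): assume a handful of (entryId, publishTime) samples are recorded per ledger as it is written, then interpolate between the two samples surrounding the target entry, so /metrics never performs I/O.
   
   ```java
   // Sketch: estimate the publish time of an arbitrary entry from sampled timestamps.
   // 'samples' maps sampled entryId -> publish time (ms) for one ledger and is assumed non-empty;
   // all names are illustrative.
   long estimatePublishTime(long entryId, java.util.NavigableMap<Long, Long> samples) {
       java.util.Map.Entry<Long, Long> floor = samples.floorEntry(entryId);
       java.util.Map.Entry<Long, Long> ceil = samples.ceilingEntry(entryId);
       if (floor == null) return ceil.getValue();                 // target is before the first sample
       if (ceil == null || ceil.getKey().equals(floor.getKey())) return floor.getValue();
       double fraction = (double) (entryId - floor.getKey()) / (ceil.getKey() - floor.getKey());
       return floor.getValue() + Math.round(fraction * (ceil.getValue() - floor.getValue()));
   }
   ```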
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
