michaeljmarshall commented on code in PR #815:
URL: https://github.com/apache/pulsar-client-go/pull/815#discussion_r931568426


##########
pulsar/producer_partition.go:
##########
@@ -419,6 +419,9 @@ func (p *partitionProducer) reconnectToBroker() {
                if maxRetry > 0 {
                        maxRetry--
                }
+               if maxRetry == 0 || backoff.IsMaxBackoffReached() {
+                       p.metrics.ProducersReconnectFailure.Inc()

Review Comment:
   Same comment here, but for the producer case.



##########
pulsar/consumer_partition.go:
##########
@@ -1155,6 +1155,9 @@ func (pc *partitionConsumer) reconnectToBroker() {
                if maxRetry > 0 {
                        maxRetry--
                }
+               if maxRetry == 0 || backoff.IsMaxBackoffReached() {
+                       pc.metrics.ConsumersReconnectFailure.Inc()

Review Comment:
   This will only increment when the max retries have been exhausted, which 
won't occur when unlimited retries are enabled. I think it'd be reasonable to 
increment this metric for each failed attempt to reconnect to the broker. 
   
   > Because reconnecting to the broker during producer/consumer creation uses 
doubling backoff retry, to reduce excessive retry-failure noise, these two 
counters will only be incremented when either of two conditions is met.
   
   Is there a reason you view a regular (non-final) failure as noise? Given 
that we're willing to log that the consumer failed to connect, I think it's 
reasonable to increment the metric for each failed attempt as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
