venkateshbalaji99 commented on PR #41199:
URL: https://github.com/apache/spark/pull/41199#issuecomment-1690283871

   Hi @paymog , as you pointed out, this seems to be an issue in Flink too: 
the metric value is not decremented after each push, while StatsD expects it 
to be. The motivation for not adding decrement logic here is that the current 
approach is more resilient to intermittent failures (such as dropped packets), 
since the total value is still retained; if we instead sent deltas ourselves, 
those kinds of errors could add up over time. Since gauge semantics fully 
match our use case, I think remapping Spark's counter metrics to be 
interpreted as gauges by StatsD would be the simplest solution.
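   To make the distinction concrete, here is a minimal sketch of the StatsD 
line protocol involved (the helper names below are hypothetical, not part of 
Spark's StatsdSink): a counter (`|c`) expects a delta per push, while a gauge 
(`|g`) takes an absolute value, so a cumulative Spark counter maps naturally 
onto a gauge.

```python
# Hedged sketch of the StatsD line-protocol difference; the helper
# names here are hypothetical, not Spark's StatsdSink API.

def format_counter(name: str, delta: int) -> str:
    # StatsD counters ("|c") expect a delta since the last push;
    # the server sums every delta it receives.
    return f"{name}:{delta}|c"

def format_gauge(name: str, total: int) -> str:
    # StatsD gauges ("|g") take an absolute value; each push
    # overwrites the previous one, so a dropped packet only delays
    # the update instead of corrupting the running total.
    return f"{name}:{total}|g"

# A Spark counter reports a cumulative total on every push.
cumulative = [10, 25, 25, 40]

# Pushing the cumulative value with counter semantics would over-count,
# since the server would sum 10 + 25 + 25 + 40 = 100.
as_counter = [format_counter("spark.records", v) for v in cumulative]

# Pushing it as a gauge keeps the true total: the last value wins (40),
# and a lost intermediate packet does not accumulate error.
as_gauge = [format_gauge("spark.records", v) for v in cumulative]

print(as_counter[-1])  # spark.records:40|c  (wrong semantics for a total)
print(as_gauge[-1])    # spark.records:40|g  (absolute total, as intended)
```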


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

