neilramaswamy commented on code in PR #48862:
URL: https://github.com/apache/spark/pull/48862#discussion_r1847086603
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TransformWithStateExec.scala:
##########
@@ -416,6 +429,8 @@ case class TransformWithStateExec(
StatefulOperatorCustomSumMetric("numMapStateVars", "Number of map state variables"),
StatefulOperatorCustomSumMetric("numDeletedStateVars", "Number of deleted state variables"),
// metrics around timers
+ StatefulOperatorCustomSumMetric("timerProcessingTimeMs",
+ "Number of milliseconds taken to process all timers"),
Review Comment:
I could add a presence test, but I'm not sure it's very useful. Here's what
we could do instead: write a state processor that sleeps 500ms for every timer
that fires, and schedule `k` timers (for some very small `k`, so the test
doesn't take too long to pass). Then we can assert that
`timerProcessingTimeMs >= 500 * k`. Wdyt?
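To make the proposal concrete, here is a self-contained sketch of the lower-bound timing assertion, using a plain `Thread.sleep` in place of the real timer callback. `processTimer` is a hypothetical stand-in for the state processor's timer handler, and `elapsedMs` stands in for the `timerProcessingTimeMs` metric; the sleep is shortened from 500ms to keep the sketch fast. This is not the actual Spark test harness, just the shape of the check.

```scala
// Sketch of the proposed test: if each of k timers sleeps a fixed amount,
// the total timer-processing time must be at least k * sleep.
object TimerLowerBoundSketch {
  val SleepPerTimerMs = 100L // 500 in the comment; smaller here for speed
  val NumTimers = 3          // the "k" timers

  // Hypothetical stand-in for a StatefulProcessor timer callback that
  // sleeps on every timer that fires.
  def processTimer(): Unit = Thread.sleep(SleepPerTimerMs)

  def main(args: Array[String]): Unit = {
    val start = System.nanoTime()
    (1 to NumTimers).foreach(_ => processTimer())
    // Stand-in for reading timerProcessingTimeMs from the operator metrics.
    val elapsedMs = (System.nanoTime() - start) / 1000000L
    // Each timer slept SleepPerTimerMs, so the total cannot be smaller
    // than the lower bound (it can be larger due to scheduling overhead).
    assert(elapsedMs >= SleepPerTimerMs * NumTimers,
      s"elapsed ${elapsedMs}ms below lower bound")
    println(s"elapsedMs=$elapsedMs lowerBound=${SleepPerTimerMs * NumTimers}")
  }
}
```

The assertion is deliberately a lower bound rather than an equality: sleeps and the metric both include scheduling jitter, so only `>=` is safe in CI.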
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]