Github user tdas commented on the issue:
https://github.com/apache/spark/pull/21469
I am having second thoughts about this. Exposing the entire memory usage
of all the loaded maps as another custom metric just adds more confusion.
Rather, the point of the main state metric `memoryUsedBytes` is to capture
how much memory is occupied by one partition of the state, and that should
implicitly cover all the loaded versions of that state partition. So I
strongly feel that instead of adding a custom metric, we should change the
existing `memoryUsedBytes` to capture all the memory.
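
To make the proposal concrete, here is a minimal Scala sketch of the kind
of change I mean. `loadedMaps`, `MapType`, and the method names are
illustrative stand-ins, not the actual `HDFSBackedStateStoreProvider` code:

```scala
import scala.collection.mutable
import org.apache.spark.util.SizeEstimator

object StateMemorySketch {
  // One in-memory map per loaded version of this state partition,
  // keyed by version number (hypothetical stand-in for the provider's cache).
  type MapType = mutable.HashMap[String, Array[Byte]]
  private val loadedMaps = new mutable.HashMap[Long, MapType]()

  // Current behavior (illustrative): report only the requested version's map.
  def memoryUsedBytesCurrentOnly(version: Long): Long =
    loadedMaps.get(version).map(m => SizeEstimator.estimate(m)).getOrElse(0L)

  // Proposed behavior: sum over every loaded version, so the single
  // memoryUsedBytes metric covers all cached versions of the partition.
  def memoryUsedBytesAllVersions: Long =
    loadedMaps.valuesIterator.map(m => SizeEstimator.estimate(m)).sum
}
```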
I am fine with adding the custom metrics for the hit and miss counts. No
questions about that.
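
For those counts, a rough sketch of the kind of counters I mean (the class
and method names here are hypothetical, not Spark's actual custom-metric
API):

```scala
import java.util.concurrent.atomic.AtomicLong

// Hypothetical counters for the loaded-map cache: incremented when a
// requested version is found in (hit) or absent from (miss) the cache.
class LoadedMapCacheCounters {
  private val hits = new AtomicLong(0L)
  private val misses = new AtomicLong(0L)

  def recordHit(): Unit = hits.incrementAndGet()
  def recordMiss(): Unit = misses.incrementAndGet()

  // Snapshot values that the per-batch custom metrics could report.
  def hitCount: Long = hits.get()
  def missCount: Long = misses.get()
}
```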
What do you think?