Hey Bruno,

Thanks for the KIP! I have one high-level concern, which is that we should consider
reporting these metrics at the per-store level rather than instance-wide. I know I was
the one who first proposed making it instance-wide, so bear with me:

While I would still argue that the instance-wide memory usage is probably the most
*useful*, exposing the metrics at the store level does not prevent users from monitoring
the instance-wide memory. They should be able to roll up all the store-level metrics on
an instance to compute the total off-heap memory. But reporting only the rolled-up,
instance-wide number would prevent them from using these metrics to debug the rare case
where one store is using significantly more memory than expected.
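
For reference, here is a rough sketch of how a user could do that roll-up themselves over
KafkaStreams#metrics(). The metric group and name below ("stream-state-metrics" /
"size-all-mem-tables") are only placeholders for whatever names the KIP ends up defining:

import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

public class OffHeapRollup {

    // Sums one store-level RocksDB memory metric across all stores on this instance.
    // Group and metric name are placeholders, not the final names from the KIP.
    public static long totalOffHeapBytes(final KafkaStreams streams) {
        long total = 0L;
        for (final Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
            final MetricName metricName = entry.getKey();
            if ("stream-state-metrics".equals(metricName.group())
                && "size-all-mem-tables".equals(metricName.name())) {
                total += ((Number) entry.getValue().metricValue()).longValue();
            }
        }
        return total;
    }
}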

It's also worth considering that some users may be using the bounded memory config setter
to put a cap on the off-heap memory of the entire process, in which case the memory usage
metric of any single store would reflect the memory usage of the entire instance anyway.
In that case any effort to roll up the memory usages ourselves would just be wasted.
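
For context, this is the kind of config setter I mean -- a minimal sketch of the
shared-cache pattern from the memory management docs, with placeholder sizes. Since every
store on the instance points at the same Cache and WriteBufferManager, a per-store memory
metric backed by those objects effectively reports the usage of the whole instance:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    // Placeholder limits: 100 MB total off-heap, half of it available to memtables.
    private static final long TOTAL_OFF_HEAP_MEMORY = 100 * 1024 * 1024L;
    private static final long TOTAL_MEMTABLE_MEMORY = 50 * 1024 * 1024L;

    // One cache and write buffer manager shared by every store on the instance.
    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, 0.1);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
        new WriteBufferManager(TOTAL_MEMTABLE_MEMORY, CACHE);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(CACHE);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // The cache and write buffer manager are shared across stores, so do not close them here.
    }
}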

Sorry for the reversal, but on second thought I'm pretty strongly in favor of reporting
these at the store level.

Best,
Sophie

On Wed, May 6, 2020 at 8:41 AM Bruno Cadonna <br...@confluent.io> wrote:

> Hi all,
>
> I'd like to discuss KIP-607 that aims to add RocksDB memory usage
> metrics to Kafka Streams.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-607%3A+Add+Metrics+to+Record+the+Memory+Used+by+RocksDB+to+Kafka+Streams
>
> Best,
> Bruno
>
