[ https://issues.apache.org/jira/browse/FLINK-34558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824593#comment-17824593 ]

Hangxiang Yu commented on FLINK-34558:
--------------------------------------

I think this metric may be useful, but since it sits on the critical path of 
every element, we should treat it very carefully.

So I'd suggest starting with:
 # implementing this just like state latency tracking (sampling only, and 
disabled by default), roughly as sketched below;
 # running a micro benchmark and sharing the results first (ideally three 
results: before this PR, with the metric disabled, and with it enabled).
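
To make point 1 more concrete, here is a minimal, hypothetical sketch of such a sampled size tracker. It is not based on any existing Flink class: the class name, the sampling scheme (roughly one hit per sampleInterval accesses, mirroring the spirit of state latency tracking), and disabling via a non-positive interval are all assumptions for illustration. Only org.apache.flink.metrics.Histogram is an actual Flink interface.

{code:java}
import java.util.concurrent.ThreadLocalRandom;

import org.apache.flink.metrics.Histogram;

/**
 * Hypothetical helper that reports serialized key/value sizes to histograms,
 * but only for a sampled fraction of state accesses, so the cost on the
 * per-element critical path stays bounded. A non-positive sampleInterval
 * disables it completely (the suggested default).
 */
public class SampledKvSizeTracker {

    private final Histogram keySizeHistogram;
    private final Histogram valueSizeHistogram;
    private final int sampleInterval; // e.g. 100 => roughly 1 out of 100 accesses sampled

    public SampledKvSizeTracker(
            Histogram keySizeHistogram, Histogram valueSizeHistogram, int sampleInterval) {
        this.keySizeHistogram = keySizeHistogram;
        this.valueSizeHistogram = valueSizeHistogram;
        this.sampleInterval = sampleInterval;
    }

    /** Cheap check on the hot path; only sampled accesses pay for the histogram updates. */
    public boolean shouldSample() {
        return sampleInterval > 0
                && ThreadLocalRandom.current().nextInt(sampleInterval) == 0;
    }

    /** Called with the byte arrays produced by key/value serialization. */
    public void record(byte[] serializedKey, byte[] serializedValue) {
        keySizeHistogram.update(serializedKey.length);
        valueSizeHistogram.update(serializedValue.length);
    }
}
{code}

A state accessor would wrap the reporting in an if (tracker.shouldSample()) check right after serialization, so the disabled/unsampled path costs only a single branch; the micro benchmark in point 2 should confirm that this overhead is indeed negligible.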

> Add RocksDB key/value size metrics
> ----------------------------------
>
>                 Key: FLINK-34558
>                 URL: https://issues.apache.org/jira/browse/FLINK-34558
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / State Backends
>    Affects Versions: 1.19.0
>            Reporter: Jufang He
>            Priority: Major
>
> In some scenarios, poor RocksDB performance may be caused by overly large 
> key/value sizes, but there are currently no metrics for key/value size. By 
> adding such metrics, we can conveniently compute the distribution of 
> key/value sizes, such as the average and p99 size. To limit the negative 
> impact of these metrics on RocksDB performance, we could support sampling.
> A possible implementation is as follows: after RocksDB key/value 
> serialization, we could obtain the byte array and report its length through 
> histogram metrics.
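
As a rough illustration of the "possible implementation" described above, the histograms could be registered on the state backend's metric group along these lines. The class, method, and metric names are made up for this sketch; MetricGroup#histogram and the Histogram interface are existing Flink APIs, and DescriptiveStatisticsHistogram (a window-based histogram Flink already uses elsewhere, e.g. for latency tracking) is assumed here, though any Histogram implementation would do and the window size of 128 is arbitrary.

{code:java}
import org.apache.flink.metrics.Histogram;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogram;

/** Hypothetical registration of key/value size histograms (names are illustrative only). */
public final class KvSizeMetrics {

    private static final int HISTOGRAM_WINDOW_SIZE = 128; // arbitrary sliding-window size

    private KvSizeMetrics() {}

    public static Histogram keySizeHistogram(MetricGroup stateBackendGroup) {
        return stateBackendGroup.histogram(
                "rocksdbKeySize", new DescriptiveStatisticsHistogram(HISTOGRAM_WINDOW_SIZE));
    }

    public static Histogram valueSizeHistogram(MetricGroup stateBackendGroup) {
        return stateBackendGroup.histogram(
                "rocksdbValueSize", new DescriptiveStatisticsHistogram(HISTOGRAM_WINDOW_SIZE));
    }
}
{code}

The average and p99 sizes mentioned in the description would then be available through Histogram#getStatistics() or whichever metrics reporter is configured.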


