To answer your question:
org.apache.cassandra.metrics:type=Table,name=ReadTotalLatency gives you
the total local read latency in microseconds, and you can get the read
count from the Count attribute of the matching ReadLatency metric.
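
For reference, here is a minimal JMX sketch of pulling those two
attributes. The host, port, and keyspace/table names (my_ks/my_table)
are placeholders I've made up, so adjust them for your cluster; both
MBeans expose the value you want under the "Count" attribute:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ReadLatencyProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder host/port (7199 is the default JMX port).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                // Placeholder keyspace/table in the ObjectNames below.
                ObjectName totalLatency = new ObjectName(
                        "org.apache.cassandra.metrics:type=Table,keyspace=my_ks,scope=my_table,name=ReadTotalLatency");
                ObjectName readLatency = new ObjectName(
                        "org.apache.cassandra.metrics:type=Table,keyspace=my_ks,scope=my_table,name=ReadLatency");
                // ReadTotalLatency is a Counter: Count is the running total in micros.
                long totalMicros =
                        ((Number) mbs.getAttribute(totalLatency, "Count")).longValue();
                // ReadLatency is a Timer: Count is the number of local reads.
                long readCount =
                        ((Number) mbs.getAttribute(readLatency, "Count")).longValue();
                System.out.printf("total=%d us, count=%d%n", totalMicros, readCount);
            }
        }
    }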

If you are going to do that, be sure to work on the delta from the
previous query (new - last) for both the total latency and the count, or
else you will slowly converge on a global average that almost never
changes, because the sheer volume of accumulated reads washes out any
outliers. The Mean attribute of the ReadLatency metric you mentioned will
actually give you an approximation of this, since it is the total/count
of a decaying histogram of the latencies. It will, however, be even less
accurate than using deltas, since the decay boundaries won't necessarily
line up with your polling interval, and the histogram buckets introduce a
worst-case 20% round-up. Even with deltas, though, an average hides
outliers: you could have really bad queries that don't even show up as a
tick on your graph (although *generally* they will).
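
Concretely, the delta bookkeeping is only a few lines. A sketch (feed it
the two raw Count values from each poll; the class and method names are
mine, not from any library):

    /** Tracks the last sample of one table's read metrics. */
    class DeltaLatency {
        private long prevTotalMicros, prevCount;

        /**
         * Returns the mean read latency in microseconds over the interval
         * since the previous sample, or NaN if no reads happened.
         */
        double sample(long totalMicros, long count) {
            long dTotal = totalMicros - prevTotalMicros;
            long dCount = count - prevCount;
            prevTotalMicros = totalMicros;
            prevCount = count;
            return dCount > 0 ? (double) dTotal / dCount : Double.NaN;
        }
    }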

Chris

On Wed, May 29, 2019 at 9:32 AM shalom sagges <shalomsag...@gmail.com>
wrote:

> Hi All,
>
> I'm creating a dashboard that should collect read/write latency metrics on
> C* 3.x.
> In older versions (e.g. 2.0) I used to divide the total read latency in
> microseconds by the read count.
>
> Is there a metric attribute that shows read/write latency without the need
> to do the math, such as in nodetool tablestats "Local read latency" output?
> I saw there's a Mean attribute in org.apache.cassandra.metrics.ReadLatency
> but I'm not sure this is the right one.
>
> I'd really appreciate your help on this one.
> Thanks!
>
>
>
