[
https://issues.apache.org/jira/browse/CASSANDRA-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Blake Eggleston updated CASSANDRA-15569:
----------------------------------------
Since Version: 4.0-alpha
Source Control Link:
https://github.com/apache/cassandra/commit/247502c5d19c181bbe0a224da3ad6ebd0156f607
Resolution: Fixed
Status: Resolved (was: Ready to Commit)
+1, this fixes the potential double counting issue. Committed to trunk as
[247502c5d19c181bbe0a224da3ad6ebd0156f607
|https://github.com/apache/cassandra/commit/247502c5d19c181bbe0a224da3ad6ebd0156f607]
It wouldn’t be a bad idea to revisit whether we want to update this metric
on failed or speculated reads (or writes). We speculate when it looks like a
read won’t get a response quickly, so using the latency of reads that were
speculated on, needed read repair, or timed out to inform that decision seems
like it would artificially inflate the number. It may turn out that this is
intentional, but it would be nice to have a comment explaining this if
that’s the case.
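To make the concern above concrete, here is a minimal sketch of what "don't feed speculated/failed reads into the metric" could look like. All names here are illustrative (a plain Map stands in for Cassandra's real per-table histogram; `maybeRecord` and the `Outcome` enum are hypothetical), so this is only the shape of the idea, not the actual code path:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReadLatencySketch {
    // Hypothetical per-table latency samples; the real metric is a histogram.
    static final Map<String, List<Long>> readLatencies = new HashMap<>();

    enum Outcome { COMPLETED, SPECULATED, READ_REPAIRED, TIMED_OUT }

    // Record coordinator read latency only for reads that completed normally,
    // so speculated, read-repaired, or timed-out reads don't inflate the
    // numbers used to decide when to speculate.
    static void maybeRecord(String table, long latencyNanos, Outcome outcome) {
        if (outcome == Outcome.COMPLETED)
            readLatencies.computeIfAbsent(table, t -> new ArrayList<>()).add(latencyNanos);
    }

    public static void main(String[] args) {
        maybeRecord("ks.tbl", 500_000L, Outcome.COMPLETED);
        maybeRecord("ks.tbl", 90_000_000L, Outcome.SPECULATED);
        // Only the completed read is counted; the slow speculated one is dropped.
        System.out.println(readLatencies.get("ks.tbl").size()); // prints 1
    }
}
```

Whether dropping those samples is the right trade-off (versus intentionally including slow outliers) is exactly the question that deserves a comment in the code.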
> StorageProxy updateCoordinatorWriteLatencyTableMetric can produce misleading
> metrics
> ------------------------------------------------------------------------------------
>
> Key: CASSANDRA-15569
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15569
> Project: Cassandra
> Issue Type: Bug
> Components: Consistency/Coordination, Legacy/Local Write-Read Paths,
> Observability/Metrics
> Reporter: David Capwell
> Assignee: David Capwell
> Priority: Normal
> Labels: pull-request-available
> Fix For: 4.0-alpha
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> If multiple mutations in a single write affect the same table, the
> coordinator write latency metric for that table is updated once per
> mutation, counting the write multiple times.
> [Circle
> CI|https://circleci.com/gh/dcapwell/cassandra/tree/coordinatorWriterMetricDoubleCounts]
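The double count described in the issue can be avoided by collecting the distinct tables across all mutations before touching any metric. A minimal sketch of that shape, with illustrative names (a plain Map stands in for the per-table metrics objects; this is not the actual committed change):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WriteLatencySketch {
    // Hypothetical per-table latency samples; the real code updates a
    // histogram on each table's metrics object.
    static final Map<String, List<Long>> writeLatencies = new HashMap<>();

    // Collect the distinct tables across all mutations first, so a table
    // touched by several mutations in one write is only counted once.
    static void updateCoordinatorWriteLatency(List<List<String>> mutationTables,
                                              long latencyNanos) {
        Set<String> distinct = new HashSet<>();
        for (List<String> tables : mutationTables)
            distinct.addAll(tables);
        for (String table : distinct)
            writeLatencies.computeIfAbsent(table, t -> new ArrayList<>()).add(latencyNanos);
    }

    public static void main(String[] args) {
        // Two mutations in one write both touch ks.tbl.
        updateCoordinatorWriteLatency(
                List.of(List.of("ks.tbl"), List.of("ks.tbl", "ks.other")), 42L);
        System.out.println(writeLatencies.get("ks.tbl").size()); // prints 1, not 2
    }
}
```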
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]