[
https://issues.apache.org/jira/browse/CASSANDRA-13975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249540#comment-16249540
]
Aleksey Yeschenko commented on CASSANDRA-13975:
-----------------------------------------------
Thanks, committed as
[f1e850a492126572efc636a6838cff90333806b9|https://github.com/apache/cassandra/commit/f1e850a492126572efc636a6838cff90333806b9]
to 3.0 and merged up with 3.11 and trunk.
> Add a workaround for overly large read repair mutations
> -------------------------------------------------------
>
> Key: CASSANDRA-13975
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13975
> Project: Cassandra
> Issue Type: Bug
> Components: Coordination
> Reporter: Aleksey Yeschenko
> Assignee: Aleksey Yeschenko
> Fix For: 3.0.16, 3.11.2
>
>
> It's currently possible for {{DataResolver}} to accumulate more read repair
> changes than would fit in a single serialized mutation. When that happens, the
> node receiving the mutation fails, the read times out, and reads of the
> affected partitions cannot proceed until the operator runs repair or manually
> drops those partitions.
> Ideally we should either perform read repair iteratively, or at least split
> the resulting mutation into smaller chunks. In the meantime, for 3.0.x, I
> suggest we add logging to catch this, plus a -D flag that allows the read to
> proceed as-is, without read repair, when the mutation is too large.
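The guard described above can be sketched roughly as follows. This is an illustrative assumption, not the committed code: the class and method names, the 16 MB limit (analogous to Cassandra's default max mutation size, half a commitlog segment), and the boolean flag standing in for the proposed -D switch are all hypothetical.

```java
// Hypothetical sketch of the proposed size guard for read repair mutations.
public class ReadRepairSizeGuard {
    // Assumed limit; Cassandra's real limit derives from the commitlog segment size.
    static final long MAX_MUTATION_SIZE = 16L * 1024 * 1024;

    // Returns true if the repair mutation should be sent. Returning false means
    // the read proceeds without repairing (the proposed -D escape hatch);
    // without the flag, an oversized mutation is a hard error.
    static boolean shouldSendRepairMutation(long serializedSize, boolean proceedWithoutRepair) {
        if (serializedSize <= MAX_MUTATION_SIZE)
            return true;
        if (proceedWithoutRepair) {
            // The proposed logging: surface the oversized mutation to the operator.
            System.err.println("WARN: read repair mutation of " + serializedSize
                               + " bytes exceeds limit; skipping read repair");
            return false;
        }
        throw new IllegalStateException(
            "Read repair mutation too large: " + serializedSize + " bytes");
    }

    public static void main(String[] args) {
        System.out.println(shouldSendRepairMutation(1024, false));             // prints true
        System.out.println(shouldSendRepairMutation(32L * 1024 * 1024, true)); // prints false
    }
}
```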
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)