[
https://issues.apache.org/jira/browse/CASSANDRA-13975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17348795#comment-17348795
]
Paulo Motta commented on CASSANDRA-13975:
-----------------------------------------
Is there a plan for making this the default behavior? I don't see why it
shouldn't be the default in a major version.
> Add a workaround for overly large read repair mutations
> -------------------------------------------------------
>
> Key: CASSANDRA-13975
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13975
> Project: Cassandra
> Issue Type: Bug
> Components: Legacy/Coordination
> Reporter: Aleksey Yeschenko
> Assignee: Aleksey Yeschenko
> Priority: Normal
> Fix For: 3.0.16, 3.11.2
>
>
> It's currently possible for {{DataResolver}} to accumulate more changes to
> read repair than would fit in a single serialized mutation. If that happens,
> the node receiving the mutation will fail, the read will time out, and the
> read won't be able to proceed until the operator runs repair or manually
> drops the affected partitions.
> Ideally we should either read repair iteratively, or at least split the
> resulting mutation into smaller chunks at the end. In the meantime, for
> 3.0.x, I suggest we add logging to catch this, and a -D flag that allows the
> request to proceed as is, without read repair, when the mutation is too
> large.
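To make the proposed workaround concrete, here is a minimal sketch of a size guard around sending a read repair mutation. It is not the actual 3.0.16/3.11.2 patch: the system property name, class name, size limit, and method names below are illustrative assumptions, and the real code derives the limit from the commitlog segment size and uses the project's own logging.

{code:java}
// Hypothetical sketch only; identifiers are illustrative, not Cassandra's.
import java.util.logging.Logger;

public class ReadRepairSizeGuard
{
    private static final Logger logger =
        Logger.getLogger(ReadRepairSizeGuard.class.getName());

    // In Cassandra the limit is tied to the commitlog segment size; a fixed
    // value is used here only to keep the sketch self-contained.
    private static final long MAX_MUTATION_SIZE_BYTES = 16L * 1024 * 1024;

    // Hypothetical -D flag in the spirit of the one proposed in this ticket.
    private static final boolean DROP_OVERSIZED =
        Boolean.getBoolean("cassandra.drop_oversized_readrepair_mutations");

    /**
     * Sends the repair mutation unless it is too large and the operator has
     * opted in to dropping oversized read repair mutations.
     *
     * @return true if the mutation was sent, false if it was dropped
     */
    public static boolean maybeSendRepairMutation(long serializedSize, Runnable send)
    {
        if (serializedSize <= MAX_MUTATION_SIZE_BYTES)
        {
            send.run();
            return true;
        }

        logger.warning(String.format(
            "Read repair mutation of %d bytes exceeds the %d byte limit",
            serializedSize, MAX_MUTATION_SIZE_BYTES));

        if (DROP_OVERSIZED)
            return false;   // proceed with the read, skipping read repair

        send.run();         // legacy behaviour: send anyway, replica may fail
        return true;
    }
}
{code}

In this sketch the flag is opt-in, matching the ticket's intent for 3.0.x: without the -D property the legacy behaviour is preserved, which is exactly the default that the comment above questions for a major version.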