[
https://issues.apache.org/jira/browse/CASSANDRA-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815672#comment-17815672
]
Andres de la Peña commented on CASSANDRA-17787:
-----------------------------------------------
CASSANDRA-19336 has added a new {{concurrent_merkle_tree_requests}} config
property that limits the number of concurrent Merkle tree requests and their
merging. Thus, the space taken by the Merkle trees of a repair command should
never exceed {{concurrent_merkle_tree_requests}} * {{repair_session_space}},
even if that command includes multiple tables or virtual nodes. This applies
only to the Merkle tree part of repair; the remaining steps preserve the same
parallelism they had before.
We can probably close this as a duplicate.
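For illustration, the bound described above could be expressed as a cassandra.yaml sketch. The values below are illustrative assumptions, not recommendations (by default {{repair_session_space}} is computed from the heap size rather than set explicitly):

```yaml
# Limit on concurrent Merkle tree requests and their merging (CASSANDRA-19336).
# Illustrative value; the appropriate setting depends on the cluster.
concurrent_merkle_tree_requests: 2

# Per-session cap on memory used for Merkle trees. Illustrative value;
# if unset, Cassandra derives a default from the heap size.
repair_session_space: 256MiB

# With these settings, the Merkle tree memory of a single repair command
# should stay under concurrent_merkle_tree_requests * repair_session_space,
# i.e. 2 * 256MiB = 512MiB, regardless of how many tables or vnodes it spans.
```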
> Full repair on a keyspace with a large amount of tables causes OOM
> ------------------------------------------------------------------
>
> Key: CASSANDRA-17787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17787
> Project: Cassandra
> Issue Type: Bug
> Components: Consistency/Repair
> Reporter: Brandon Williams
> Priority: Normal
> Labels: lhf
> Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x
>
>
> Running a nodetool repair -pr --full on a keyspace with a few hundred tables
> will cause a direct memory OOM, or lots of heap pressure with
> use_offheap_merkle_trees: false. Adjusting repair_session_space_in_mb does
> not seem to help. From an initial look at a heap dump, it appears the node is
> holding many _remote_ trees in memory.