[
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600067#comment-14600067
]
Constance Eustace edited comment on CASSANDRA-9640 at 6/24/15 8:17 PM:
-----------------------------------------------------------------------
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
cass version: 2.1.6
> Nodetool repair of very wide, large rows causes GC pressure and
> destabilization
> -------------------------------------------------------------------------------
>
> Key: CASSANDRA-9640
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
> Project: Cassandra
> Issue Type: Bug
> Environment: AWS, ~8GB heap
> Reporter: Constance Eustace
> Priority: Minor
> Fix For: 2.1.x
>
>
> We've noticed our nodes becoming unstable, with large, unrecoverable Old Gen
> GCs until OOM.
> This appears to happen around the time of repair, and the specific cause seems
> to be one of our report computation tables, which involves possibly very wide
> rows with 10GB of data in it. This is an RF 3 table in a four-node cluster.
> We truncate this occasionally, and we had also disabled this computation
> report for a bit and noticed better node stability.
> I wish I had more specifics. We are switching to an RF 1 table and will do
> more proactive truncation of the table.
> When things calm down, we will attempt to replicate the issue and watch GC
> and other logs.
> Any suggestion for things to look for/enable tracing on would be welcome.
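Regarding identifying the very wide rows mentioned above, a minimal sketch of
what could be checked; the keyspace/table name below is only a placeholder for
the report computation table:

    # Per-table statistics; the compacted partition maximum/mean bytes lines
    # should show whether individual partitions reach the multi-GB range.
    nodetool cfstats reports.report_computation

    # Limit repair scope while testing: repair only this node's primary ranges
    # for the suspect table rather than the whole keyspace.
    nodetool repair -pr reports report_computation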
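On the GC/tracing question, a sketch of what could be enabled while
reproducing; the log path and trace probability are examples only, and similar
commented-out GC logging options should already be present in cassandra-env.sh:

    # HotSpot GC logging via conf/cassandra-env.sh
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"

    # Watch heap occupancy (old gen is the "O" column) on the live JVM.
    jstat -gcutil <cassandra-pid> 1000

    # Sample a small fraction of requests into system_traces during repair.
    nodetool settraceprobability 0.001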