Could this be related as well to
https://issues.apache.org/jira/browse/CASSANDRA-2463?
My gut feel: maybe, if the slowness/timeouts reported by the OP are
intermixed with periods of normal activity in a pattern that would indicate
compacting full GCs.
OP: check if cassandra is going into 100% (not less, not more) CPU usage
during the slow periods; a stop-the-world compacting full GC is
single-threaded, so it saturates exactly one core.
But even then, after taking a single full GC the behavior should
disappear, since there should be no left-overs from the smaller columns
causing fragmentation for the GC.
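If you want to confirm whether full GCs actually line up with the timeouts,
one option is to turn on GC logging in cassandra-env.sh (e.g.
-Xloggc:/var/log/cassandra/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps)
and scan the log for "Full GC" pauses. A rough sketch, assuming that log path
and the usual HotSpot line format (both are assumptions, adjust to your setup):

#!/usr/bin/env python
# Rough sketch: summarize "Full GC" pauses from a HotSpot GC log.
# The log path and exact line format are assumptions; the regex just grabs
# the first "N.NNN secs" figure that follows "Full GC" on a line.
import re
import sys

PAUSE_RE = re.compile(r'Full GC.*?(\d+\.\d+) secs')

log_path = sys.argv[1] if len(sys.argv) > 1 else '/var/log/cassandra/gc.log'
pauses = []
with open(log_path) as log:
    for line in log:
        match = PAUSE_RE.search(line)
        if match:
            pauses.append(float(match.group(1)))

print('full GCs: %d' % len(pauses))
if pauses:
    print('worst pause: %.2f s, total paused: %.2f s' % (max(pauses), sum(pauses)))

If the node is unresponsive exactly while those pauses are running, that
matches the compacting-full-GC theory; if the timeouts show up with no Full GC
entries nearby, look elsewhere.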
From: Shu Zhang [szh...@mediosystems.com]
Sent: Monday, April 25, 2011 12:55 PM
To: user@cassandra.apache.org
Subject: RE: OOM on heavy write load
How large are your rows? binary_memtable_throughput_in_mb only tracks the
size of the row data, but there is an overhead associated with each row on
the order of a few KBs. If your row data sizes are small, that overhead can
push the memtable's real memory use well past the configured size.
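Back-of-the-envelope with the OP's 64 MB setting: if rows are small, the
per-row overhead, not the data, is what fills the heap. A rough sketch (the
100-byte row size and 2 KB per-row overhead are illustrative assumptions, not
measurements from the OP's cluster):

# Rough estimate of real memtable memory use vs. the configured threshold.
# Row size and per-row overhead below are illustrative assumptions.
CONFIGURED_MB = 64              # binary_memtable_throughput_in_mb from the OP
ROW_DATA_BYTES = 100            # assumed average serialized row size
PER_ROW_OVERHEAD_BYTES = 2048   # "a few KBs" of overhead per row, assumed 2 KB

rows_before_flush = CONFIGURED_MB * 1024 * 1024 // ROW_DATA_BYTES
actual_mb = rows_before_flush * (ROW_DATA_BYTES + PER_ROW_OVERHEAD_BYTES) // (1024 * 1024)

print('rows held before flush: %d' % rows_before_flush)
print('configured threshold  : %d MB' % CONFIGURED_MB)
print('approx. real heap use : %d MB' % actual_mb)
# With 100-byte rows, 64 MB of tracked data is ~670k rows, and ~2 KB of
# overhead per row puts the real figure well over 1 GB before a flush happens.

In other words, whether 64 MB is safe depends entirely on your average row
size, which is why the question above matters.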
From: Nikolay Kovshov [nkovs...@yandex.ru]
Sent: Monday, April 25, 2011 5:21 AM
To: user@cassandra.apache.org
Subject: Re: OOM on heavy write load
I assume if I turn off swap it will just die earlier, no? What is the
mechanism of dying?
From the link
(0) turn off swap
(1)
http://www.datastax.com/docs/0.7/troubleshooting/index#nodes-are-dying-with-oom-errors
On Fri, Apr 22, 2011 at 8:00 AM, Nikolay Kovshov nkovs...@yandex.ru wrote:
I am using Cassandra 0.7.0 with the following settings
binary_memtable_throughput_in_mb: 64