How does compaction_throughput relate to memory usage?
It reduces the rate of memory allocation.
e.g. Say ParNew can normally keep up with the rate of memory usage without
stopping for too long: the rate of promotion is lowish and everything is
allocated in Eden. If the allocation rate ...
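For concreteness, the throttle being discussed is a cassandra.yaml setting; a config fragment, assuming the 1.1-era setting name:

```yaml
# cassandra.yaml -- throttle compaction I/O; a lower value slows compaction
# down and with it the rate of allocation (and garbage) that compaction
# generates. 16 MB/s is the shipped default; 0 disables throttling.
compaction_throughput_mb_per_sec: 16
```

It can also be changed at runtime with `nodetool setcompactionthroughput`.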
Regarding memory usage after a repair ... Are the merkle trees kept around?
They should not be.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 24/10/2012, at 4:51 PM, B. Todd Burruss bto...@gmail.com wrote:
Regarding memory usage
This sounds very much like my heap is so consumed by (mostly) bloom
filters that I am in steady state GC thrash.
Yes, I think that was at least part of the issue.
The rough numbers I've used to estimate working set are:
* bloom filter size for 400M rows at 0.00074 fp without java fudge
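The bloom filter line item above can be estimated with the textbook sizing formula m = -n · ln(p) / (ln 2)² bits; a back-of-the-envelope sketch in Python (Cassandra's actual implementation buckets bits-per-element, so treat this as a rough estimate, and the "java fudge" overhead comes on top of it):

```python
import math

def bloom_filter_bytes(n_keys: int, fp_rate: float) -> int:
    """Textbook bloom filter size: m = -n * ln(p) / (ln 2)^2 bits."""
    bits = -n_keys * math.log(fp_rate) / (math.log(2) ** 2)
    return int(bits / 8)

# 400M rows at a 0.00074 false-positive rate, as in the estimate above
size = bloom_filter_bytes(400_000_000, 0.00074)
print(f"{size / 1024**2:.0f} MiB")  # roughly 715 MiB before any JVM overhead
```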
On Mon, Oct 22, 2012 at 8:38 AM, Bryan Talbot btal...@aeriagames.com wrote:
The nodes with the most data used the most memory. All nodes are affected
eventually, not just one. The GC was on-going even when the nodes were not
compacting or running a heavy application load -- even when the main ...
On Wed, Oct 24, 2012 at 2:38 PM, Rob Coli rc...@palominodb.com wrote:
On Mon, Oct 22, 2012 at 8:38 AM, Bryan Talbot btal...@aeriagames.com wrote:
The nodes with the most data used the most memory. All nodes are affected
eventually, not just one. The GC was on-going even when the nodes ...
These GC settings are the default (recommended?) settings from
cassandra-env. I added the UseCompressedOops.
-Bryan
On Mon, Oct 22, 2012 at 6:15 PM, Will @ SOHO w...@voodoolunchbox.com wrote:
On 10/22/2012 09:05 PM, aaron morton wrote:
# GC tuning options
JVM_OPTS=$JVM_OPTS
On Mon, Oct 22, 2012 at 6:05 PM, aaron morton aa...@thelastpickle.com wrote:
The GC was on-going even when the nodes were not compacting or running a
heavy application load -- even when the main app was paused, the GC
continued constantly.
If you restart a node is the onset of GC activity correlated to some event?
Regarding memory usage after a repair ... Are the merkle trees kept around?
On Oct 23, 2012 3:00 PM, Bryan Talbot btal...@aeriagames.com wrote:
On Mon, Oct 22, 2012 at 6:05 PM, aaron morton aa...@thelastpickle.com wrote:
The GC was on-going even when the nodes were not compacting or running a
If you are using the default settings I would try to correlate the GC activity
with some application activity before tweaking.
If this is happening on one machine out of 4 ensure that client load is
distributed evenly.
See if the rise in GC activity is related to compaction, repair or an ...
The memory usage was correlated with the size of the data set. The nodes
were a bit unbalanced, which is normal due to variations in compactions.
The nodes with the most data used the most memory. All nodes are affected
eventually, not just one. The GC was on-going even when the nodes were not
compacting or running a heavy application load -- even when the main app
was paused, the GC continued constantly.
If you restart a node is the onset of GC activity correlated to some event?
As a test we dropped the largest CF and the memory
On 10/22/2012 09:05 PM, aaron morton wrote:
# GC tuning options
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
JVM_OPTS=$JVM_OPTS ...
On 18.10.2012 20:06, Bryan Talbot wrote:
In a 4 node cluster running Cassandra 1.1.5 with Sun JVM 1.6.0_29-b11
(64-bit), the nodes often get stuck in a state where CMS collections of
the old space are constantly running.
You need more Java heap memory.
OK, let me try asking the question a different way ...
How does Cassandra use memory and how can I plan how much is needed? I
have a 1 GB memtable and a 5 GB total heap, and that's still not enough even
though the number of concurrent connections and the garbage generation rate
are fairly low.
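One way to approach the planning question above is to add up the per-node structures that live on the heap in this era of Cassandra (bloom filters, index samples, memtables). The component list is from the thread; the per-sample byte cost and the bloom parameters are illustrative assumptions, not Cassandra's actual accounting:

```python
import math

def bloom_bytes(rows: int, fp: float) -> float:
    """Textbook bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits."""
    return -rows * math.log(fp) / (math.log(2) ** 2) / 8

def estimate_heap_mb(rows: int, fp: float = 0.00074, index_interval: int = 128,
                     bytes_per_index_sample: int = 64,
                     memtable_mb: int = 1024) -> float:
    """Rough heap working set in MB; constants here are assumptions."""
    bloom = bloom_bytes(rows, fp)
    # In 1.1, one index sample is kept on heap per index_interval rows.
    index_samples = rows / index_interval * bytes_per_index_sample
    return (bloom + index_samples) / 1024**2 + memtable_mb

# 400M rows: bloom (~750 MB) + index samples (~200 MB) + a 1 GB memtable
print(f"{estimate_heap_mb(400_000_000):.0f} MB working set; "
      "leave ample headroom for GC on top of this")
```

This is only a lower bound on required heap: GC needs slack beyond the steady-state working set, which is consistent with a 5 GB heap still thrashing once the working set approaches it.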
If I ...