[ https://issues.apache.org/jira/browse/CASSANDRA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12803922#action_12803922 ]
Jonathan Ellis commented on CASSANDRA-724:
------------------------------------------
Brandon did some more testing and found that the System.gc() we request (to
allow cleaning up obsolete sstables after a compaction) is the culprit.
Maybe it's time to experiment w/ the g1 garbage collector:
http://java.sun.com/javase/technologies/hotspot/gc/g1_intro.jsp
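(G1 is still experimental in the 1.6 JVMs, so trying it means enabling it
explicitly with the stock HotSpot flags, nothing Cassandra-specific:

    -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
)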
Alternatively, one workaround might be to only issue the gc() request when
we're within some percentage of the disk filling up (we can use
File.getUsableSpace / File.getTotalSpace for that).
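A rough sketch of what that check might look like (not a patch; the 10%
threshold and the data-directory path are illustrative assumptions):

    import java.io.File;

    public class GcAfterCompaction
    {
        // assumed threshold: only force a gc when less than 10% of the disk is free
        private static final double FREE_SPACE_THRESHOLD = 0.10;

        public static void maybeRequestGc(File dataDirectory)
        {
            long usable = dataDirectory.getUsableSpace();
            long total = dataDirectory.getTotalSpace();
            // only pay the full-gc pause when we actually need the space
            // that deleting obsolete sstables would reclaim
            if (total > 0 && (double) usable / total < FREE_SPACE_THRESHOLD)
                System.gc();
        }

        public static void main(String[] args)
        {
            // path is an assumption for the sketch
            maybeRequestGc(new File("/var/lib/cassandra/data"));
        }
    }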
> Insert/Get Contention
> ---------------------
>
> Key: CASSANDRA-724
> URL: https://issues.apache.org/jira/browse/CASSANDRA-724
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Chris Goffinet
> Assignee: Jonathan Ellis
> Fix For: 0.6
>
> Attachments: 724.patch, debug.patch, test_case.py
>
>
> We tried out the socket io patch in CASSANDRA-705 and tested the latest 1.6
> JVM (b18). We are still seeing very strange insert times. We see this with
> get_slices as well, but it's easiest to reproduce with batch_insert. I wonder
> if it's related to Memtable contention; the slow times are easy to see when
> you restart the attached test script. We are running this on a 7-node
> cluster at <1% CPU, with a Consistency Level of 1.
> Results
> ---------------------
> Slow insert test.10882 0.203548192978
> Slow insert test.18005 0.203876972198
> Slow insert test.21154 0.204496860504
> Slow insert test.22054 0.0444049835205
> Slow insert test.26445 0.201545000076