[ https://issues.apache.org/jira/browse/CASSANDRA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-724:
-------------------------------------

    Attachment: debug.patch

Patch to add debug timing info, if you want to investigate further.

There do seem to be occasional latency spikes inside ColumnFamilyStore.apply 
that I do not yet understand.

When CPUs are busy with compaction, latency increases; no real surprise there.

Thrift sometimes adds tens of milliseconds of latency, judging by the 
differences between what my Python client sees and what CassandraServer sees. 
The Java side of Thrift does call setTcpNoDelay(true), but the Python side 
does not -- the equivalent there would be setsockopt(SOL_TCP, TCP_NODELAY, 1). 
That is probably the culprit.
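
A minimal sketch of that fix on the Python side, assuming the stock Thrift 
Python library (the host/port below and the use of TSocket's .handle 
attribute to reach the raw socket are assumptions, not something this ticket 
specifies):

    import socket
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol

    # Hypothetical connection details; 9160 is Cassandra's default Thrift port.
    sock = TSocket.TSocket('localhost', 9160)
    sock.open()
    # Disable Nagle's algorithm, mirroring the Java side's setTcpNoDelay(true).
    # IPPROTO_TCP is the portable spelling of Linux's SOL_TCP.
    sock.handle.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    transport = TTransport.TBufferedTransport(sock)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)

Setting the option after open() matters here, since the raw socket does not 
exist until the connection is made.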


> Insert/Get Contention
> ---------------------
>
>                 Key: CASSANDRA-724
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-724
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Chris Goffinet
>            Assignee: Jonathan Ellis
>             Fix For: 0.6
>
>         Attachments: 724.patch, debug.patch, test_case.py
>
>
> We tried out the socket I/O patch from CASSANDRA-705 and tested the latest 
> JVM, b18 for 1.6. Still seeing very strange insert times. We see this with 
> get_slices as well, but it's easy to reproduce with batch_insert. I wonder 
> if it's related to Memtable contention; it's pretty easy to see the slow 
> times when you restart the attached test script. We are running this on a 
> 7-node cluster at <1% CPU, with a Consistency Level of 1.
> Results (key, elapsed seconds)
> ---------------------
> Slow insert test.10882 0.203548192978
> Slow insert test.18005 0.203876972198
> Slow insert test.21154 0.204496860504
> Slow insert test.22054 0.0444049835205
> Slow insert test.26445 0.201545000076
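
For context, a minimal sketch of the kind of timing loop that produces output 
in the shape above (the threshold, key naming, and the insert callable are 
assumptions; this is not the attached test_case.py):

    import time

    SLOW_THRESHOLD = 0.1  # seconds; assumed cutoff for flagging an insert as slow

    def timed_insert(client, key, do_insert):
        # do_insert stands in for the actual Thrift batch_insert call.
        start = time.time()
        do_insert(client, key)
        elapsed = time.time() - start
        if elapsed > SLOW_THRESHOLD:
            print('Slow insert %s %s' % (key, elapsed))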

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
