[jira] [Commented] (CASSANDRA-6400) Update unit tests to use latest Partitioner

2013-11-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832413#comment-13832413
 ] 

Gösta Forsum commented on CASSANDRA-6400:
-

Why?

Changing to Murmur3Partitioner causes approximately 35 unit tests to fail. 
Examining some of the failing unit tests reveals that they rely heavily on the 
fact that ByteOrderedPartitioner is used. 

Changing to Murmur3Partitioner would increase the complexity of these tests.
(For example: StorageProxyTest.testGRR)

There are separate tests for Murmur3Partitioner.

Testing complicated combinations of classes is more of an integration-test task.
It is better to have simple unit tests testing the parts and then a few 
integration tests that try likely combinations of them.

Just my 2 cents.



 Update unit tests to use latest Partitioner
 ---

 Key: CASSANDRA-6400
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6400
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Alex Liu
Priority: Minor

 test/conf/cassandra.yaml uses the outdated ByteOrderedPartitioner; we should 
 update it to Murmur3Partitioner.
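For reference, the change under discussion is a one-line edit to the test configuration. A minimal sketch (surrounding yaml settings omitted; the class names are the standard ones shipped with Cassandra):

{code}
# test/conf/cassandra.yaml
# current default used by the unit tests:
partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner

# proposed:
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
{code}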





[jira] [Commented] (CASSANDRA-6400) Update unit tests to use latest Partitioner

2013-11-26 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832421#comment-13832421
 ] 

Alex Liu commented on CASSANDRA-6400:
-

It is good to update it to the latest partitioner for the long run, since 
Murmur3Partitioner is the latest one. By using the old ByteOrderedPartitioner, I 
ran into issues with my new Pig unit tests. The ticket was opened to make us aware 
that we are still using the old partitioner as the default for Cassandra unit testing.

Anyway, it's a low-priority thing. I was surprised that it still uses the old 
partitioner as the default.

 Update unit tests to use latest Partitioner
 ---

 Key: CASSANDRA-6400
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6400
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Alex Liu
Priority: Minor

 test/conf/cassandra.yaml uses the outdated ByteOrderedPartitioner; we should 
 update it to Murmur3Partitioner.





[jira] [Commented] (CASSANDRA-5493) Confusing output of CommandDroppedTasks

2013-11-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832445#comment-13832445
 ] 

Ondřej Černoš commented on CASSANDRA-5493:
--

_listen_address_ is set to the address configured on eth0, and _broadcast_address_ 
is set to the elastic IP assigned to the instance. I use the 
Ec2MultiRegionSnitch. Everything works well. 
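For context, the relevant settings look roughly like this (a sketch with illustrative addresses rather than the actual ones from this cluster):

{code}
# cassandra.yaml on an EC2 node using Ec2MultiRegionSnitch (illustrative values)
listen_address: 10.0.1.12        # private address configured on eth0
broadcast_address: 54.201.10.20  # elastic IP assigned to the instance
endpoint_snitch: Ec2MultiRegionSnitch
{code}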

 Confusing output of CommandDroppedTasks
 ---

 Key: CASSANDRA-5493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Ondřej Černoš
Assignee: Mikhail Stepura
Priority: Minor

 We have 2 DCs, 3 nodes in each, using EC2 support. We are debugging nodetool 
 repair problems (roughly 1 out of 2 attempts just freezes). We looked into 
 the MessagingServiceBean to see what is going on using jmxterm. See the 
 following:
 {noformat}
 #mbean = org.apache.cassandra.net:type=MessagingService:
 CommandDroppedTasks = { 
  107.aaa.bbb.ccc = 0;
  166.ddd.eee.fff = 124320;
  10.ggg.hhh.iii = 0;
  107.jjj.kkk.lll = 0;
  166.mmm.nnn.ooo = 1336699;
  166.ppp.qqq.rrr = 1329171;
  10.sss.ttt.uuu = 0;
  107.vvv.www.xxx = 0;
 };
 {noformat}
 The problem with this output is that it has 8 records. The node's neighbours (the 
 107 and 10 nodes) are mentioned twice in the output, once with their public 
 IPs and once with their private IPs. The nodes in the remote DC (the 166 ones) 
 are reported only once. I am pretty sure this is a bug - a node should be 
 reported with only one of its addresses in all outputs from Cassandra, and it 
 should be consistent.
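To reproduce the observation above, a jmxterm session along these lines should pull the same attribute (the jar file name is an assumption; 7199 is Cassandra's default JMX port):

{code}
$ java -jar jmxterm-1.0-alpha-4-uber.jar
$> open localhost:7199
$> bean org.apache.cassandra.net:type=MessagingService
$> get CommandDroppedTasks
{code}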





[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

```
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
```
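For completeness, this is roughly how such a dump and the per-class counts can be produced; the reporter does not say which tools were used, so the commands below are just one common JDK-based route (PID and file names are placeholders):

{code}
# capture a heap dump from the running Cassandra JVM
jmap -dump:live,format=b,file=cassandra-heap.hprof <cassandra-pid>

# browse it with jhat; per-class instance counts are available from its web UI
jhat -J-Xmx6g cassandra-heap.hprof
{code}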

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

```
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

```
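For readers more used to CQL, the Thrift definition above corresponds roughly to the following CQL3 table (a sketch inferred from the AsciiType key validator, the three-LongType composite comparator, and the CounterColumnType default validator; the clustering column names are invented for illustration):

{code}
CREATE TABLE "CommentTree" (
    key   ascii,
    c1    bigint,
    c2    bigint,
    c3    bigint,
    value counter,
    PRIMARY KEY (key, c1, c2, c3)
) WITH COMPACT STORAGE
  AND compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};
{code}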


Please let me know if I can provide any further information. I can provide the 
hprof if desired, however it is 3GB so I'll need to provide it outside of JIRA.

  was:
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 

[jira] [Created] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)
Jason Harvey created CASSANDRA-6405:
---

 Summary: When making heavy use of counters, neighbor nodes 
occasionally enter spiral of constant memory consumption
 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
Sun Java 7.
Xmx of 8G.
No row cache.
Reporter: Jason Harvey


We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; the three nodes immediately start hogging up memory again and CMSing 
constantly. 

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

```
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
```

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

```
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

```


Please let me know if I can provide any further information. I can provide the 
hprof if desired, however it is 3GB so I'll need to provide it outside of JIRA.





[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)
  Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

{code}


Please let me know if I can provide any further information. I can provide the 
hprof if desired, however it is 3GB so I'll need to provide it outside of JIRA.

  was:
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)
  Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns and went away after we 
rolled them back. I've repeated this process on each version now, and it is 
consistent every time. I should note this incident does seem to happen more 
rarely on 1.2.11 compared to the previous versions.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

{code}


Please let me know if I can provide any further information. I can provide the 
hprof if desired, however it is 3GB so I'll 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it is consistent every time. I should note this incident 
does seem to happen more rarely on 1.2.11 compared to the previous versions.


I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

{code}



[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)
  Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it is consistent every time. I should note this incident 
does seem to happen more rarely on 1.2.11 compared to the previous versions.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

{code}


Please let me know if I can provide any further information. I can provide the 
hprof if 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it is consistent every time. I should note this incident 
does seem to happen more rarely on 1.2.11 compared to the previous versions.


I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts. Here are the top 5 counts for this one 
node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five minutes): 0.0

{code}


Please let me know if I can provide any further information. I can provide the 
hprof if desired, however it is 3GB so I'll need to provide it outside of JIRA.

  

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it is consistent every time. I should note this incident 
does seem to happen more rarely on 1.2.11 compared to the previous versions.


I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts as reported by jhat. Here are the top 5 
counts for this one node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30
Compacted row maximum size: 4866323
Compacted row mean size: 7742
Average live cells per slice (last five minutes): 39.0
Average tombstones per slice (last five 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)
  Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it is consistent every time. I should note this incident 
does seem to happen more rarely on 1.2.11 compared to the previous versions.

This incident has been consistent across multiple different types of hardware, 
as well as major kernel version changes (2.6 all the way to 3.2). The OS is 
operating normally during the event.


I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts as reported by jhat. Here are the top 5 
counts for this one node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it consistently rears its head every time. I should note 
this incident does *seem* to happen more rarely on 1.2.11 compared to the 
previous versions.

This incident has been consistent across multiple different types of hardware, 
as well as major kernel version changes (2.6 all the way to 3.2). The OS is 
operating normally during the event.


I managed to get an hprof dump when the issue was happening in the wild. Something 
notable shows up in the class instance counts as reported by jhat. Here are the top 5 
counts for this one node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 30

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)
  Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it consistently rears its head every time. I should note 
this incident does *seem* to happen more rarely on 1.2.11 compared to the 
previous versions.

This incident has been consistent across multiple different types of hardware, 
as well as major kernel version changes (2.6 all the way to 3.2). The OS is 
operating normally during the event.


I managed to get an hprof dump when the issue was happening in the wild. 
Something notable in the class instance counts as reported by jhat. Here are 
the top 5 counts for this one node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it consistently rears its head every time. I should note 
this incident does *seem* to happen more rarely on 1.2.11 compared to the 
previous versions.

This incident has been consistent across multiple different types of hardware, 
as well as major kernel version changes (2.6 all the way to 3.2). The OS is 
operating normally during the event.


I managed to get an hprof dump when the issue was happening in the wild. 
Something notable in the class instance counts as reported by jhat. Here are 
the top 5 counts for this one node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal or expected for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403
Bloom Filter Space Used: 523952
Compacted row minimum size: 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)
  Description: 
We're randomly running into an interesting issue on our ring. When making use 
of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start 
immediately filling up memory, CMSing, fill up again, repeat. This pattern goes 
on for 5-20 minutes. Nearly all requests to the nodes time out during this 
period. Restarting one, two, or all three of the nodes does not resolve the 
spiral; after a restart the three nodes immediately start hogging up memory 
again and CMSing constantly.

When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
trashed for 20, and repeat that cycle a few times.

There are no unusual logs provided by cassandra during this period of time, 
other than recording of the constant dropped read requests and the constant CMS 
runs. I have analyzed the log files prior to multiple distinct instances of 
this issue and have found no preceding events which are associated with this 
issue.

I have verified that our apps are not performing any unusual number or type of 
requests during this time.

This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

The way I've narrowed this down to counters is a bit naive. It started 
happening when we started making use of counter columns, went away after we 
rolled back use of counter columns. I've repeated this attempted rollout on 
each version now, and it consistently rears its head every time. I should note 
this incident does _seem_ to happen more rarely on 1.2.11 compared to the 
previous versions.

This incident has been consistent across multiple different types of hardware, 
as well as major kernel version changes (2.6 all the way to 3.2). The OS is 
operating normally during the event.


I managed to get an hprof dump when the issue was happening in the wild. 
Something notable in the class instance counts as reported by jhat. Here are 
the top 5 counts for this one node:

{code}
5967846 instances of class org.apache.cassandra.db.CounterColumn 
1247525 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
1246648 instances of class 
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
{code}

Is it normal or expected for CounterColumn to have that number of instances?

The data model for how we use counters is as follows: between 50-2 counter 
columns per key. We currently have around 3 million keys total, but this issue 
also replicated when we only had a few thousand keys total. Average column 
count is around 1k, and 90th is 18k. New columns are added regularly, and 
columns are incremented regularly. No column or key deletions occur. We 
probably have 1-5k hot keys at any given time, spread across the entire ring. 
R:W ratio is typically around 50:1. This is the only CF we're using counters 
on, at this time. CF details are as follows:

{code}
ColumnFamily: CommentTree
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: 
org.apache.cassandra.db.marshal.CounterColumnType
  Cells sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.01
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
sstable_size_in_mb: 160



Column Family: CommentTree
SSTable count: 30
SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
Space used (live): 4656930594
Space used (total): 4677221791
SSTable Compression Ratio: 0.0
Number of Keys (estimate): 679680
Memtable Columns Count: 8289
Memtable Data Size: 2639908
Memtable Switch Count: 5769
Read Count: 185479324
Read Latency: 1.786 ms.
Write Count: 5377562
Write Latency: 0.026 ms.
Pending Tasks: 0
Bloom Filter False Positives: 2914204
Bloom Filter False Ratio: 0.56403

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Environment: 
RF of 3, 15 nodes.
Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
Xmx of 8G.
No row cache.

  was:
RF of 3, 15 nodes.
Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 7).
Xmx of 8G.
No row cache.


 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey

 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump when the issue was happening in the wild. 
 Something notable in the class instance counts as reported by jhat. Here are 
 the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 sstable_size_in_mb: 160
 Column Family: 

[jira] [Updated] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Harvey updated CASSANDRA-6405:


Environment: 
RF of 3, 15 nodes.
Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 7).
Xmx of 8G.
No row cache.

  was:
RF of 3, 15 nodes.
Sun Java 7.
Xmx of 8G.
No row cache.


 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 7).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey

 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump when the issue was happening in the wild. 
 Something notable in the class instance counts as reported by jhat. Here are 
 the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 sstable_size_in_mb: 160
 Column Family: CommentTree
 SSTable count: 30
 

[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing

2013-11-26 Thread Ngoc Minh Vo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832514#comment-13832514
 ] 

Ngoc Minh Vo commented on CASSANDRA-6008:
-

Just for your information, we've run into this issue with v2.0.2 on our dev 
environment and the workaround worked.

We hope the fix can be included in the next patch release, v2.0.3.
Thanks a lot for your help.

 Getting 'This should never happen' error at startup due to sstables missing
 ---

 Key: CASSANDRA-6008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: John Carrino
Assignee: Yuki Morishita
 Fix For: 2.0.3


 Exception encountered during startup: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables
 This happens when sstables that have been compacted away are removed, but 
 they still have entries in the system.compactions_in_progress table.
 Normally this should not happen because the entries in 
 system.compactions_in_progress are deleted before the old sstables are 
 deleted.
 However at startup recovery time, old sstables are deleted (NOT BEFORE they 
 are removed from the compactions_in_progress table) and then after that is 
 done it does a truncate using SystemKeyspace.discardCompactionsInProgress
 We ran into a case where the disk filled up and the node died and was bounced 
 and then failed to truncate this table on startup, and then got stuck hitting 
 this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers.
 Maybe on startup we can delete from this table incrementally as we clean 
 stuff up in the same way that compactions delete from this table before they 
 delete old sstables.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2013-11-26 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832570#comment-13832570
 ] 

Lyuben Todorov commented on CASSANDRA-5351:
---

After [this 
commit|https://github.com/lyubent/cassandra/commit/903e416539cdde78514850bda25076f3f2fc57ec], 
which keeps unrepaired data at L0, repairs start failing validation after a few 
inserts and compactions. The stack trace from the error on each node is below 
(a 3-node ccm cluster was used here, with the repair issued to node 2).

{code}
 INFO 15:19:10,321 Starting repair command #3, repairing 2 ranges for keyspace 
test
 INFO 15:19:10,322 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] new session: 
will sync /127.0.0.2, /127.0.0.3 on range 
(-9223372036854775808,-3074457345618258603] for test.[lvl]
 INFO 15:19:10,325 Handshaking version with /127.0.0.3
 INFO 15:19:10,343 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] requesting 
merkle trees for lvl (to [/127.0.0.3, /127.0.0.2])
 INFO 15:19:11,493 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] Received 
merkle tree for lvl from /127.0.0.3
ERROR 15:19:16,138 Failed creating a merkle tree for [repair 
#55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]], /127.0.0.2 (see log for details)
ERROR 15:19:16,138 Exception in thread Thread[ValidationExecutor:2,1,main]
java.lang.AssertionError: row DecoratedKey(-9223264645216044815, 
73636c744c546e56534c4741775141) received out of order wrt 
DecoratedKey(-3331959603918038206, 685863786a586464616b794f597075)
at org.apache.cassandra.repair.Validator.add(Validator.java:136)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:820)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:61)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:417)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
ERROR 15:19:16,139 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] session 
completed with the following error
org.apache.cassandra.exceptions.RepairException: [repair 
#55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:152)
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:212)
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:91)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
 INFO 15:19:16,138 Range (3074457345618258602,-9223372036854775808] has already 
been repaired. Skipping repair.
S.AD: lvl repairedAt: 1385466263703792000
ERROR 15:19:16,139 Exception in thread Thread[AntiEntropySessions:5,5,RMI 
Runtime]
java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
[repair #55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.cassandra.exceptions.RepairException: [repair 
#55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:152)
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:212)
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:91)
at 

[jira] [Comment Edited] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2013-11-26 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832570#comment-13832570
 ] 

Lyuben Todorov edited comment on CASSANDRA-5351 at 11/26/13 1:27 PM:
-

After [this 
commit|https://github.com/lyubent/cassandra/commit/903e416539cdde78514850bda25076f3f2fc57ec], 
which keeps unrepaired data at L0, repairs start failing validation after a few 
inserts and compactions. The stack trace from the error on each node is below 
(a 3-node ccm cluster was used here, with the repair issued to node 2).

{code}
 INFO 15:19:10,321 Starting repair command #3, repairing 2 ranges for keyspace 
test
 INFO 15:19:10,322 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] new session: 
will sync /127.0.0.2, /127.0.0.3 on range 
(-9223372036854775808,-3074457345618258603] for test.[lvl]
 INFO 15:19:10,325 Handshaking version with /127.0.0.3
 INFO 15:19:10,343 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] requesting 
merkle trees for lvl (to [/127.0.0.3, /127.0.0.2])
 INFO 15:19:11,493 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] Received 
merkle tree for lvl from /127.0.0.3
ERROR 15:19:16,138 Failed creating a merkle tree for [repair 
#55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]], /127.0.0.2 (see log for details)
ERROR 15:19:16,138 Exception in thread Thread[ValidationExecutor:2,1,main]
java.lang.AssertionError: row DecoratedKey(-9223264645216044815, 
73636c744c546e56534c4741775141) received out of order wrt 
DecoratedKey(-3331959603918038206, 685863786a586464616b794f597075)
at org.apache.cassandra.repair.Validator.add(Validator.java:136)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:820)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:61)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:417)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
ERROR 15:19:16,139 [repair #55f4d610-569d-11e3-b553-975f903ccf5a] session 
completed with the following error
org.apache.cassandra.exceptions.RepairException: [repair 
#55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:152)
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:212)
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:91)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
 INFO 15:19:16,138 Range (3074457345618258602,-9223372036854775808] has already 
been repaired. Skipping repair.
S.AD: lvl repairedAt: 1385466263703792000
ERROR 15:19:16,139 Exception in thread Thread[AntiEntropySessions:5,5,RMI 
Runtime]
java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
[repair #55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.cassandra.exceptions.RepairException: [repair 
#55f4d610-569d-11e3-b553-975f903ccf5a on test/lvl, 
(-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:152)
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:212)
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:91)
at 

[jira] [Updated] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6008:
--

 Reviewer: Jonathan Ellis
Fix Version/s: (was: 2.0.3)
   2.0.4
 Assignee: Tyler Hobbs  (was: Yuki Morishita)

 Getting 'This should never happen' error at startup due to sstables missing
 ---

 Key: CASSANDRA-6008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: John Carrino
Assignee: Tyler Hobbs
 Fix For: 2.0.4


 Exception encountered during startup: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables
 This happens when sstables that have been compacted away are removed, but 
 they still have entries in the system.compactions_in_progress table.
 Normally this should not happen because the entries in 
 system.compactions_in_progress are deleted before the old sstables are 
 deleted.
 However at startup recovery time, old sstables are deleted (NOT BEFORE they 
 are removed from the compactions_in_progress table) and then after that is 
 done it does a truncate using SystemKeyspace.discardCompactionsInProgress
 We ran into a case where the disk filled up and the node died and was bounced 
 and then failed to truncate this table on startup, and then got stuck hitting 
 this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers.
 Maybe on startup we can delete from this table incrementally as we clean 
 stuff up in the same way that compactions delete from this table before they 
 delete old sstables.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832641#comment-13832641
 ] 

Jonathan Ellis commented on CASSANDRA-5351:
---

There may be multiple problems here, but one is that you need to update 
LeveledManifest.replace to keep unrepaired sstables in L0.
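
A rough, hypothetical sketch of that idea is below; the class, fields and level layout 
are stand-ins for illustration and are not the actual LeveledManifest code. The point 
is simply that, when compaction results are put back into the manifest, anything not 
yet marked repaired is routed to L0 rather than promoted.

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Illustrative sketch only -- names and structure are not the real LeveledManifest API.
class ManifestSketch
{
    static class SSTable
    {
        final boolean repaired;   // stand-in for "has this sstable been through repair"
        final int targetLevel;    // level the compaction intended it for
        SSTable(boolean repaired, int targetLevel) { this.repaired = repaired; this.targetLevel = targetLevel; }
    }

    final List<List<SSTable>> levels = new ArrayList<List<SSTable>>();

    ManifestSketch(int levelCount)
    {
        for (int i = 0; i < levelCount; i++)
            levels.add(new ArrayList<SSTable>());
    }

    void replace(Collection<SSTable> removed, Collection<SSTable> added)
    {
        for (List<SSTable> level : levels)
            level.removeAll(removed);

        for (SSTable sstable : added)
        {
            // Unrepaired sstables stay in L0 so they never get mixed into
            // already-repaired levels; repaired ones go to their intended level.
            int level = sstable.repaired ? sstable.targetLevel : 0;
            levels.get(level).add(sstable);
        }
    }
}
{code}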

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6401) Clustering order by property does not support when the column name is written in uppercase

2013-11-26 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-6401.
--

Resolution: Invalid

 Clustering order by property does not support when the column name is 
 written in uppercase 
 ---

 Key: CASSANDRA-6401
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6401
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7  64 bit
Reporter: DIKSHA KUSHWAH

 Database Version: Cassandra 2.0.1
 1. Connect to Cassandra 2.0.1 and execute the following query:
CREATE TABLE test.cat  ( 
   cache text,
   num   int,
   PRIMARY KEY(cache,num)
 ) WITH
   CLUSTERING ORDER BY (num DESC);
 2. Now execute the following query:
 
 CREATE TABLE DOG  ( 
   CACHE text,
   NUM   int,
   PRIMARY KEY(CACHE,NUM)
 ) WITH
   CLUSTERING ORDER BY (NUM DESC);
 3. Table cat is created successfully, but creating table DOG fails with the 
 error Missing CLUSTERING ORDER for column NUM, because table DOG uses the 
 upper-case column name NUM while table cat uses the lower-case name num.
 4. Cassandra allows creating a table with upper-case column names when no 
 CLUSTERING ORDER BY property is given, so it should also support that 
 property with upper-case column names.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6401) Clustering order by property does not support when the column name is written in uppercase

2013-11-26 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832746#comment-13832746
 ] 

Jeremiah Jordan commented on CASSANDRA-6401:


On case in CQL:
http://www.datastax.com/documentation/cql/3.1/webhelp/index.html#cql/cql_reference/cql_lexicon_c.html#reference_ds_b4h_gx5_yj
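
In short: unquoted identifiers in CQL are case-insensitive and folded to lower case, 
while double-quoted identifiers preserve case and must then be quoted the same way 
everywhere they are used. A small sketch of both forms, run through the Java driver 
purely for illustration (contact point, keyspace and table names are made up):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class QuotedIdentifierExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Unquoted identifiers are folded to lower case, so CACHE/NUM here name
        // the same columns as cache/num in the "cat" example above.
        session.execute("CREATE TABLE test.dog1 (CACHE text, NUM int, "
                      + "PRIMARY KEY (CACHE, NUM)) WITH CLUSTERING ORDER BY (NUM DESC)");

        // Double-quoted identifiers preserve case; the CLUSTERING ORDER BY column
        // must then be quoted exactly as it is in the PRIMARY KEY.
        session.execute("CREATE TABLE test.dog2 (\"CACHE\" text, \"NUM\" int, "
                      + "PRIMARY KEY (\"CACHE\", \"NUM\")) WITH CLUSTERING ORDER BY (\"NUM\" DESC)");

        cluster.close();
    }
}
{code}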

 Clustering order by property does not support when the column name is 
 written in uppercase 
 ---

 Key: CASSANDRA-6401
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6401
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7  64 bit
Reporter: DIKSHA KUSHWAH

 Database Version: Cassandra 2.0.1
 1. Connect to Cassandra 2.0.1 and execute the following query:
CREATE TABLE test.cat  ( 
   cache text,
   num   int,
   PRIMARY KEY(cache,num)
 ) WITH
   CLUSTERING ORDER BY (num DESC);
 2. Now execute the following query:
 
 CREATE TABLE DOG  ( 
   CACHE text,
   NUM   int,
   PRIMARY KEY(CACHE,NUM)
 ) WITH
   CLUSTERING ORDER BY (NUM DESC);
 3. Table cat is created successfully, but creating table DOG fails with the 
 error Missing CLUSTERING ORDER for column NUM, because table DOG uses the 
 upper-case column name NUM while table cat uses the lower-case name num.
 4. Cassandra allows creating a table with upper-case column names when no 
 CLUSTERING ORDER BY property is given, so it should also support that 
 property with upper-case column names.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6345) Endpoint cache invalidation causes CPU spike (on vnode rings?)

2013-11-26 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832783#comment-13832783
 ] 

Rick Branson commented on CASSANDRA-6345:
-

Thanks for taking the time to explain the consistency story. It makes perfect 
sense. 

My defensiveness comment suggested bumping the version number (this is 
practically free) each time the TM write lock is released, which would be in 
addition to the existing invalidations. You're probably a much better gauge on 
the usefulness of this, so up to you.

Really nice that the v5 patch is so compact. Two minor comments: the 
endpointsLock declaration is still in there, and not to be all nitpicky but 
there are two typos in the comments (wo we keep and clone got invalidted).
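
Regarding the caching proposal in the description below (memoize the result of 
TokenMetadata.cloneOnlyTokenMap() behind a lock and clear it from 
clearEndpointCache()), a generic sketch of that pattern is shown here. The names are 
illustrative only, not the actual Cassandra classes, and it is written against Java 8's 
Supplier for brevity.

{code}
import java.util.function.Supplier;

// Hypothetical sketch of the proposal: clone once on demand, guard the rebuild
// with a lock so concurrent cache misses do not stampede, and allow invalidation.
class CachedClone<T>
{
    private final Supplier<T> cloner;   // e.g. () -> tokenMetadata.cloneOnlyTokenMap()
    private volatile T cached;

    CachedClone(Supplier<T> cloner)
    {
        this.cloner = cloner;
    }

    T get()
    {
        T snapshot = cached;
        if (snapshot != null)
            return snapshot;
        synchronized (this)
        {
            if (cached == null)          // only the first thread in pays for the clone
                cached = cloner.get();
            return cached;
        }
    }

    synchronized void invalidate()       // the clearEndpointCache() analogue
    {
        cached = null;
    }
}
{code}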

 Endpoint cache invalidation causes CPU spike (on vnode rings?)
 --

 Key: CASSANDRA-6345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6345
 Project: Cassandra
  Issue Type: Bug
 Environment: 30 nodes total, 2 DCs
 Cassandra 1.2.11
 vnodes enabled (256 per node)
Reporter: Rick Branson
Assignee: Jonathan Ellis
 Fix For: 1.2.13

 Attachments: 6345-rbranson-v2.txt, 6345-rbranson.txt, 6345-v2.txt, 
 6345-v3.txt, 6345-v4.txt, 6345-v5.txt, 6345.txt, 
 half-way-thru-6345-rbranson-patch-applied.png


 We've observed that events which cause invalidation of the endpoint cache 
 (update keyspace, add/remove nodes, etc) in AbstractReplicationStrategy 
 result in several seconds of thundering herd behavior on the entire cluster. 
 A thread dump shows over a hundred threads (I stopped counting at that point) 
 with a backtrace like this:
 at java.net.Inet4Address.getAddress(Inet4Address.java:288)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:106)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:103)
 at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:351)
 at java.util.TreeMap.getEntry(TreeMap.java:322)
 at java.util.TreeMap.get(TreeMap.java:255)
 at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:200)
 at 
 com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:117)
 at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:74)
 at 
 com.google.common.collect.AbstractMultimap.putAll(AbstractMultimap.java:273)
 at com.google.common.collect.TreeMultimap.putAll(TreeMultimap.java:74)
 at 
 org.apache.cassandra.utils.SortedBiMultiValMap.create(SortedBiMultiValMap.java:60)
 at 
 org.apache.cassandra.locator.TokenMetadata.cloneOnlyTokenMap(TokenMetadata.java:598)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:104)
 at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2671)
 at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:375)
 It looks like there's a large amount of cost in the 
 TokenMetadata.cloneOnlyTokenMap that 
 AbstractReplicationStrategy.getNaturalEndpoints is calling each time there is 
 a cache miss for an endpoint. It seems as if this would only impact clusters 
 with large numbers of tokens, so it's probably a vnodes-only issue.
 Proposal: In AbstractReplicationStrategy.getNaturalEndpoints(), cache the 
 cloned TokenMetadata instance returned by TokenMetadata.cloneOnlyTokenMap(), 
 wrapping it with a lock to prevent stampedes, and clearing it in 
 clearEndpointCache(). Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6345) Endpoint cache invalidation causes CPU spike (on vnode rings?)

2013-11-26 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832783#comment-13832783
 ] 

Rick Branson edited comment on CASSANDRA-6345 at 11/26/13 5:34 PM:
---

Thanks for taking the time to explain the consistency story. It makes perfect 
sense. 

My defensiveness comment suggested bumping the version number each time the TM 
write lock is released, which would be in addition to the existing 
invalidations. You're probably a much better gauge on the usefulness of this, 
so up to you.

Really nice that the v5 patch is so compact. Two minor comments: the 
endpointsLock declaration is still in there, and not to be all nitpicky but 
there are two typos in the comments (wo we keep and clone got invalidted).


was (Author: rbranson):
Thanks for taking the time to explain the consistency story. It makes perfect 
sense. 

My defensiveness comment suggested bumping the version number (this is 
practically free) each time the TM write lock is released, which would be in 
addition to the existing invalidations. You're probably a much better gauge on 
the usefulness of this, so up to you.

Really nice that the v5 patch is so compact. Two minor comments: the 
endpointsLock declaration is still in there, and not to be all nitpicky but 
there are two typos in the comments (wo we keep and clone got invalidted).

 Endpoint cache invalidation causes CPU spike (on vnode rings?)
 --

 Key: CASSANDRA-6345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6345
 Project: Cassandra
  Issue Type: Bug
 Environment: 30 nodes total, 2 DCs
 Cassandra 1.2.11
 vnodes enabled (256 per node)
Reporter: Rick Branson
Assignee: Jonathan Ellis
 Fix For: 1.2.13

 Attachments: 6345-rbranson-v2.txt, 6345-rbranson.txt, 6345-v2.txt, 
 6345-v3.txt, 6345-v4.txt, 6345-v5.txt, 6345.txt, 
 half-way-thru-6345-rbranson-patch-applied.png


 We've observed that events which cause invalidation of the endpoint cache 
 (update keyspace, add/remove nodes, etc) in AbstractReplicationStrategy 
 result in several seconds of thundering herd behavior on the entire cluster. 
 A thread dump shows over a hundred threads (I stopped counting at that point) 
 with a backtrace like this:
 at java.net.Inet4Address.getAddress(Inet4Address.java:288)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:106)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:103)
 at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:351)
 at java.util.TreeMap.getEntry(TreeMap.java:322)
 at java.util.TreeMap.get(TreeMap.java:255)
 at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:200)
 at 
 com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:117)
 at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:74)
 at 
 com.google.common.collect.AbstractMultimap.putAll(AbstractMultimap.java:273)
 at com.google.common.collect.TreeMultimap.putAll(TreeMultimap.java:74)
 at 
 org.apache.cassandra.utils.SortedBiMultiValMap.create(SortedBiMultiValMap.java:60)
 at 
 org.apache.cassandra.locator.TokenMetadata.cloneOnlyTokenMap(TokenMetadata.java:598)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:104)
 at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2671)
 at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:375)
 It looks like there's a large amount of cost in the 
 TokenMetadata.cloneOnlyTokenMap that 
 AbstractReplicationStrategy.getNaturalEndpoints is calling each time there is 
 a cache miss for an endpoint. It seems as if this would only impact clusters 
 with large numbers of tokens, so it's probably a vnodes-only issue.
 Proposal: In AbstractReplicationStrategy.getNaturalEndpoints(), cache the 
 cloned TokenMetadata instance returned by TokenMetadata.cloneOnlyTokenMap(), 
 wrapping it with a lock to prevent stampedes, and clearing it in 
 clearEndpointCache(). Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5978) stressd broken by ClientEncriptionOptions

2013-11-26 Thread dan jatnieks (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832793#comment-13832793
 ] 

dan jatnieks commented on CASSANDRA-5978:
-

Ran into this today ... my stack has line numbers:

{noformat}
./dse-3.2.1/resources/cassandra/tools/bin/cassandra-stress -K 100 -t 50 -R 
org.apache.cassandra.locator.NetworkTopologyStrategy  --num-keys=1000 
--columns=50 -D nodelist -O Cassandra:3 --operation=INSERT --send-to 127.0.0.1

Exception in thread main java.io.NotSerializableException: 
org.apache.cassandra.config.EncryptionOptions$ClientEncryptionOptions
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1181)
at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1541)
at 
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1506)
at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1429)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1175)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
at org.apache.cassandra.stress.Stress.main(Unknown Source)
Control-C caught. Canceling running action and shutting down...
{noformat}
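
For what it's worth, this is standard java.io behaviour: ObjectOutputStream throws as 
soon as it reaches a field whose class does not implement Serializable. A minimal, 
generic reproduction (deliberately unrelated to the real stress Session and 
ClientEncryptionOptions classes) plus the two usual fixes:

{code}
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo
{
    // Stands in for ClientEncryptionOptions: a plain class, not Serializable.
    static class Options
    {
        String keystore = "conf/.keystore";
    }

    static class Session implements Serializable
    {
        int numKeys = 100;
        // Writing this field is what triggers NotSerializableException.
        // Fixes: mark it transient, or make Options implement Serializable.
        Options options = new Options();
    }

    public static void main(String[] args) throws Exception
    {
        ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
        out.writeObject(new Session());   // throws java.io.NotSerializableException: ...Options
    }
}
{code}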


 stressd broken by ClientEncriptionOptions
 -

 Key: CASSANDRA-5978
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5978
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Priority: Minor

 The ClientEncryptionOptions object added to 
 org.apache.cassandra.stress.Session is not Serializable.  So if you try to 
 use stress with stressd, the Session can't be serialized to be passed over to 
 stressd:
 {noformat}
 Exception in thread main java.io.NotSerializableException: 
 org.apache.cassandra.config.EncryptionOptions$ClientEncryptionOptions
 at java.io.ObjectOutputStream.writeObject0(Unknown Source)
 at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
 at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
 at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
 at java.io.ObjectOutputStream.writeObject0(Unknown Source)
 at java.io.ObjectOutputStream.writeObject(Unknown Source)
 at org.apache.cassandra.stress.Stress.main(Unknown Source)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832811#comment-13832811
 ] 

Jonathan Ellis commented on CASSANDRA-6311:
---

Summary of discussion in chat:

The user is responsible for providing a valid CQL statement, including token 
bind variables.

The IF API needs to change, probably to {{Long, Row}} where Row is a Java 
Driver Row 
(http://www.datastax.com/drivers/java/2.0/apidocs/com/datastax/driver/core/Row.html)
 and Long is a per-Task row ID.  (Precedent: DBInputFormat also uses a Long ID 
-- 
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/lib/db/DBInputFormat.html.)

We can either use the metadata from the java driver to continue to estimate 
progress based on partitions, or switch to estimating progress by CQL row count 
if there is a way to get that from the server easily.
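
To make the proposed shape concrete, here is a rough sketch of what a {{Long, Row}} 
RecordReader could look like on the Hadoop side. It assumes the reader drives a Java 
Driver ResultSet and leaves the split-to-query plumbing abstract, so it is only an 
illustration of the API, not the eventual CqlRecordReader.

{code}
import java.io.IOException;
import java.util.Iterator;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Sketch only: keys are a per-task row id (like DBInputFormat), values are driver Rows.
public class CqlRecordReaderSketch extends RecordReader<Long, Row>
{
    private Iterator<Row> rows;
    private long rowId = -1;
    private Row current;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException
    {
        // In the real reader this would connect to the split's replicas and execute
        // the user-supplied CQL with token() bind variables; here it is left abstract.
        ResultSet rs = executeQueryForSplit(split);
        rows = rs.iterator();
    }

    @Override
    public boolean nextKeyValue()
    {
        if (!rows.hasNext())
            return false;
        current = rows.next();
        rowId++;
        return true;
    }

    @Override public Long getCurrentKey() { return rowId; }
    @Override public Row getCurrentValue() { return current; }

    @Override
    public float getProgress()
    {
        // Placeholder: the discussion above is about whether to estimate this from
        // partition metadata or from a CQL row count.
        return 0.0f;
    }

    @Override public void close() { }

    private ResultSet executeQueryForSplit(InputSplit split)
    {
        throw new UnsupportedOperationException("sketch only");
    }
}
{code}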

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.4

 Attachments: 6311-v3-2.0-branch.txt, 6331-2.0-branch.txt, 
 6331-v2-2.0-branch.txt


 Since the latest Cql pagination is done and it should be more efficient, so 
 we need update CqlPagingRecordReader to use it instead of the custom thrift 
 paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2013-11-26 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832835#comment-13832835
 ] 

Alex Liu commented on CASSANDRA-6311:
-

The expected user-defined CQL input must satisfy the following:
{code}
 1) the select clause must include the partition key columns (to calculate the 
progress based on the actual CF rows processed)
 2) the where clause must include token(partition_key1 ... partition_keyn) > ? 
and 
 token(partition_key1 ... partition_keyn) <= ?  (in the right order) 
{code}
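
For example (the keyspace, table and column names below are made up), a statement 
satisfying both constraints, as it might be handed to the job configuration, would be:

{code}
// Hypothetical example of a user-supplied query that meets 1) and 2): the
// partition key columns appear in the select list, and the where clause
// carries the token() range bind markers in the expected order.
String inputCql = "SELECT pk1, pk2, value FROM my_keyspace.my_table "
                + "WHERE token(pk1, pk2) > ? AND token(pk1, pk2) <= ?";
{code}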

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.4

 Attachments: 6311-v3-2.0-branch.txt, 6331-2.0-branch.txt, 
 6331-v2-2.0-branch.txt


 Since the latest Cql pagination is done and it should be more efficient, so 
 we need update CqlPagingRecordReader to use it instead of the custom thrift 
 paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2013-11-26 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832835#comment-13832835
 ] 

Alex Liu edited comment on CASSANDRA-6311 at 11/26/13 6:26 PM:
---

The expected user-defined CQL input must satisfy the following:
{code}
 1) the select clause must include the partition key columns (to calculate the 
progress based on the actual CF rows processed)
 2) the where clause must include token(partition_key1, ... , partition_keyn) > 
? and 
 token(partition_key1, ... , partition_keyn) <= ?  (in the right order) 
{code}


was (Author: alexliu68):
The expected user-defined CQL input must satisfy the following:
{code}
 1) the select clause must include the partition key columns (to calculate the 
progress based on the actual CF rows processed)
 2) the where clause must include token(partition_key1 ... partition_keyn) > ? 
and 
 token(partition_key1 ... partition_keyn) <= ?  (in the right order) 
{code}

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.4

 Attachments: 6311-v3-2.0-branch.txt, 6331-2.0-branch.txt, 
 6331-v2-2.0-branch.txt


 Since the latest Cql pagination is done and it should be more efficient, so 
 we need update CqlPagingRecordReader to use it instead of the custom thrift 
 paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832843#comment-13832843
 ] 

Jonathan Ellis commented on CASSANDRA-6311:
---

# Only if we can't estimate row count in CQL rows
# Correct

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.4

 Attachments: 6311-v3-2.0-branch.txt, 6331-2.0-branch.txt, 
 6331-v2-2.0-branch.txt


 Since the latest CQL pagination is done and should be more efficient, we need 
 to update CqlPagingRecordReader to use it instead of the custom thrift paging.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-5978) stressd broken by ClientEncriptionOptions

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5978:
-

Assignee: Benedict

Fine with using stress-ng if that already supports daemon mode.  Otherwise this 
should be a quick fix.
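
For reference, two directions such a quick fix could take, sketched below (illustrative only, not a committed patch; field names are trimmed down):
{code}
import java.io.Serializable;

// Option 1: make the options hierarchy Serializable; its members are plain
// Strings/booleans, so the whole stress Session graph then serializes cleanly.
class EncryptionOptions implements Serializable
{
    String keystore;
    String keystorePassword;

    static class ClientEncryptionOptions extends EncryptionOptions
    {
        boolean enabled;
    }
}

// Option 2: keep the options out of the serialized Session entirely and
// rebuild them on the stressd side after deserialization.
class Session implements Serializable
{
    transient EncryptionOptions.ClientEncryptionOptions encOptions;
}
{code}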

 stressd broken by ClientEncriptionOptions
 -

 Key: CASSANDRA-5978
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5978
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Benedict
Priority: Minor

 The ClientEncryptionOptions object added to 
 org.apache.cassandra.stress.Session is not Serializable.  So if you try to 
 use stress with stressd, the Session can't be serialized to be passed over to 
 stressd:
 {noformat}
 Exception in thread main java.io.NotSerializableException: 
 org.apache.cassandra.config.EncryptionOptions$ClientEncryptionOptions
 at java.io.ObjectOutputStream.writeObject0(Unknown Source)
 at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
 at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
 at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
 at java.io.ObjectOutputStream.writeObject0(Unknown Source)
 at java.io.ObjectOutputStream.writeObject(Unknown Source)
 at org.apache.cassandra.stress.Stress.main(Unknown Source)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6406) Nodetool cfstats doesn't handle index CFs

2013-11-26 Thread Mikhail Stepura (JIRA)
Mikhail Stepura created CASSANDRA-6406:
--

 Summary: Nodetool cfstats doesn't handle index CFs
 Key: CASSANDRA-6406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6406
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura


After CASSANDRA-5871, cfstats values are read from the metrics; the problem is 
that metrics for index column families have a different JMX type 
({{type=IndexColumnFamily}} vs {{type=ColumnFamily}} for regular ones)

{code}
$ bin/nodetool.bat cfstats stress
Starting NodeTool
Keyspace: stress
Exception in thread main java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy16.getCount(Unknown Source)
at 
org.apache.cassandra.tools.NodeCmd.printColumnFamilyStats(NodeCmd.java:829)
at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1123)
Caused by: javax.management.InstanceNotFoundException: 
org.apache.cassandra.metrics:type=ColumnFamily,keyspace=stress,scope=t1.t1_num_idx,name=WriteLatency
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at 
sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at 
javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown Source)
at 
javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
at 
javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
... 3 more
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6406) Nodetool cfstats doesn't handle index CFs

2013-11-26 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6406:
---

Priority: Minor  (was: Major)

 Nodetool cfstats doesn't handle index CFs
 -

 Key: CASSANDRA-6406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6406
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Priority: Minor

 After CASSANDRA-5871, cfstats values are read from the metrics; the problem is 
 that metrics for index column families have a different JMX type 
 ({{type=IndexColumnFamily}} vs {{type=ColumnFamily}} for regular ones)
 {code}
 $ bin/nodetool.bat cfstats stress
 Starting NodeTool
 Keyspace: stress
 Exception in thread main java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy16.getCount(Unknown Source)
 at 
 org.apache.cassandra.tools.NodeCmd.printColumnFamilyStats(NodeCmd.java:829)
 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1123)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ColumnFamily,keyspace=stress,scope=t1.t1_num_idx,name=WriteLatency
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
 at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
 at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
 at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 3 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5357) Query cache

2013-11-26 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832901#comment-13832901
 ] 

Rick Branson commented on CASSANDRA-5357:
-

Perhaps an anecdote from a production system might help find a simple, yet 
useful improvement to the row cache. Facebook's TAO distributed storage system 
supports a data model called assocs, which are basically just graph edges, and 
nodes assigned to a given assoc ID hold a write-through cache of the state. The 
assoc storage can be roughly considered a more use-case-specific CF. For large 
assocs with many thousands of edges, TAO only maintains the tail of the assoc 
in memory, as those tend to be the most interesting portions of data. More of 
the details are discussed in the linked paper [1].

Perhaps instead of a total overhaul, what's really needed is to evolve the row 
cache by modifying it to cache only the head of the row and its bounds. In 
contrast to the complexity of trying to match queries and mutations to a set of 
serialized query filter objects, the cache only needs to maintain at most one 
interval for each row. This would provide a very simple write-through story. 
After reviewing our production wide-row use cases, they seem to fall into two 
camps. The first, and most read-performance sensitive, is vastly skewed towards 
reads on the head of the row (90% of the time) with a fixed limit. The second 
is randomly distributed slice queries, which would not seem to provide a very 
good cache hit rate either way.

[1] https://www.usenix.org/conference/atc13/technical-sessions/papers/bronson
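
To make the head-of-row idea concrete, a very rough sketch of what such a cache entry could hold (class and field names here are made up, not an actual implementation):
{code}
import java.nio.ByteBuffer;
import java.util.List;

// Hypothetical "head of row" cache entry: only the first cells of a partition are
// kept, together with the clustering bound the cached prefix is valid up to.
class RowHeadCacheEntry
{
    final List<ByteBuffer> headCellNames;   // cell names of the cached prefix, in clustering order
    final List<ByteBuffer> headCellValues;  // corresponding values
    final ByteBuffer upperBound;            // cached interval is [start of row, upperBound]
    final boolean coversWholeRow;           // true if the entire partition fits in the head

    RowHeadCacheEntry(List<ByteBuffer> names, List<ByteBuffer> values,
                      ByteBuffer upperBound, boolean coversWholeRow)
    {
        this.headCellNames = names;
        this.headCellValues = values;
        this.upperBound = upperBound;
        this.coversWholeRow = coversWholeRow;
    }
}
{code}
A head-of-row read with a fixed limit can then be answered entirely from the entry whenever it stays inside the bound, and a write only has to update or invalidate that single interval.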

 Query cache
 ---

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Vijay

 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6406) Nodetool cfstats doesn't handle index CFs

2013-11-26 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6406:
---

Attachment: Oracle_Java_Mission_Control_2013-11-26_11-15-02.png

 Nodetool cfstats doesn't handle index CFs
 -

 Key: CASSANDRA-6406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6406
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Priority: Minor
 Attachments: Oracle_Java_Mission_Control_2013-11-26_11-15-02.png


 After CASSANDRA-5871, cfstats values are read from the metrics; the problem is 
 that metrics for index column families have a different JMX type 
 ({{type=IndexColumnFamily}} vs {{type=ColumnFamily}} for regular ones)
 {code}
 $ bin/nodetool.bat cfstats stress
 Starting NodeTool
 Keyspace: stress
 Exception in thread main java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy16.getCount(Unknown Source)
 at 
 org.apache.cassandra.tools.NodeCmd.printColumnFamilyStats(NodeCmd.java:829)
 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1123)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ColumnFamily,keyspace=stress,scope=t1.t1_num_idx,name=WriteLatency
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
 at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
 at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
 at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 3 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6231) Add snapshot disk space to cfstats

2013-11-26 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832916#comment-13832916
 ] 

Nick Bailey commented on CASSANDRA-6231:


I'd just like to make a note that we've (opscenter) seen issues with inspecting 
snapshots for LCS column families. LCS can create a very large number of 
sstable files (anywhere from 10k to 100k+ range) , and just storing strings for 
all the file names was giving us some issues. In our case we were dealing with 
much smaller heap sizes though. Also this is already handling duplicate 
sstables, but it doesn't sound unreasonable that the number of distinct files 
could get extremely large, even by just taking daily snapshots.

 Add snapshot disk space to cfstats
 --

 Key: CASSANDRA-6231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6231
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
  Labels: lhf
 Attachments: CASSANDRA-2.0-6231-v2.patch, 
 CASSANDRA-2.0-6231-v3.patch, CASSANDRA-2.0-6231.patch


 As discussed in CASSANDRA-6179, this could help avoid some user confusion, 
 especially when snapshots are autocreated for drop/truncate.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6231) Add snapshot disk space to cfstats

2013-11-26 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832916#comment-13832916
 ] 

Nick Bailey edited comment on CASSANDRA-6231 at 11/26/13 7:22 PM:
--

I'd just like to note that we (opscenter) have seen issues with inspecting 
snapshots for LCS column families. LCS can create a very large number of 
sstable files (anywhere in the 10k to 100k+ range), and just storing strings 
for all the file names was giving us some issues. In our case we were dealing 
with much smaller heap sizes, though. Also, this is already handling duplicate 
sstables, but it doesn't sound unreasonable that the number of distinct files 
could get extremely large, even by just taking daily snapshots.


was (Author: nickmbailey):
I'd just like to make a note that we've (opscenter) seen issues with inspecting 
snapshots for LCS column families. LCS can create a very large number of 
sstable files (anywhere from 10k to 100k+ range) , and just storing strings for 
all the file names was giving us some issues. In our case we were dealing with 
much smaller heap sizes though. Also this is already handling duplicate 
sstables, but it doesn't sound unreasonable that the number of distinct files 
could get extremely large, even by just taking daily snapshots.

 Add snapshot disk space to cfstats
 --

 Key: CASSANDRA-6231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6231
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
  Labels: lhf
 Attachments: CASSANDRA-2.0-6231-v2.patch, 
 CASSANDRA-2.0-6231-v3.patch, CASSANDRA-2.0-6231.patch


 As discussed in CASSANDRA-6179, this could help avoid some user confusion, 
 especially when snapshots are autocreated for drop/truncate.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6231) Add snapshot disk space to cfstats

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832937#comment-13832937
 ] 

Jonathan Ellis commented on CASSANDRA-6231:
---

100k distinct filenames would be about 50MB of heap at 500 bytes each, and 16TB 
worth of data on disk.  I'm okay with those numbers.  (If you're still using 
5MB sstables you should probably fix that before calling this, but you probably 
already have more important reasons to fix that.)
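
For reference, the arithmetic behind those figures (assuming ~500 bytes per filename string and ~160MB sstables rather than the old 5MB default):
{noformat}
100,000 filenames x ~500 bytes  ~= 50 MB of heap
100,000 sstables  x 160 MB      ~= 16 TB on disk
{noformat}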

 Add snapshot disk space to cfstats
 --

 Key: CASSANDRA-6231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6231
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
  Labels: lhf
 Attachments: CASSANDRA-2.0-6231-v2.patch, 
 CASSANDRA-2.0-6231-v3.patch, CASSANDRA-2.0-6231.patch


 As discussed in CASSANDRA-6179, this could help avoid some user confusion, 
 especially when snapshots are autocreated for drop/truncate.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6406) Nodetool cfstats doesn't handle index CFs

2013-11-26 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6406:
---

Attachment: trunk-6406.patch

Patch to use a different JMX type for index CFs
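
Not the attached patch itself, but a sketch of the kind of change involved (method and class names here are hypothetical): pick the metrics JMX type from the CF name, since index CFs are scoped as table.index (e.g. {{t1.t1_num_idx}} in the stack trace):
{code}
import javax.management.ObjectName;

// Hypothetical helper: index CFs carry a "." in their cfstats name, so use the
// IndexColumnFamily metrics type for them and ColumnFamily for regular CFs.
final class CFMetricNames
{
    static ObjectName forColumnFamily(String keyspace, String cfName, String metric) throws Exception
    {
        String type = cfName.contains(".") ? "IndexColumnFamily" : "ColumnFamily";
        return new ObjectName(String.format(
            "org.apache.cassandra.metrics:type=%s,keyspace=%s,scope=%s,name=%s",
            type, keyspace, cfName, metric));
    }
}
{code}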

 Nodetool cfstats doesn't handle index CFs
 -

 Key: CASSANDRA-6406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6406
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Priority: Minor
 Attachments: Oracle_Java_Mission_Control_2013-11-26_11-15-02.png, 
 trunk-6406.patch


 After CASSANDRA-5871, cfstats values are read from the metrics; the problem is 
 that metrics for index column families have a different JMX type 
 ({{type=IndexColumnFamily}} vs {{type=ColumnFamily}} for regular ones)
 {code}
 $ bin/nodetool.bat cfstats stress
 Starting NodeTool
 Keyspace: stress
 Exception in thread main java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy16.getCount(Unknown Source)
 at 
 org.apache.cassandra.tools.NodeCmd.printColumnFamilyStats(NodeCmd.java:829)
 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1123)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ColumnFamily,keyspace=stress,scope=t1.t1_num_idx,name=WriteLatency
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
 at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
 at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
 at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 3 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6406) Nodetool cfstats doesn't handle index CFs

2013-11-26 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6406:
---

Reviewer: Lyuben Todorov

 Nodetool cfstats doesn't handle index CFs
 -

 Key: CASSANDRA-6406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6406
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Assignee: Mikhail Stepura
Priority: Minor
 Attachments: Oracle_Java_Mission_Control_2013-11-26_11-15-02.png, 
 trunk-6406.patch


 After CASSANDRA-5871, cfstats values are read from the metrics; the problem is 
 that metrics for index column families have a different JMX type 
 ({{type=IndexColumnFamily}} vs {{type=ColumnFamily}} for regular ones)
 {code}
 $ bin/nodetool.bat cfstats stress
 Starting NodeTool
 Keyspace: stress
 Exception in thread main java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy16.getCount(Unknown Source)
 at 
 org.apache.cassandra.tools.NodeCmd.printColumnFamilyStats(NodeCmd.java:829)
 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1123)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ColumnFamily,keyspace=stress,scope=t1.t1_num_idx,name=WriteLatency
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
 at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
 at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
 at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 3 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5978) stressd broken by ClientEncriptionOptions

2013-11-26 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832951#comment-13832951
 ] 

Benedict commented on CASSANDRA-5978:
-

Download the latest snapshot from 
[6199|https://github.com/belliottsmith/cassandra/tree/iss-6199-stress] and run 
your command as

./dse-3.2.1/resources/cassandra/tools/bin/cassandra-stress *legacy* -K 100 -t 
50 -R org.apache.cassandra.locator.NetworkTopologyStrategy  --num-keys=1000 
--columns=50 -D nodelist -O Cassandra:3 --operation=INSERT --send-to 127.0.0.1

I've confirmed this command, the stressd launcher, etc. work with the latest 
version.

 stressd broken by ClientEncriptionOptions
 -

 Key: CASSANDRA-5978
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5978
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Benedict
Priority: Minor

 The ClientEncryptionOptions object added to 
 org.apache.cassandra.stress.Session is not Serializable.  So if you try to 
 use stress with stressd, the Session can't be serialized to be passed over to 
 stressd:
 {noformat}
 Exception in thread main java.io.NotSerializableException: 
 org.apache.cassandra.config.EncryptionOptions$ClientEncryptionOptions
 at java.io.ObjectOutputStream.writeObject0(Unknown Source)
 at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
 at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
 at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
 at java.io.ObjectOutputStream.writeObject0(Unknown Source)
 at java.io.ObjectOutputStream.writeObject(Unknown Source)
 at org.apache.cassandra.stress.Stress.main(Unknown Source)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6407) CQLSH hangs forever when querying more than certain amount of data

2013-11-26 Thread Nikolai Grigoriev (JIRA)
Nikolai Grigoriev created CASSANDRA-6407:


 Summary: CQLSH hangs forever when querying more than certain 
amount of data
 Key: CASSANDRA-6407
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6407
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Oracle Linux 6.4, JDK 1.7.0_25-b15, Cassandra 2.0.2
Reporter: Nikolai Grigoriev


I have a table like this (slightly simplified for clarity):

{code}
CREATE TABLE my_test_table (
    uid  uuid,
    d_id uuid,
    a_id uuid,
    c_id text,
    i_id blob,
    data text,
PRIMARY KEY ((uid, d_id, a_id), c_id, i_id)
);
{code}

I have created a little over a hundred (117 to be specific) sample entities 
with the same row key and different clustering keys. Each has a blob of 
approximately 4Kb.

I have tried to fetch all of them with a query like this via CQLSH:

{code}
select * from my_test_table where uid=44338526-7aac-4640-bcde-0f4663c07572 and 
a_id=--4000--0002 and 
d_id=--1e64--0001 and c_id='list-2'
{code}

This query simply hangs in CQLSH, it does not return at all until I abort it.

Then I started playing with the LIMIT clause and found that this query returns 
instantly (with good data) when I use LIMIT 55 but hangs forever when I use 
LIMIT 56.

Then I tried to just query all i_id values like this:

{code}
select i_id from my_test_table where uid=44338526-7aac-4640-bcde-0f4663c07572 
and a_id=--4000--0002 and 
d_id=--1e64--0001 and c_id='list-2'
{code}

And this query returns instantly with the complete set of 117 values. So I 
started thinking that it must be something about the total size of the 
response, not the number of results or the number of columns to be fetched in 
slices. And I have tried another test:

{code}
select cdata from my_test_table where uid=44338526-7aac-4640-bcde-0f4663c07572 
and a_id=--4000--0002 and 
d_id=--1e64--0001 and c_id='list-2' LIMIT 63
{code}

This query returns instantly, but if I change the limit to 64 it hangs forever. 
Since my blob is about 4Kb for each entity, it *seems* like the query hangs when 
the total size of the response exceeds 252..256Kb. It looks quite suspicious, 
especially because 256Kb is such a round number. I am wondering if this 
has something to do with result paging.

I did not test if the issue is reproducible outside of CQLSH but I do recall 
that I observed somewhat similar behavior when fetching relatively large data 
sets.

I can consistently reproduce this problem on my cluster. I am also attaching 
the jstack output that I have captured when CQLSH was hanging on one of these 
queries.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6407) CQLSH hangs forever when querying more than certain amount of data

2013-11-26 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-6407:
-

Attachment: cassandra.jstack.gz

jstack output for the Cassandra server process on the host where I run CQLSH

 CQLSH hangs forever when querying more than certain amount of data
 --

 Key: CASSANDRA-6407
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6407
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Oracle Linux 6.4, JDK 1.7.0_25-b15, Cassandra 2.0.2
Reporter: Nikolai Grigoriev
 Attachments: cassandra.jstack.gz


 I have a table like this (slightly simplified for clarity):
 {code}
 CREATE TABLE my_test_table (
   uid  uuid,
   d_id uuid,
   a_id uuid,  
   c_id text,
   i_id blob,
   data text,
   PRIMARY KEY ((uid, d_id, a_id), c_id, i_id)
 );
 {code}
 I have created a little over a hundred (117 to be specific) sample entities 
 with the same row key and different clustering keys. Each has a blob of 
 approximately 4Kb.
 I have tried to fetch all of them with a query like this via CQLSH:
 {code}
 select * from my_test_table where uid=44338526-7aac-4640-bcde-0f4663c07572 
 and a_id=--4000--0002 and 
 d_id=--1e64--0001 and c_id='list-2'
 {code}
 This query simply hangs in CQLSH, it does not return at all until I abort it.
 Then I started playing with the LIMIT clause and found that this query returns 
 instantly (with good data) when I use LIMIT 55 but hangs forever when I use 
 LIMIT 56.
 Then I tried to just query all i_id values like this:
 {code}
 select i_id from my_test_table where uid=44338526-7aac-4640-bcde-0f4663c07572 
 and a_id=--4000--0002 and 
 d_id=--1e64--0001 and c_id='list-2'
 {code}
 And this query returns instantly with the complete set of 117 values. So I 
 started thinking that it must be something about the total size of the 
 response, not the number of results or the number of columns to be fetched in 
 slices. And I have tried another test:
 {code}
 select cdata from my_test_table where 
 uid=44338526-7aac-4640-bcde-0f4663c07572 and 
 a_id=--4000--0002 and 
 d_id=--1e64--0001 and c_id='list-2' LIMIT 63
 {code}
 This query returns instantly, but if I change the limit to 64 it hangs 
 forever. Since my blob is about 4Kb for each entity, it *seems* like the query 
 hangs when the total size of the response exceeds 252..256Kb. It looks quite 
 suspicious, especially because 256Kb is such a round number. I am 
 wondering if this has something to do with result paging.
 I did not test if the issue is reproducible outside of CQLSH but I do recall 
 that I observed somewhat similar behavior when fetching relatively large data 
 sets.
 I can consistently reproduce this problem on my cluster. I am also attaching 
 the jstack output that I have captured when CQLSH was hanging on one of these 
 queries.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6231) Add snapshot disk space to cfstats

2013-11-26 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6231:
---

Attachment: trunk-6231-v3.patch

V3 of the patch with changes for trunk. (Use o.a.c.metrics)

 Add snapshot disk space to cfstats
 --

 Key: CASSANDRA-6231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6231
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
  Labels: lhf
 Attachments: CASSANDRA-2.0-6231-v2.patch, 
 CASSANDRA-2.0-6231-v3.patch, CASSANDRA-2.0-6231.patch, trunk-6231-v3.patch


 As discussed in CASSANDRA-6179, this could help avoid some user confusion, 
 especially when snapshots are autocreated for drop/truncate.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[5/6] git commit: merge from 1.2

2013-11-26 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/504f66dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/504f66dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/504f66dc

Branch: refs/heads/cassandra-2.0
Commit: 504f66dc148ab4277756f7d7ca34d760d6f4a179
Parents: 6c68b30 8145c83
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:13:02 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:13:02 2013 -0600

--
 CHANGES.txt |  1 +
 .../locator/AbstractReplicationStrategy.java| 58 +---
 .../apache/cassandra/locator/TokenMetadata.java | 49 +++--
 3 files changed, 60 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/504f66dc/CHANGES.txt
--
diff --cc CHANGES.txt
index 34dc7a5,8d443f9..d52c508
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,8 +1,9 @@@
 -1.2.13
 +2.0.4
 + * Fix divide-by-zero in PCI (CASSANDRA-6403)
 + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
 + * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
 +Merged from 1.2:
+  * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
 - * Optimize FD phi calculation (CASSANDRA-6386)
 - * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 - * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
   * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/504f66dc/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --cc src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index f83c889,51c4119..69c133b
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@@ -54,19 -56,27 +56,27 @@@ public abstract class AbstractReplicati
  public final Map<String, String> configOptions;
  private final TokenMetadata tokenMetadata;
  
+ // We want to make updating our replicas asynchronous vs the master 
TokenMetadata instance,
+ // so that our ownership calculations never block Gossip from processing 
an ownership change.
+ // But, we also can't afford to re-clone TM for each range after cache 
invalidation (CASSANDRA-6345),
+ // so we keep our own copy here.
+ //
+ // Writes to tokenMetadataClone should be synchronized.
+ private volatile TokenMetadata tokenMetadataClone = null;
+ private volatile long clonedTokenMetadataVersion = 0;
+ 
  public IEndpointSnitch snitch;
  
 -AbstractReplicationStrategy(String tableName, TokenMetadata 
tokenMetadata, IEndpointSnitch snitch, Map<String, String> configOptions)
 +AbstractReplicationStrategy(String keyspaceName, TokenMetadata 
tokenMetadata, IEndpointSnitch snitch, Map<String, String> configOptions)
  {
 -assert tableName != null;
 +assert keyspaceName != null;
  assert snitch != null;
  assert tokenMetadata != null;
  this.tokenMetadata = tokenMetadata;
  this.snitch = snitch;
- this.tokenMetadata.register(this);
  this.configOptions = configOptions == null ? Collections.<String, 
String>emptyMap() : configOptions;
 -this.tableName = tableName;
 -// lazy-initialize table itself since we don't create them until 
after the replication strategies
 +this.keyspaceName = keyspaceName;
 +// lazy-initialize keyspace itself since we don't create them until 
after the replication strategies
  }
  
  private final Map<Token, ArrayList<InetAddress>> cachedEndpoints = new 
NonBlockingHashMap<Token, ArrayList<InetAddress>>();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/504f66dc/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --cc src/java/org/apache/cassandra/locator/TokenMetadata.java
index 7f794ea,818ca8f..b20be18
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@@ -27,11 -26,7 +26,7 @@@ import java.util.concurrent.locks.ReadW
  import java.util.concurrent.locks.ReentrantReadWriteLock;
  
  import com.google.common.collect.*;
- 
- import org.apache.cassandra.utils.BiMultiValMap;
- import org.apache.cassandra.utils.Pair;
- import org.apache.cassandra.utils.SortedBiMultiValMap;
 -import org.apache.commons.lang.StringUtils;
 +import 

[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-26 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bfd062f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bfd062f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bfd062f

Branch: refs/heads/trunk
Commit: 1bfd062fdc9daa35fbabcebb3ac31e726504f1ff
Parents: 9f3a7f8 504f66d
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:13:06 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:13:06 2013 -0600

--
 CHANGES.txt |  1 +
 .../locator/AbstractReplicationStrategy.java| 58 +---
 .../apache/cassandra/locator/TokenMetadata.java | 49 +++--
 3 files changed, 60 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bfd062f/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bfd062f/src/java/org/apache/cassandra/locator/TokenMetadata.java
--



[1/6] git commit: Fix thundering herd on endpoint cache invalidation patch by rbranson and jbellis for CASSANDRA-6345

2013-11-26 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 fce173532 - 8145c8356
  refs/heads/cassandra-2.0 6c68b30fe - 504f66dc1
  refs/heads/trunk 9f3a7f8a6 - 1bfd062fd


Fix thundering herd on endpoint cache invalidation
patch by rbranson and jbellis for CASSANDRA-6345


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8145c835
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8145c835
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8145c835

Branch: refs/heads/cassandra-1.2
Commit: 8145c83566450feb68a12352ac88efe9983ec266
Parents: fce1735
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:09:56 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:09:56 2013 -0600

--
 CHANGES.txt |  1 +
 .../locator/AbstractReplicationStrategy.java| 58 +---
 .../apache/cassandra/locator/TokenMetadata.java | 47 +++-
 3 files changed, 59 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8145c835/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 57c1896..8d443f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.13
+ * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
  * Optimize FD phi calculation (CASSANDRA-6386)
  * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
  * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8145c835/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index e17b0b4..51c4119 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -20,10 +20,12 @@ package org.apache.cassandra.locator;
 import java.lang.reflect.Constructor;
 import java.net.InetAddress;
 import java.util.*;
+import java.util.concurrent.locks.Lock;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.HashMultimap;
 import com.google.common.collect.Multimap;
+import com.google.common.util.concurrent.Striped;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -54,6 +56,15 @@ public abstract class AbstractReplicationStrategy
 public final Map<String, String> configOptions;
 private final TokenMetadata tokenMetadata;
 
+// We want to make updating our replicas asynchronous vs the master 
TokenMetadata instance,
+// so that our ownership calculations never block Gossip from processing 
an ownership change.
+// But, we also can't afford to re-clone TM for each range after cache 
invalidation (CASSANDRA-6345),
+// so we keep our own copy here.
+//
+// Writes to tokenMetadataClone should be synchronized.
+private volatile TokenMetadata tokenMetadataClone = null;
+private volatile long clonedTokenMetadataVersion = 0;
+
 public IEndpointSnitch snitch;
 
 AbstractReplicationStrategy(String tableName, TokenMetadata tokenMetadata, 
IEndpointSnitch snitch, Map<String, String> configOptions)
@@ -63,7 +74,6 @@ public abstract class AbstractReplicationStrategy
 assert tokenMetadata != null;
 this.tokenMetadata = tokenMetadata;
 this.snitch = snitch;
-this.tokenMetadata.register(this);
 this.configOptions = configOptions == null ? Collections.<String, 
String>emptyMap() : configOptions;
 this.tableName = tableName;
 // lazy-initialize table itself since we don't create them until after 
the replication strategies
@@ -73,18 +83,23 @@ public abstract class AbstractReplicationStrategy
 
 public ArrayList<InetAddress> getCachedEndpoints(Token t)
 {
-return cachedEndpoints.get(t);
-}
+long lastVersion = tokenMetadata.getRingVersion();
 
-public void cacheEndpoint(Token t, ArrayList<InetAddress> addr)
-{
-cachedEndpoints.put(t, addr);
-}
+if (lastVersion > clonedTokenMetadataVersion)
+{
+synchronized (this)
+{
+if (lastVersion > clonedTokenMetadataVersion)
+{
+logger.debug("clearing cached endpoints");
+tokenMetadataClone = null;
+cachedEndpoints.clear();
+clonedTokenMetadataVersion = lastVersion;
+}
+}
+}
 
-public void clearEndpointCache()
-{
-   

[4/6] git commit: merge from 1.2

2013-11-26 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/504f66dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/504f66dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/504f66dc

Branch: refs/heads/trunk
Commit: 504f66dc148ab4277756f7d7ca34d760d6f4a179
Parents: 6c68b30 8145c83
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:13:02 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:13:02 2013 -0600

--
 CHANGES.txt |  1 +
 .../locator/AbstractReplicationStrategy.java| 58 +---
 .../apache/cassandra/locator/TokenMetadata.java | 49 +++--
 3 files changed, 60 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/504f66dc/CHANGES.txt
--
diff --cc CHANGES.txt
index 34dc7a5,8d443f9..d52c508
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,8 +1,9 @@@
 -1.2.13
 +2.0.4
 + * Fix divide-by-zero in PCI (CASSANDRA-6403)
 + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
 + * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
 +Merged from 1.2:
+  * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
 - * Optimize FD phi calculation (CASSANDRA-6386)
 - * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 - * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
   * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/504f66dc/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --cc src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index f83c889,51c4119..69c133b
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@@ -54,19 -56,27 +56,27 @@@ public abstract class AbstractReplicati
  public final Map<String, String> configOptions;
  private final TokenMetadata tokenMetadata;
  
+ // We want to make updating our replicas asynchronous vs the master 
TokenMetadata instance,
+ // so that our ownership calculations never block Gossip from processing 
an ownership change.
+ // But, we also can't afford to re-clone TM for each range after cache 
invalidation (CASSANDRA-6345),
+ // so we keep our own copy here.
+ //
+ // Writes to tokenMetadataClone should be synchronized.
+ private volatile TokenMetadata tokenMetadataClone = null;
+ private volatile long clonedTokenMetadataVersion = 0;
+ 
  public IEndpointSnitch snitch;
  
 -AbstractReplicationStrategy(String tableName, TokenMetadata 
tokenMetadata, IEndpointSnitch snitch, Map<String, String> configOptions)
 +AbstractReplicationStrategy(String keyspaceName, TokenMetadata 
tokenMetadata, IEndpointSnitch snitch, Map<String, String> configOptions)
  {
 -assert tableName != null;
 +assert keyspaceName != null;
  assert snitch != null;
  assert tokenMetadata != null;
  this.tokenMetadata = tokenMetadata;
  this.snitch = snitch;
- this.tokenMetadata.register(this);
  this.configOptions = configOptions == null ? Collections.<String, 
String>emptyMap() : configOptions;
 -this.tableName = tableName;
 -// lazy-initialize table itself since we don't create them until 
after the replication strategies
 +this.keyspaceName = keyspaceName;
 +// lazy-initialize keyspace itself since we don't create them until 
after the replication strategies
  }
  
  private final Map<Token, ArrayList<InetAddress>> cachedEndpoints = new 
NonBlockingHashMap<Token, ArrayList<InetAddress>>();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/504f66dc/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --cc src/java/org/apache/cassandra/locator/TokenMetadata.java
index 7f794ea,818ca8f..b20be18
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@@ -27,11 -26,7 +26,7 @@@ import java.util.concurrent.locks.ReadW
  import java.util.concurrent.locks.ReentrantReadWriteLock;
  
  import com.google.common.collect.*;
- 
- import org.apache.cassandra.utils.BiMultiValMap;
- import org.apache.cassandra.utils.Pair;
- import org.apache.cassandra.utils.SortedBiMultiValMap;
 -import org.apache.commons.lang.StringUtils;
 +import org.apache.commons.lang3.StringUtils;
  

[3/6] git commit: Fix thundering herd on endpoint cache invalidation patch by rbranson and jbellis for CASSANDRA-6345

2013-11-26 Thread jbellis
Fix thundering herd on endpoint cache invalidation
patch by rbranson and jbellis for CASSANDRA-6345


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8145c835
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8145c835
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8145c835

Branch: refs/heads/trunk
Commit: 8145c83566450feb68a12352ac88efe9983ec266
Parents: fce1735
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:09:56 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:09:56 2013 -0600

--
 CHANGES.txt |  1 +
 .../locator/AbstractReplicationStrategy.java| 58 +---
 .../apache/cassandra/locator/TokenMetadata.java | 47 +++-
 3 files changed, 59 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8145c835/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 57c1896..8d443f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.13
+ * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
  * Optimize FD phi calculation (CASSANDRA-6386)
  * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
  * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8145c835/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index e17b0b4..51c4119 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -20,10 +20,12 @@ package org.apache.cassandra.locator;
 import java.lang.reflect.Constructor;
 import java.net.InetAddress;
 import java.util.*;
+import java.util.concurrent.locks.Lock;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.HashMultimap;
 import com.google.common.collect.Multimap;
+import com.google.common.util.concurrent.Striped;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -54,6 +56,15 @@ public abstract class AbstractReplicationStrategy
 public final Map<String, String> configOptions;
 private final TokenMetadata tokenMetadata;
 
+// We want to make updating our replicas asynchronous vs the master 
TokenMetadata instance,
+// so that our ownership calculations never block Gossip from processing 
an ownership change.
+// But, we also can't afford to re-clone TM for each range after cache 
invalidation (CASSANDRA-6345),
+// so we keep our own copy here.
+//
+// Writes to tokenMetadataClone should be synchronized.
+private volatile TokenMetadata tokenMetadataClone = null;
+private volatile long clonedTokenMetadataVersion = 0;
+
 public IEndpointSnitch snitch;
 
 AbstractReplicationStrategy(String tableName, TokenMetadata tokenMetadata, 
IEndpointSnitch snitch, Map<String, String> configOptions)
@@ -63,7 +74,6 @@ public abstract class AbstractReplicationStrategy
 assert tokenMetadata != null;
 this.tokenMetadata = tokenMetadata;
 this.snitch = snitch;
-this.tokenMetadata.register(this);
 this.configOptions = configOptions == null ? Collections.<String, 
 String>emptyMap() : configOptions;
 this.tableName = tableName;
 // lazy-initialize table itself since we don't create them until after 
the replication strategies
@@ -73,18 +83,23 @@ public abstract class AbstractReplicationStrategy
 
 public ArrayList<InetAddress> getCachedEndpoints(Token t)
 {
-return cachedEndpoints.get(t);
-}
+long lastVersion = tokenMetadata.getRingVersion();
 
-public void cacheEndpoint(Token t, ArrayList<InetAddress> addr)
-{
-cachedEndpoints.put(t, addr);
-}
+if (lastVersion > clonedTokenMetadataVersion)
+{
+synchronized (this)
+{
+if (lastVersion > clonedTokenMetadataVersion)
+{
+logger.debug("clearing cached endpoints");
+tokenMetadataClone = null;
+cachedEndpoints.clear();
+clonedTokenMetadataVersion = lastVersion;
+}
+}
+}
 
-public void clearEndpointCache()
-{
-logger.debug("clearing cached endpoints");
-cachedEndpoints.clear();
+return cachedEndpoints.get(t);
 }
 
 /**
@@ -101,10 +116,20 @@ public 

[3/6] git commit: fix build

2013-11-26 Thread jbellis
fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc8a05ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc8a05ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc8a05ab

Branch: refs/heads/trunk
Commit: cc8a05ab6ac22f019e60ec79c11338d4c77d49c3
Parents: 8145c83
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:27:52 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:27:52 2013 -0600

--
 src/java/org/apache/cassandra/db/Table.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc8a05ab/src/java/org/apache/cassandra/db/Table.java
--
diff --git a/src/java/org/apache/cassandra/db/Table.java 
b/src/java/org/apache/cassandra/db/Table.java
index a851eee..e6df982 100644
--- a/src/java/org/apache/cassandra/db/Table.java
+++ b/src/java/org/apache/cassandra/db/Table.java
@@ -275,9 +275,6 @@ public class Table
 
 public void createReplicationStrategy(KSMetaData ksm)
 {
-if (replicationStrategy != null)
-
StorageService.instance.getTokenMetadata().unregister(replicationStrategy);
-
 replicationStrategy = 
AbstractReplicationStrategy.createReplicationStrategy(ksm.name,

 ksm.strategyClass,

 StorageService.instance.getTokenMetadata(),



[1/6] git commit: fix build

2013-11-26 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 8145c8356 - cc8a05ab6
  refs/heads/cassandra-2.0 504f66dc1 - e68d466eb
  refs/heads/trunk 1bfd062fd - c384d31b8


fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc8a05ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc8a05ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc8a05ab

Branch: refs/heads/cassandra-1.2
Commit: cc8a05ab6ac22f019e60ec79c11338d4c77d49c3
Parents: 8145c83
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:27:52 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:27:52 2013 -0600

--
 src/java/org/apache/cassandra/db/Table.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc8a05ab/src/java/org/apache/cassandra/db/Table.java
--
diff --git a/src/java/org/apache/cassandra/db/Table.java 
b/src/java/org/apache/cassandra/db/Table.java
index a851eee..e6df982 100644
--- a/src/java/org/apache/cassandra/db/Table.java
+++ b/src/java/org/apache/cassandra/db/Table.java
@@ -275,9 +275,6 @@ public class Table
 
 public void createReplicationStrategy(KSMetaData ksm)
 {
-if (replicationStrategy != null)
-
StorageService.instance.getTokenMetadata().unregister(replicationStrategy);
-
 replicationStrategy = 
AbstractReplicationStrategy.createReplicationStrategy(ksm.name,

 ksm.strategyClass,

 StorageService.instance.getTokenMetadata(),



[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-26 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c384d31b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c384d31b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c384d31b

Branch: refs/heads/trunk
Commit: c384d31b8b6289f896625a9fc5777bac62ebbbe8
Parents: 1bfd062 e68d466
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:28:16 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:28:16 2013 -0600

--
 src/java/org/apache/cassandra/db/Keyspace.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c384d31b/src/java/org/apache/cassandra/db/Keyspace.java
--



[5/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-26 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e68d466e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e68d466e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e68d466e

Branch: refs/heads/cassandra-2.0
Commit: e68d466eb226134a73469648af5085da43669fd8
Parents: 504f66d cc8a05a
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:27:58 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:27:58 2013 -0600

--
 src/java/org/apache/cassandra/db/Keyspace.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e68d466e/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --cc src/java/org/apache/cassandra/db/Keyspace.java
index 4914c11,000..0280ed2
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@@ -1,454 -1,0 +1,451 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db;
 +
 +import java.io.File;
 +import java.io.IOException;
 +import java.util.*;
 +import java.util.concurrent.ConcurrentHashMap;
 +import java.util.concurrent.ConcurrentMap;
 +import java.util.concurrent.Future;
 +import java.util.concurrent.locks.ReentrantReadWriteLock;
 +
 +import com.google.common.base.Function;
 +import com.google.common.collect.Iterables;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.config.KSMetaData;
 +import org.apache.cassandra.config.Schema;
 +import org.apache.cassandra.db.commitlog.CommitLog;
 +import org.apache.cassandra.db.filter.QueryFilter;
 +import org.apache.cassandra.db.index.SecondaryIndex;
 +import org.apache.cassandra.db.index.SecondaryIndexManager;
 +import org.apache.cassandra.io.sstable.SSTableReader;
 +import org.apache.cassandra.locator.AbstractReplicationStrategy;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.service.pager.QueryPagers;
 +import org.apache.cassandra.tracing.Tracing;
 +
 +/**
 + * It represents a Keyspace.
 + */
 +public class Keyspace
 +{
 +public static final String SYSTEM_KS = "system";
 +private static final int DEFAULT_PAGE_SIZE = 1;
 +
 +private static final Logger logger = 
LoggerFactory.getLogger(Keyspace.class);
 +
 +/**
 + * accesses to CFS.memtable should acquire this for thread safety.
 + * CFS.maybeSwitchMemtable should aquire the writeLock; see that method 
for the full explanation.
 + * <p/>
 + * (Enabling fairness in the RRWL is observed to decrease throughput, so 
we leave it off.)
 + */
 +public static final ReentrantReadWriteLock switchLock = new 
ReentrantReadWriteLock();
 +
 +// It is possible to call Keyspace.open without a running daemon, so it 
makes sense to ensure
 +// proper directories here as well as in CassandraDaemon.
 +static
 +{
 +if (!StorageService.instance.isClientMode())
 +DatabaseDescriptor.createAllDirectories();
 +}
 +
 +public final KSMetaData metadata;
 +
 +/* ColumnFamilyStore per column family */
 +private final ConcurrentMap<UUID, ColumnFamilyStore> columnFamilyStores = 
new ConcurrentHashMap<UUID, ColumnFamilyStore>();
 +private volatile AbstractReplicationStrategy replicationStrategy;
 +public static final Function<String,Keyspace> keyspaceTransformer = new 
Function<String, Keyspace>()
 +{
 +public Keyspace apply(String keyspaceName)
 +{
 +return Keyspace.open(keyspaceName);
 +}
 +};
 +
 +public static Keyspace open(String keyspaceName)
 +{
 +return open(keyspaceName, Schema.instance, true);
 +}
 +
 +public static Keyspace openWithoutSSTables(String keyspaceName)

[4/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-26 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e68d466e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e68d466e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e68d466e

Branch: refs/heads/trunk
Commit: e68d466eb226134a73469648af5085da43669fd8
Parents: 504f66d cc8a05a
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:27:58 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:27:58 2013 -0600

--
 src/java/org/apache/cassandra/db/Keyspace.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e68d466e/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --cc src/java/org/apache/cassandra/db/Keyspace.java
index 4914c11,000..0280ed2
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@@ -1,454 -1,0 +1,451 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db;
 +
 +import java.io.File;
 +import java.io.IOException;
 +import java.util.*;
 +import java.util.concurrent.ConcurrentHashMap;
 +import java.util.concurrent.ConcurrentMap;
 +import java.util.concurrent.Future;
 +import java.util.concurrent.locks.ReentrantReadWriteLock;
 +
 +import com.google.common.base.Function;
 +import com.google.common.collect.Iterables;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.config.KSMetaData;
 +import org.apache.cassandra.config.Schema;
 +import org.apache.cassandra.db.commitlog.CommitLog;
 +import org.apache.cassandra.db.filter.QueryFilter;
 +import org.apache.cassandra.db.index.SecondaryIndex;
 +import org.apache.cassandra.db.index.SecondaryIndexManager;
 +import org.apache.cassandra.io.sstable.SSTableReader;
 +import org.apache.cassandra.locator.AbstractReplicationStrategy;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.service.pager.QueryPagers;
 +import org.apache.cassandra.tracing.Tracing;
 +
 +/**
 + * It represents a Keyspace.
 + */
 +public class Keyspace
 +{
 +public static final String SYSTEM_KS = "system";
 +private static final int DEFAULT_PAGE_SIZE = 1;
 +
 +private static final Logger logger = 
LoggerFactory.getLogger(Keyspace.class);
 +
 +/**
 + * accesses to CFS.memtable should acquire this for thread safety.
 + * CFS.maybeSwitchMemtable should aquire the writeLock; see that method 
for the full explanation.
 + * <p/>
 + * (Enabling fairness in the RRWL is observed to decrease throughput, so 
we leave it off.)
 + */
 +public static final ReentrantReadWriteLock switchLock = new 
ReentrantReadWriteLock();
 +
 +// It is possible to call Keyspace.open without a running daemon, so it 
makes sense to ensure
 +// proper directories here as well as in CassandraDaemon.
 +static
 +{
 +if (!StorageService.instance.isClientMode())
 +DatabaseDescriptor.createAllDirectories();
 +}
 +
 +public final KSMetaData metadata;
 +
 +/* ColumnFamilyStore per column family */
 +private final ConcurrentMap<UUID, ColumnFamilyStore> columnFamilyStores = 
new ConcurrentHashMap<UUID, ColumnFamilyStore>();
 +private volatile AbstractReplicationStrategy replicationStrategy;
 +public static final Function<String,Keyspace> keyspaceTransformer = new 
Function<String, Keyspace>()
 +{
 +public Keyspace apply(String keyspaceName)
 +{
 +return Keyspace.open(keyspaceName);
 +}
 +};
 +
 +public static Keyspace open(String keyspaceName)
 +{
 +return open(keyspaceName, Schema.instance, true);
 +}
 +
 +public static Keyspace openWithoutSSTables(String keyspaceName)
 +{

[2/6] git commit: fix build

2013-11-26 Thread jbellis
fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc8a05ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc8a05ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc8a05ab

Branch: refs/heads/cassandra-2.0
Commit: cc8a05ab6ac22f019e60ec79c11338d4c77d49c3
Parents: 8145c83
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:27:52 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:27:52 2013 -0600

--
 src/java/org/apache/cassandra/db/Table.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc8a05ab/src/java/org/apache/cassandra/db/Table.java
--
diff --git a/src/java/org/apache/cassandra/db/Table.java 
b/src/java/org/apache/cassandra/db/Table.java
index a851eee..e6df982 100644
--- a/src/java/org/apache/cassandra/db/Table.java
+++ b/src/java/org/apache/cassandra/db/Table.java
@@ -275,9 +275,6 @@ public class Table
 
 public void createReplicationStrategy(KSMetaData ksm)
 {
-if (replicationStrategy != null)
-
StorageService.instance.getTokenMetadata().unregister(replicationStrategy);
-
 replicationStrategy = 
AbstractReplicationStrategy.createReplicationStrategy(ksm.name,

 ksm.strategyClass,

 StorageService.instance.getTokenMetadata(),



[4/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-26 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab4cc9c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab4cc9c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab4cc9c0

Branch: refs/heads/trunk
Commit: ab4cc9c007adb65631fb6f1dce614437e6856fd1
Parents: e68d466 5f62610
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:34:43 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:34:43 2013 -0600

--
 src/java/org/apache/cassandra/locator/PropertyFileSnitch.java | 2 +-
 src/java/org/apache/cassandra/locator/TokenMetadata.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab4cc9c0/src/java/org/apache/cassandra/locator/TokenMetadata.java
--



[2/6] git commit: fix build more

2013-11-26 Thread jbellis
fix build more


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f626109
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f626109
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f626109

Branch: refs/heads/cassandra-2.0
Commit: 5f62610969a83a1c33e06a3cf20a136961bd0a08
Parents: cc8a05a
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:34:37 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:34:37 2013 -0600

--
 src/java/org/apache/cassandra/locator/PropertyFileSnitch.java | 2 +-
 src/java/org/apache/cassandra/locator/TokenMetadata.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f626109/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java 
b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
index da440af..9138bc2 100644
--- a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
+++ b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
@@ -188,7 +188,7 @@ public class PropertyFileSnitch extends 
AbstractNetworkTopologySnitch
 logger.debug("loaded network topology {}", 
FBUtilities.toString(reloadedMap));
 endpointMap = reloadedMap;
 if (StorageService.instance != null) // null check tolerates circular 
dependency; see CASSANDRA-4145
-StorageService.instance.getTokenMetadata().invalidateCaches();
+StorageService.instance.getTokenMetadata().invalidateCachedRings();
 
 if (gossipStarted)
 StorageService.instance.gossipSnitchInfo();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f626109/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 818ca8f..b724894 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -1054,7 +1054,7 @@ public class TokenMetadata
 return ringVersion;
 }
 
-private void invalidateCachedRings()
+public void invalidateCachedRings()
 {
 ringVersion++;
 }



[1/6] git commit: fix build more

2013-11-26 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 cc8a05ab6 -> 5f6261096
  refs/heads/cassandra-2.0 e68d466eb -> ab4cc9c00
  refs/heads/trunk c384d31b8 -> a13c6dcbb


fix build more


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f626109
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f626109
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f626109

Branch: refs/heads/cassandra-1.2
Commit: 5f62610969a83a1c33e06a3cf20a136961bd0a08
Parents: cc8a05a
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:34:37 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:34:37 2013 -0600

--
 src/java/org/apache/cassandra/locator/PropertyFileSnitch.java | 2 +-
 src/java/org/apache/cassandra/locator/TokenMetadata.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f626109/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java 
b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
index da440af..9138bc2 100644
--- a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
+++ b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
@@ -188,7 +188,7 @@ public class PropertyFileSnitch extends 
AbstractNetworkTopologySnitch
 logger.debug("loaded network topology {}", 
FBUtilities.toString(reloadedMap));
 endpointMap = reloadedMap;
 if (StorageService.instance != null) // null check tolerates circular 
dependency; see CASSANDRA-4145
-StorageService.instance.getTokenMetadata().invalidateCaches();
+StorageService.instance.getTokenMetadata().invalidateCachedRings();
 
 if (gossipStarted)
 StorageService.instance.gossipSnitchInfo();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f626109/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 818ca8f..b724894 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -1054,7 +1054,7 @@ public class TokenMetadata
 return ringVersion;
 }
 
-private void invalidateCachedRings()
+public void invalidateCachedRings()
 {
 ringVersion++;
 }



[3/6] git commit: fix build more

2013-11-26 Thread jbellis
fix build more


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f626109
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f626109
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f626109

Branch: refs/heads/trunk
Commit: 5f62610969a83a1c33e06a3cf20a136961bd0a08
Parents: cc8a05a
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:34:37 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:34:37 2013 -0600

--
 src/java/org/apache/cassandra/locator/PropertyFileSnitch.java | 2 +-
 src/java/org/apache/cassandra/locator/TokenMetadata.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f626109/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java 
b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
index da440af..9138bc2 100644
--- a/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
+++ b/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
@@ -188,7 +188,7 @@ public class PropertyFileSnitch extends 
AbstractNetworkTopologySnitch
 logger.debug("loaded network topology {}", 
FBUtilities.toString(reloadedMap));
 endpointMap = reloadedMap;
 if (StorageService.instance != null) // null check tolerates circular 
dependency; see CASSANDRA-4145
-StorageService.instance.getTokenMetadata().invalidateCaches();
+StorageService.instance.getTokenMetadata().invalidateCachedRings();
 
 if (gossipStarted)
 StorageService.instance.gossipSnitchInfo();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f626109/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 818ca8f..b724894 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -1054,7 +1054,7 @@ public class TokenMetadata
 return ringVersion;
 }
 
-private void invalidateCachedRings()
+public void invalidateCachedRings()
 {
 ringVersion++;
 }



[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-26 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a13c6dcb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a13c6dcb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a13c6dcb

Branch: refs/heads/trunk
Commit: a13c6dcbb7a6ba74b27e50eef4bfd0f80eaea121
Parents: c384d31 ab4cc9c
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:34:50 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:34:50 2013 -0600

--
 src/java/org/apache/cassandra/locator/PropertyFileSnitch.java | 2 +-
 src/java/org/apache/cassandra/locator/TokenMetadata.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a13c6dcb/src/java/org/apache/cassandra/locator/PropertyFileSnitch.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a13c6dcb/src/java/org/apache/cassandra/locator/TokenMetadata.java
--



[5/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-26 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab4cc9c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab4cc9c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab4cc9c0

Branch: refs/heads/cassandra-2.0
Commit: ab4cc9c007adb65631fb6f1dce614437e6856fd1
Parents: e68d466 5f62610
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:34:43 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:34:43 2013 -0600

--
 src/java/org/apache/cassandra/locator/PropertyFileSnitch.java | 2 +-
 src/java/org/apache/cassandra/locator/TokenMetadata.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab4cc9c0/src/java/org/apache/cassandra/locator/TokenMetadata.java
--



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-26 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f15d6d59
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f15d6d59
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f15d6d59

Branch: refs/heads/trunk
Commit: f15d6d591a569eb399bcd2b472b65ed1f8e15b90
Parents: a13c6dc f15681b
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:39:47 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:39:47 2013 -0600

--
 src/java/org/apache/cassandra/locator/TokenMetadata.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f15d6d59/src/java/org/apache/cassandra/locator/TokenMetadata.java
--



[1/3] git commit: fix merge

2013-11-26 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 ab4cc9c00 -> f15681b67
  refs/heads/trunk a13c6dcbb -> f15d6d591


fix merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f15681b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f15681b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f15681b6

Branch: refs/heads/cassandra-2.0
Commit: f15681b6795d8a0b192100e37bec4080db2b7edb
Parents: ab4cc9c
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:39:21 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:39:21 2013 -0600

--
 src/java/org/apache/cassandra/locator/TokenMetadata.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f15681b6/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 47e91c9..635c010 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -976,7 +976,7 @@ public class TokenMetadata
 return sb.toString();
 }
 
-public Collection<InetAddress> pendingEndpointsFor(Token token, String 
table)
+public Collection<InetAddress> pendingEndpointsFor(Token token, String 
keyspaceName)
 {
 Map<Range<Token>, Collection<InetAddress>> ranges = 
getPendingRanges(keyspaceName);
 if (ranges.isEmpty())



[2/3] git commit: fix merge

2013-11-26 Thread jbellis
fix merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f15681b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f15681b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f15681b6

Branch: refs/heads/trunk
Commit: f15681b6795d8a0b192100e37bec4080db2b7edb
Parents: ab4cc9c
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:39:21 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:39:21 2013 -0600

--
 src/java/org/apache/cassandra/locator/TokenMetadata.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f15681b6/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 47e91c9..635c010 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -976,7 +976,7 @@ public class TokenMetadata
 return sb.toString();
 }
 
-public Collection<InetAddress> pendingEndpointsFor(Token token, String 
table)
+public Collection<InetAddress> pendingEndpointsFor(Token token, String 
keyspaceName)
 {
 Map<Range<Token>, Collection<InetAddress>> ranges = 
getPendingRanges(keyspaceName);
 if (ranges.isEmpty())



[6/7] git commit: merge from 1.2

2013-11-26 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e825905
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e825905
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e825905

Branch: refs/heads/cassandra-2.0
Commit: 8e82590506f0747780ed973db0b6afbd481a7c23
Parents: f15681b 3f66fbf
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 15:18:53 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 15:18:53 2013 -0600

--
 conf/cassandra-env.sh | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e825905/conf/cassandra-env.sh
--
diff --cc conf/cassandra-env.sh
index e8aa3a4,b5aea38..e229297
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@@ -192,11 -184,15 +192,9 @@@ f
  
  startswith() { [ "${1#$2}" != "$1" ]; }
  
 -if [ "`uname`" = "Linux" ] ; then
 -# reduce the per-thread stack size to minimize the impact of Thrift
 -# thread-per-client.  (Best practice is for client connections to
 -# be pooled anyway.) Only do so on Linux where it is known to be
 -# supported.
 -# u34 and greater need 180k
 -JVM_OPTS="$JVM_OPTS -Xss256k"
 -fi
 +# Per-thread stack size.
 +JVM_OPTS="$JVM_OPTS -Xss256k"
  
- echo "xss = $JVM_OPTS"
- 
  # GC tuning options
  JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 
  JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC" 



[5/7] git commit: merge from 1.2

2013-11-26 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e825905
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e825905
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e825905

Branch: refs/heads/trunk
Commit: 8e82590506f0747780ed973db0b6afbd481a7c23
Parents: f15681b 3f66fbf
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 15:18:53 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 15:18:53 2013 -0600

--
 conf/cassandra-env.sh | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e825905/conf/cassandra-env.sh
--
diff --cc conf/cassandra-env.sh
index e8aa3a4,b5aea38..e229297
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@@ -192,11 -184,15 +192,9 @@@ f
  
  startswith() { [ "${1#$2}" != "$1" ]; }
  
 -if [ "`uname`" = "Linux" ] ; then
 -# reduce the per-thread stack size to minimize the impact of Thrift
 -# thread-per-client.  (Best practice is for client connections to
 -# be pooled anyway.) Only do so on Linux where it is known to be
 -# supported.
 -# u34 and greater need 180k
 -JVM_OPTS="$JVM_OPTS -Xss256k"
 -fi
 +# Per-thread stack size.
 +JVM_OPTS="$JVM_OPTS -Xss256k"
  
- echo "xss = $JVM_OPTS"
- 
  # GC tuning options
  JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 
  JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC" 



[3/7] git commit: r/m debug output

2013-11-26 Thread jbellis
r/m debug output


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f66fbfc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f66fbfc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f66fbfc

Branch: refs/heads/trunk
Commit: 3f66fbfc63c728778325e3be958019a0da1b47d5
Parents: 5f62610
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 15:18:09 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 15:18:09 2013 -0600

--
 conf/cassandra-env.sh | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f66fbfc/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 52d91b6..b5aea38 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -192,7 +192,6 @@ if [ "`uname`" = "Linux" ] ; then
 # u34 and greater need 180k
 JVM_OPTS="$JVM_OPTS -Xss256k"
 fi
-echo "xss = $JVM_OPTS"
 
 # GC tuning options
 JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 



[1/7] git commit: add snapshot space used to cfstats patch by Mikhail Stepura; reviewed by jbellis for CASSANDRA-6231

2013-11-26 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 5f6261096 -> 3f66fbfc6
  refs/heads/cassandra-2.0 f15681b67 -> 8e8259050
  refs/heads/trunk f15d6d591 -> c1d7291c8


add snapshot space used to cfstats
patch by Mikhail Stepura; reviewed by jbellis for CASSANDRA-6231


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e178ff45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e178ff45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e178ff45

Branch: refs/heads/trunk
Commit: e178ff45c0510c56257c26da2dc8d082ba301522
Parents: f15d6d5
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 14:42:05 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 14:49:17 2013 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  5 ++
 .../cassandra/db/ColumnFamilyStoreMBean.java|  5 ++
 .../org/apache/cassandra/db/Directories.java| 90 +++-
 .../cassandra/metrics/ColumnFamilyMetrics.java  | 15 +++-
 .../org/apache/cassandra/tools/NodeCmd.java |  1 +
 .../org/apache/cassandra/tools/NodeProbe.java   |  1 +
 7 files changed, 114 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e178ff45/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7bb1fa0..00797e6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -18,6 +18,7 @@
 2.0.4
  * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
  * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
+ * Add snapshot space used to cfstats (CASSANDRA-6231)
 Merged from 1.2:
  * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
  * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e178ff45/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 7037635..ccc15ab 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2356,4 +2356,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 Pair<ReplayPosition, Long> truncationRecord = 
SystemKeyspace.getTruncationRecords().get(metadata.cfId);
 return truncationRecord == null ? Long.MIN_VALUE : 
truncationRecord.right;
 }
+
+public long trueSnapshotsSize()
+{
+return directories.trueSnapshotsSize();
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e178ff45/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
index 1ca922b..fc1a7b1 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
@@ -344,4 +344,9 @@ public interface ColumnFamilyStoreMBean
  * @return ratio
  */
 public double getDroppableTombstoneRatio();
+
+/**
+ * @return the size of SSTables in "snapshots" subdirectory which aren't 
live anymore
+ */
+public long trueSnapshotsSize();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e178ff45/src/java/org/apache/cassandra/db/Directories.java
--
diff --git a/src/java/org/apache/cassandra/db/Directories.java 
b/src/java/org/apache/cassandra/db/Directories.java
index 9795a27..ea5c2f4 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -17,16 +17,25 @@
  */
 package org.apache.cassandra.db;
 
+import static com.google.common.collect.Sets.newHashSet;
+
 import java.io.File;
 import java.io.FileFilter;
 import java.io.IOError;
 import java.io.IOException;
+import java.nio.file.FileVisitResult;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.SimpleFileVisitor;
+import java.nio.file.attribute.BasicFileAttributes;
 import java.util.*;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
 
 import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.ImmutableSet.Builder;
 import com.google.common.primitives.Longs;
 import com.google.common.util.concurrent.Uninterruptibles;
 

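The new java.nio.file imports in the Directories hunk above are pulled in so snapshot space can be totalled by walking the snapshot directories. A rough, self-contained sketch of that kind of traversal, under the assumption of a hypothetical sizeOfTree helper and an arbitrary directory argument (not the actual Directories.trueSnapshotsSize implementation):

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class SnapshotSizeSketch
{
    // Sum the sizes of all regular files under the given directory tree.
    static long sizeOfTree(Path root) throws IOException
    {
        final long[] total = { 0 };
        Files.walkFileTree(root, new SimpleFileVisitor<Path>()
        {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
            {
                total[0] += attrs.size(); // size comes from the visit attributes, no extra stat call
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }

    public static void main(String[] args) throws IOException
    {
        // Hypothetical invocation; a real table keeps a "snapshots" subdirectory under each data directory.
        System.out.println(sizeOfTree(Paths.get(args[0])) + " bytes");
    }
}
{code}
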
[7/7] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-26 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1d7291c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1d7291c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1d7291c

Branch: refs/heads/trunk
Commit: c1d7291c8d25808b76daebf4db1a55901f01f3ae
Parents: e178ff4 8e82590
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 15:19:11 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 15:19:11 2013 -0600

--
 conf/cassandra-env.sh | 2 --
 1 file changed, 2 deletions(-)
--




[2/7] git commit: r/m debug output

2013-11-26 Thread jbellis
r/m debug output


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f66fbfc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f66fbfc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f66fbfc

Branch: refs/heads/cassandra-1.2
Commit: 3f66fbfc63c728778325e3be958019a0da1b47d5
Parents: 5f62610
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 15:18:09 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 15:18:09 2013 -0600

--
 conf/cassandra-env.sh | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f66fbfc/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 52d91b6..b5aea38 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -192,7 +192,6 @@ if [ "`uname`" = "Linux" ] ; then
 # u34 and greater need 180k
 JVM_OPTS="$JVM_OPTS -Xss256k"
 fi
-echo "xss = $JVM_OPTS"
 
 # GC tuning options
 JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 



[4/7] git commit: r/m debug output

2013-11-26 Thread jbellis
r/m debug output


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f66fbfc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f66fbfc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f66fbfc

Branch: refs/heads/cassandra-2.0
Commit: 3f66fbfc63c728778325e3be958019a0da1b47d5
Parents: 5f62610
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Nov 26 15:18:09 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Nov 26 15:18:09 2013 -0600

--
 conf/cassandra-env.sh | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f66fbfc/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 52d91b6..b5aea38 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -192,7 +192,6 @@ if [ "`uname`" = "Linux" ] ; then
 # u34 and greater need 180k
 JVM_OPTS="$JVM_OPTS -Xss256k"
 fi
-echo "xss = $JVM_OPTS"
 
 # GC tuning options
 JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 



git commit: Fix cfstats not handling index CF

2013-11-26 Thread yukim
Updated Branches:
  refs/heads/trunk c1d7291c8 -> 41325c346


Fix cfstats not handling index CF

patch by Mikhail Stepura; reviewed by yukim for CASSANDRA-6406


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41325c34
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41325c34
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41325c34

Branch: refs/heads/trunk
Commit: 41325c346ab644dd5760a88ce78aadef583d062e
Parents: c1d7291
Author: Mikhail Stepura mikhail.step...@outlook.com
Authored: Tue Nov 26 15:23:45 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Nov 26 15:36:48 2013 -0600

--
 CHANGES.txt   |  2 +-
 .../org/apache/cassandra/tools/NodeProbe.java | 18 ++
 2 files changed, 7 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41325c34/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 00797e6..70e6f2e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -11,7 +11,7 @@
  * Remove CFDefinition (CASSANDRA-6253)
  * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
  * User-defined types for CQL3 (CASSANDRA-5590)
- * Use of o.a.c.metrics in nodetool (CASSANDRA-5871)
+ * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406)
  * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41325c34/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index 2489de6..b755ff3 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -917,6 +917,8 @@ public class NodeProbe
 {
 try
 {
+String type = cf.contains(".") ? "IndexColumnFamily": 
"ColumnFamily";
+ObjectName oName = new 
ObjectName(String.format("org.apache.cassandra.metrics:type=%s,keyspace=%s,scope=%s,name=%s",
 type, ks, cf, metricName));
 switch(metricName)
 {
 case BloomFilterDiskSpaceUsed:
@@ -936,31 +938,23 @@ public class NodeProbe
 case RecentBloomFilterFalsePositives:
 case RecentBloomFilterFalseRatio:
 case SnapshotsSize:
-return JMX.newMBeanProxy(mbeanServerConn,
-new 
ObjectName("org.apache.cassandra.metrics:type=ColumnFamily,keyspace=" + ks + 
",scope=" + cf + ",name=" + metricName),
-JmxReporter.GaugeMBean.class).getValue();
+return JMX.newMBeanProxy(mbeanServerConn, oName, 
JmxReporter.GaugeMBean.class).getValue();
 case LiveDiskSpaceUsed:
 case MemtableSwitchCount:
 case SpeculativeRetries:
 case TotalDiskSpaceUsed:
 case WriteTotalLatency:
 case ReadTotalLatency:
-return JMX.newMBeanProxy(mbeanServerConn,
-new 
ObjectName("org.apache.cassandra.metrics:type=ColumnFamily,keyspace=" + ks + 
",scope=" + cf + ",name=" + metricName),
-JmxReporter.CounterMBean.class).getCount();
+return JMX.newMBeanProxy(mbeanServerConn, oName, 
JmxReporter.CounterMBean.class).getCount();
 case ReadLatency:
 case CoordinatorReadLatency:
 case CoordinatorScanLatency:
 case WriteLatency:
-return JMX.newMBeanProxy(mbeanServerConn,
-new 
ObjectName("org.apache.cassandra.metrics:type=ColumnFamily,keyspace=" + ks + 
",scope=" + cf + ",name=" + metricName),
-JmxReporter.TimerMBean.class);
+return JMX.newMBeanProxy(mbeanServerConn, oName, 
JmxReporter.TimerMBean.class);
 case LiveScannedHistogram:
 case SSTablesPerReadHistogram:
 case TombstoneScannedHistogram:
-return JMX.newMBeanProxy(mbeanServerConn,
-new 
ObjectName("org.apache.cassandra.metrics:type=ColumnFamily,keyspace=" + ks + 
",scope=" + cf + ",name=" + metricName),
-JmxReporter.HistogramMBean.class);
+return JMX.newMBeanProxy(mbeanServerConn, oName, 
JmxReporter.HistogramMBean.class);
 default:
 throw new RuntimeException("Unknown column family 
 metric.");
 }


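The NodeProbe change above only consolidates the per-metric ObjectName construction into a single String.format call; reading the metric itself is ordinary JMX. A hedged sketch of fetching one such per-table metric over a JMX connection follows — the host, port, keyspace, table, metric name and attribute name are illustrative assumptions, not taken from the patch:

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CfMetricReadSketch
{
    public static void main(String[] args) throws Exception
    {
        // Cassandra's default JMX port is 7199; keyspace, table and metric below are made up.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            String type = "ColumnFamily"; // "IndexColumnFamily" for secondary-index tables, per the patch
            ObjectName name = new ObjectName(String.format(
                    "org.apache.cassandra.metrics:type=%s,keyspace=%s,scope=%s,name=%s",
                    type, "my_ks", "my_cf", "LiveDiskSpaceUsed"));
            // Counter-style metrics expose their reading as a "Count" attribute, gauges as "Value";
            // which attribute applies depends on the metric kind.
            Object value = conn.getAttribute(name, "Count");
            System.out.println(name + " = " + value);
        }
        finally
        {
            connector.close();
        }
    }
}
{code}
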

[jira] [Updated] (CASSANDRA-6086) Node refuses to start with exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers when find that some to be removed files are already removed

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6086:
--

Reviewer: Tyler Hobbs  (was: Yuki Morishita)
Assignee: Yuki Morishita  (was: Oleg Anastasyev)

 Node refuses to start with exception in 
 ColumnFamilyStore.removeUnfinishedCompactionLeftovers when find that some to 
 be removed files are already removed
 -

 Key: CASSANDRA-6086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6086
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Oleg Anastasyev
Assignee: Yuki Morishita
 Fix For: 2.0.4

 Attachments: 6086-v2.txt, removeUnfinishedCompactionLeftovers.txt


 Node refuses to start with
 {code}
 Caused by: java.lang.IllegalStateException: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables.
   at 
 org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:544)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:262)
 {code}
 IMO, there is no reason to refuse to start upon discovering that files which must 
 be removed are already removed. It looks like pure bug-diagnostic code and means 
 nothing to the operator (nor can he do anything about it).
 Replaced the throw of the exception with a diagnostic warning and continued 
 startup.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6234) Add metrics for native protocols

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6234:
--

Reviewer: Sylvain Lebresne

 Add metrics for native protocols
 

 Key: CASSANDRA-6234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6234
 Project: Cassandra
  Issue Type: New Feature
Reporter: Adam Hattrell
Assignee: Mikhail Stepura
 Attachments: CASSANDRA-2.0-6234.patch, 
 Oracle_Java_Mission_Control_2013-11-22_15-50-09.png


 It would be very useful to expose metrics related to the native protocol.
 Initially I have a user that would like to be able to monitor the usage of 
 native transport threads.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6389) Check first and last key to potentially skip SSTable for reads

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833143#comment-13833143
 ] 

Jonathan Ellis commented on CASSANDRA-6389:
---

The main collationcontroller path would be fine with an assert but there's a 
dozen or so other callers that would not be.  Going to leave it alone.

 Check first and last key to potentially skip SSTable for reads
 --

 Key: CASSANDRA-6389
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6389
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Attachments: 6389.patch


 In {{SSTableReader.getPosition()}}, we use a -1 result from a binary search 
 on the index summary to check if the requested key falls before the start of 
 the sstable.  Instead, we can directly compare the requested key with the 
 {{first}} and {{last}} keys for the sstable, which will allow us to also skip 
 keys that fall after the last key in the sstable.

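 A toy sketch of the guard described above: reject a lookup outright when the 
 requested key cannot fall inside the sstable's [first, last] bounds, before any 
 index-summary work. The class below is illustrative only and not the 
 SSTableReader API.
 {code}
 // Illustrative only: an sstable keeps its first and last keys, so a read can be
 // rejected up front when the requested key cannot fall inside that range.
 class KeyRangeGuard<K extends Comparable<K>>
 {
     private final K first;
     private final K last;
 
     KeyRangeGuard(K first, K last)
     {
         this.first = first;
         this.last = last;
     }
 
     // true when a key within [first, last] could match the request
     boolean mayContain(K requested)
     {
         return requested.compareTo(first) >= 0 && requested.compareTo(last) <= 0;
     }
 }
 {code}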


--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5493) Confusing output of CommandDroppedTasks

2013-11-26 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833148#comment-13833148
 ] 

Mikhail Stepura commented on CASSANDRA-5493:


[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

 Confusing output of CommandDroppedTasks
 ---

 Key: CASSANDRA-5493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Ondřej Černoš
Assignee: Mikhail Stepura
Priority: Minor

 We have 2 DCs, 3 nodes in each, using EC2 support. We are debugging nodetool 
 repair problems (roughly 1 out of 2 attempts just freezes). We looked into 
 the MessagingServiceBean to see what is going on using jmxterm. See the 
 following:
 {noformat}
 #mbean = org.apache.cassandra.net:type=MessagingService:
 CommandDroppedTasks = { 
  107.aaa.bbb.ccc = 0;
  166.ddd.eee.fff = 124320;
  10.ggg.hhh.iii = 0;
  107.jjj.kkk.lll = 0;
  166.mmm.nnn.ooo = 1336699;
  166.ppp.qqq.rrr = 1329171;
  10.sss.ttt.uuu = 0;
  107.vvv.www.xxx = 0;
 };
 {noformat}
 The problem with this output is it has 8 records. The node's neighbours (the 
 107 and 10 nodes) are mentioned twice in the output, once with their public 
 IPs and once with their private IPs. The nodes in remote DC (the 166 ones) 
 are reported only once. I am pretty sure this is a bug - the node should be 
 reported only with one of its addresses in all outputs from Cassandra and it 
 should be consistent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6146) CQL-native stress

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833149#comment-13833149
 ] 

Jonathan Ellis commented on CASSANDRA-6146:
---

Can you make the Quickstart even quicker for people unfamiliar w/ JMeter like 
me?  I'm thinking "here's how you insert a bunch of data" and "here's how you 
read a bunch of data".

 CQL-native stress
 -

 Key: CASSANDRA-6146
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6146
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis

 The existing CQL support in stress is not worth discussing.  We need to 
 start over, and we might as well kill two birds with one stone and move to 
 the native protocol while we're at it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6345) Endpoint cache invalidation causes CPU spike (on vnode rings?)

2013-11-26 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833150#comment-13833150
 ] 

Rick Branson commented on CASSANDRA-6345:
-

LGTM!

 Endpoint cache invalidation causes CPU spike (on vnode rings?)
 --

 Key: CASSANDRA-6345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6345
 Project: Cassandra
  Issue Type: Bug
 Environment: 30 nodes total, 2 DCs
 Cassandra 1.2.11
 vnodes enabled (256 per node)
Reporter: Rick Branson
Assignee: Jonathan Ellis
 Fix For: 1.2.13, 2.0.4

 Attachments: 6345-rbranson-v2.txt, 6345-rbranson.txt, 6345-v2.txt, 
 6345-v3.txt, 6345-v4.txt, 6345-v5.txt, 6345.txt, 
 half-way-thru-6345-rbranson-patch-applied.png


 We've observed that events which cause invalidation of the endpoint cache 
 (update keyspace, add/remove nodes, etc) in AbstractReplicationStrategy 
 result in several seconds of thundering herd behavior on the entire cluster. 
 A thread dump shows over a hundred threads (I stopped counting at that point) 
 with a backtrace like this:
 at java.net.Inet4Address.getAddress(Inet4Address.java:288)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:106)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:103)
 at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:351)
 at java.util.TreeMap.getEntry(TreeMap.java:322)
 at java.util.TreeMap.get(TreeMap.java:255)
 at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:200)
 at 
 com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:117)
 at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:74)
 at 
 com.google.common.collect.AbstractMultimap.putAll(AbstractMultimap.java:273)
 at com.google.common.collect.TreeMultimap.putAll(TreeMultimap.java:74)
 at 
 org.apache.cassandra.utils.SortedBiMultiValMap.create(SortedBiMultiValMap.java:60)
 at 
 org.apache.cassandra.locator.TokenMetadata.cloneOnlyTokenMap(TokenMetadata.java:598)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:104)
 at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2671)
 at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:375)
 It looks like there's a large amount of cost in the 
 TokenMetadata.cloneOnlyTokenMap that 
 AbstractReplicationStrategy.getNaturalEndpoints is calling each time there is 
 a cache miss for an endpoint. It seems as if this would only impact clusters 
 with large numbers of tokens, so it's probably a vnodes-only issue.
 Proposal: In AbstractReplicationStrategy.getNaturalEndpoints(), cache the 
 cloned TokenMetadata instance returned by TokenMetadata.cloneOnlyTokenMap(), 
 wrapping it with a lock to prevent stampedes, and clearing it in 
 clearEndpointCache(). Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumpion

2013-11-26 Thread Jason Harvey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833251#comment-13833251
 ] 

Jason Harvey commented on CASSANDRA-6405:
-

Just did some analysis under normal conditions. Typically, our nodes have less 
than 200k instances of org.apache.cassandra.db.CounterColumn. During this issue 
we had nearly 6 million instances, as shown above.

 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumpion
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey

 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump when the issue was happening in the wild. 
 Something notable in the class instance counts as reported by jhat. Here are 
 the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 sstable_size_in_mb: 160

[jira] [Updated] (CASSANDRA-3578) Multithreaded commitlog

2013-11-26 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-3578:


Attachment: oprate.svg
latency.svg

A patch for this is available for review at 
[3578-2|https://github.com/belliottsmith/cassandra/tree/iss-3578-2]

Already discussed:
- Chained headers
- Ensures commits are persistent, using the suggested synchronisation scheme 
(read/write lock)

Further changes:
- Writes are completely non-blocking unless the CLE is behind or you're using 
Batch CLE
- On activating a new CLS, we trigger a sync() of the log; so now we sync() 
every pollInterval elapsed, OR commit_log_segment_size_in_mb written, whichever 
condition is met first after the previous sync (see the sketch after this list). 
This allows us to stay a little ahead of pollInterval, giving us some breathing 
room during brief spikes in write load in excess of what the disk can handle.
- Once we've completely written a CLS we immediately close/unmap the buffer
- On any drop keyspace or column family command, or on a node drain, we force 
the recycling of any CLS in use at the time of the call (this addresses 
CASSANDRA-5911. I included it in this ticket as it was easier to think about 
both at once)
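
To make the dual sync trigger above concrete, here is a minimal, hypothetical 
sketch (the names are illustrative, not the actual CommitLog classes): sync when 
either the poll interval has elapsed or a segment's worth of bytes has been 
written since the last sync, whichever comes first.

{code}
// Hypothetical sketch of the dual sync trigger: time-based OR volume-based,
// whichever condition is hit first after the previous sync.
final class SyncTrigger
{
    private final long pollIntervalMillis;
    private final long segmentSizeBytes;
    private long lastSyncAt = System.currentTimeMillis();
    private long bytesSinceLastSync = 0;

    SyncTrigger(long pollIntervalMillis, long segmentSizeBytes)
    {
        this.pollIntervalMillis = pollIntervalMillis;
        this.segmentSizeBytes = segmentSizeBytes;
    }

    // Called after each append with the number of bytes just written.
    synchronized boolean shouldSync(long bytesWritten)
    {
        bytesSinceLastSync += bytesWritten;
        return System.currentTimeMillis() - lastSyncAt >= pollIntervalMillis
            || bytesSinceLastSync >= segmentSizeBytes;
    }

    synchronized void markSynced()
    {
        lastSyncAt = System.currentTimeMillis();
        bytesSinceLastSync = 0;
    }
}
{code}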

Some implementation detail changes:
- We maintain a separate cfDirty and cfClean set now, which we merge on demand, 
to avoid allocating/deallocating AtomicIntegers all of the time
- We now reject row mutations larger than HALF the size of a CL segment, as 
opposed to equal in size - this is to stop burning through lots of CLS if we try 
to switch to a new segment but then are beaten to allocating the first item in it.

Some future work:
- Could reasonably easily have a guaranteed non-blocking CL.add method, which 
yields a Future if blocking becomes necessary; this could allow us to 
short-circuit the write-path a little to reduce latency in the majority of 
cases where blocking doesn't happen
- Compressed CL to improve IO
- Need to improve error handling in CL in general

Note, Vijay, that I briefly switched to a simpler blocking approach to 
switching in a new segment, as you suggested you preferred the simpler 
approach, but I decided to revert to non-blocking, due to potential future 
dividends with this guarantee.

I've attached two graphs to demonstrate the effect of this patch in a real 
4-node cluster. Note the latency graph has a logarithmic y-axis, so this patch 
looks to be an order of magnitude better at worst write latency measured; also 
variance in latency at the tail end is lower. This is also why there are fewer 
measurements, as the stderr of the measurements was smaller, so stress finished 
earlier. Also a roughly 12% increase in maximum throughput on this particular 
cluster.

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 Current-CL.png, Multi-Threded-CL.png, latency.svg, oprate.svg, 
 parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)
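
A minimal sketch of the CAS-based space reservation idea referenced above 
(hypothetical names, mirroring the SlabAllocator.Region.allocate pattern rather 
than the actual CommitLog code): each writer claims a disjoint slice of the 
shared segment with a compareAndSet and then copies its serialized mutation into 
that slice without holding a lock.

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical segment: concurrent writers reserve disjoint regions via CAS.
final class Segment
{
    private final byte[] buffer;
    private final AtomicInteger allocated = new AtomicInteger(0);

    Segment(int capacity)
    {
        this.buffer = new byte[capacity];
    }

    // Returns the start offset of the reserved region, or -1 if the segment is full.
    int allocate(int size)
    {
        while (true)
        {
            int current = allocated.get();
            if (current + size > buffer.length)
                return -1; // caller must activate a new segment
            if (allocated.compareAndSet(current, current + size))
                return current; // this writer now owns [current, current + size)
        }
    }

    // Copy a serialized mutation into the region owned by this writer.
    void write(int offset, byte[] serializedMutation)
    {
        System.arraycopy(serializedMutation, 0, buffer, offset, serializedMutation.length);
    }
}
{code}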



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6199) Improve Stress Tool

2013-11-26 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833279#comment-13833279
 ] 

Benedict commented on CASSANDRA-6199:
-

Some further improvements to this patch now in the git repo:
- Java Driver support now baked in (still no arbitrary CQL support though)
- Smart thrift routing - piggybacking off of Java Driver, to direct thrift 
queries

 Improve Stress Tool
 ---

 Key: CASSANDRA-6199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6199
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Attachments: new.read.latency.svg, new.read.rate.distribution.svg, 
 new.write.latency.svg, new.write.rate.distribution.svg, old.read.latency.svg, 
 old.read.rate.distribution.svg, old.write.latency.svg, 
 old.write.rate.distribution.svg, ops.read.svg, ops.write.svg


 The stress tool could do with sprucing up. The following is a list of 
 essential improvements and things that would be nice to have.
 Essential:
 - Reduce variability of results, especially start/end tails. Do not trash 
 first/last 10% of readings
 - Reduce contention/overhead in stress to increase overall throughput
 - Short warm-up period, which is ignored for summary (or summarised 
 separately), though prints progress as usual. Potentially automatic detection 
 of rate levelling.
 - Better configurability and defaults for data generation - current column 
 generation populates columns with the same value for every row, which is very 
 easily compressible. Possibly introduce partial random data generator 
 (possibly dictionary-based random data generator)
 Nice to have:
 - Calculate and print stdev and mean
 - Add batched sequential access mode (where a single thread performs 
 batch-size sequential requests before selecting another random key) to test 
 how key proximity affects performance
 - Auto-mode which attempts to establish the maximum throughput rate, by 
 varying the thread count (or otherwise gating the number of parallel 
 requests) for some period, then configures rate limit or thread count to test 
 performance at e.g. 30%, 50%, 70%, 90%, 120%, 150% and unconstrained.
 - Auto-mode could have a target variance ratio for mean throughput and/or 
 latency, and completes a test once this target is hit for x intervals
 - Fix key representation so it is independent of the number of keys (possibly 
 switch to 10 digit hex), and don't use String.format().getBytes() to construct 
 it (expensive) - a sketch of this appears below
 Also, remove the skip-key setting, as it is currently ignored. Unless 
 somebody knows the reason for it.
 - Fix latency stats
 - Read/write mode, with configurable recency-of-reads distribution
 - Add new exponential/extreme value distribution for value size, column count 
 and recency-of-reads
 - Support more than 2^31 keys
 - Supports multiple concurrent stress inserts via key-offset parameter or 
 similar
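
As a hedged illustration of the fixed-width key idea in the list above (purely 
illustrative, not the stress tool's actual code): render the key index as 10 hex 
digits directly into a byte[] instead of going through String.format().getBytes().

{code}
// Hypothetical sketch: fixed-width, 10-digit hex keys without an intermediate String.
final class HexKeys
{
    private static final byte[] DIGITS = "0123456789abcdef".getBytes();

    static byte[] toKey(long index)
    {
        byte[] key = new byte[10];
        for (int i = 9; i >= 0; i--)
        {
            key[i] = DIGITS[(int) (index & 0xF)];
            index >>>= 4;
        }
        return key; // 10 hex digits cover 2^40 distinct keys at a constant width
    }
}
{code}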



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6408) Efficient multi-partition mutations

2013-11-26 Thread Rick Branson (JIRA)
Rick Branson created CASSANDRA-6408:
---

 Summary: Efficient multi-partition mutations
 Key: CASSANDRA-6408
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6408
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson


At the SF Summit this year, Sylvain suggested that C* drops a very large amount 
of write throughput on the floor for multi-partition mutations because they are 
broken into RowMutations and executed individually. Stress tests that I've run 
show 10X the throughput for 1-row x 1000-col writes versus 1000-row x 1-col 
writes. We have a core high-write-skew use case which involves fan-out-on-write 
against hundreds or up to thousands of keys at a time currently implemented in 
Redis as it doesn't seem to suffer from the issue. Would love to be able to 
move this to C* at some point.

This is likely a pretty large undertaking as it would require touching a large 
portion of the write path, but I figure I'd put it here for comment and/or 
debate at this point.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6409) gossip performance improvement at node startup

2013-11-26 Thread Quentin Conner (JIRA)
Quentin Conner created CASSANDRA-6409:
-

 Summary: gossip performance improvement at node startup
 Key: CASSANDRA-6409
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6409
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner


With large clusters (> 500 nodes) and num_tokens > 255 we sometimes see a node 
have trouble starting up.  CPU usage for one thread is pegged.

We see this concurrent with Gossip flaps on the node trying to learn the ring 
topology.  Other nodes on the ring that are already at steady state do not 
seem to suffer.  It is the node joining the large ring that has trouble.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6409) gossip performance improvement at node startup

2013-11-26 Thread Quentin Conner (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833316#comment-13833316
 ] 

Quentin Conner commented on CASSANDRA-6409:
---

Sub-ticket for the CPU peg at startup symptom.

 gossip performance improvement at node startup
 --

 Key: CASSANDRA-6409
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6409
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner

 With large clusters (> 500 nodes) and num_tokens > 255 we sometimes see a 
 node have trouble starting up.  CPU usage for one thread is pegged.
 We see this concurrent with Gossip flaps on the node trying to learn the ring 
 topology.  Other nodes on the ring that are already at steady state do not 
 seem to suffer.  It is the node joining the large ring that has trouble.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6409) gossip performance improvement at node startup

2013-11-26 Thread Quentin Conner (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Quentin Conner updated CASSANDRA-6409:
--

Attachment: 2013-11-26_17-40-08.png

Taken about 10 minutes after node startup.  Gossip should have settled down by 
now.

 gossip performance improvement at node startup
 --

 Key: CASSANDRA-6409
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6409
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner
 Attachments: 2013-11-26_17-40-08.png


 With large clusters (> 500 nodes) and num_tokens > 255 we sometimes see a 
 node have trouble starting up.  CPU usage for one thread is pegged.
 We see this concurrent with Gossip flaps on the node trying to learn the ring 
 topology.  Other nodes on the ring that are already at steady state do not 
 seem to suffer.  It is the node joining the large ring that has trouble.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6410) gossip memory usage improvement

2013-11-26 Thread Quentin Conner (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833344#comment-13833344
 ] 

Quentin Conner commented on CASSANDRA-6410:
---

Will see if we can find the responsible class with memory profiling

 gossip memory usage improvement
 ---

 Key: CASSANDRA-6410
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6410
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner

 It looks to me that any given node will need ~2 MB of Java VM heap for each 
 other node in the ring.  This was observed with num_tokens=512 but still 
 seems excessive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6410) gossip memory usage improvement

2013-11-26 Thread Quentin Conner (JIRA)
Quentin Conner created CASSANDRA-6410:
-

 Summary: gossip memory usage improvement
 Key: CASSANDRA-6410
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6410
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner


It looks to me that any given node will need ~2 MB of Java VM heap for each 
other node in the ring.  This was observed with num_tokens=512 but still seems 
excessive.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6409) gossip performance improvement at node startup

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6409:
--

Attachment: endpointToTokenMapCPU.txt

Patch attached to move the Multimap computation into only the block where it is 
used and not each gossip update.
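
As a hedged illustration of the change described (invented names, not the 
attached patch): build the endpoint-to-token Multimap only inside the branch 
that actually needs it, rather than unconditionally on every gossip update.

{code}
import java.net.InetAddress;
import java.util.Map;

import com.google.common.collect.HashMultimap;
import com.google.common.collect.Multimap;

// Hypothetical before/after: hoist an expensive Multimap inversion out of the
// common gossip-update path and into the only branch that uses it.
final class TokenMetadataSketch
{
    // Before: the inversion is paid on every update, even when unused.
    void onGossipUpdateBefore(Map<String, InetAddress> tokenToEndpoint, boolean topologyChanged)
    {
        Multimap<InetAddress, String> endpointToTokens = invert(tokenToEndpoint);
        if (topologyChanged)
            rebuildRing(endpointToTokens);
    }

    // After: the inversion is paid only when the topology actually changed.
    void onGossipUpdateAfter(Map<String, InetAddress> tokenToEndpoint, boolean topologyChanged)
    {
        if (topologyChanged)
            rebuildRing(invert(tokenToEndpoint));
    }

    private Multimap<InetAddress, String> invert(Map<String, InetAddress> tokenToEndpoint)
    {
        Multimap<InetAddress, String> result = HashMultimap.create();
        for (Map.Entry<String, InetAddress> entry : tokenToEndpoint.entrySet())
            result.put(entry.getValue(), entry.getKey());
        return result;
    }

    private void rebuildRing(Multimap<InetAddress, String> endpointToTokens)
    {
        // ring recomputation elided
    }
}
{code}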

 gossip performance improvement at node startup
 --

 Key: CASSANDRA-6409
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6409
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner
 Attachments: 2013-11-26_17-40-08.png, endpointToTokenMapCPU.txt


 With large clusters (> 500 nodes) and num_tokens > 255 we sometimes see a 
 node have trouble starting up.  CPU usage for one thread is pegged.
 We see this concurrent with Gossip flaps on the node trying to learn the ring 
 topology.  Other nodes on the ring that are already at steady state do not 
 seem to suffer.  It is the node joining the large ring that has trouble.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6410) gossip memory usage improvement

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6410:
--

Attachment: gossip-intern.txt

Patch to intern VersionedValue keys.  (Making them enum would be even better 
but then we'd lose the ability to cheat and extend Gossip w/ arbitrary 
payloads.)
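
A minimal, hypothetical sketch of the interning idea (not the attached patch): 
keep one canonical instance of each state key so repeated gossip payloads share 
it instead of allocating duplicate strings per endpoint.

{code}
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical interner: map every key to a single canonical instance.
final class KeyInterner
{
    private static final ConcurrentHashMap<String, String> CANONICAL = new ConcurrentHashMap<>();

    static String intern(String key)
    {
        String existing = CANONICAL.putIfAbsent(key, key);
        return existing == null ? key : existing;
    }
}
{code}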

 gossip memory usage improvement
 ---

 Key: CASSANDRA-6410
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6410
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner
 Attachments: gossip-intern.txt


 It looks to me that any given node will need ~2 MB of Java VM heap for each 
 other node in the ring.  This was observed with num_tokens=512 but still 
 seems excessive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2013-11-26 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6271:


Attachment: oprate.svg

I've uploaded a patch to 
[6271|https://github.com/belliottsmith/cassandra/tree/iss-6271].

This patch replaces AtomicSortedColumns with AtomicBTreeColumns in Memtable. 
The BTree is a custom CoW job that minimises memory utilisation without 
sacrificing performance. Modifications are performed in an actual batch (as 
opposed to a simulated batch, as is currently the case), using a builder to 
construct a new tree from as many parts of the old tree as possible. The 
fan-factor is currently set to 32, which was both experimentally and logically 
a good choice (I was expecting 16-32, to ensure we don't have too many cache 
lines for search in a given node, nor high merge costs).

Each node, and the BTree itself, are represented as just an Object[], with 
utility functions to operate on the root node. This is a little anti-OOP, but I 
don't think a full-fledged Set object is either called for or helpful here, as 
we have only a small number of operations we perform on the trees, and we'll 
only use it in fairly controlled circumstances; and this offers us some further 
memory savings.
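
As a deliberately simplified, hypothetical sketch of the copy-on-write idea (a 
single sorted Object[] standing in for a real multi-level node): an update copies 
only what it touches and returns a new root, so readers holding the old root are 
never affected. In the actual BTree, unmodified child nodes would be shared 
between the old and new trees rather than copied.

{code}
import java.util.Arrays;

// Simplified CoW sketch: one sorted Object[] "node"; elements must be mutually Comparable.
final class CowNode
{
    static Object[] insert(Object[] node, Object value)
    {
        int i = Arrays.binarySearch(node, value);
        if (i >= 0)
        {
            Object[] copy = node.clone(); // replace an existing element in a fresh copy
            copy[i] = value;
            return copy;
        }
        int insertAt = -i - 1; // binarySearch convention for "not found"
        Object[] copy = new Object[node.length + 1];
        System.arraycopy(node, 0, copy, 0, insertAt);
        copy[insertAt] = value;
        System.arraycopy(node, insertAt, copy, insertAt + 1, node.length - insertAt);
        return copy; // the old array remains visible to concurrent readers
    }
}
{code}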

In synthetic benchmarks, I found this BTree to be slower than TreeSet by around 
25-50%, but found that SnapTreeMap (when used in the same manner we use it in 
the code) was a similar degree slower again than the BTree, which was a good 
start. This is despite trying my best to induce worst-case behaviour from the 
BTree, and was persistent across all the tests I performed.

The stress benchmarks are pretty promising too. I've attached graphs of this 
patch against trunk, and against this patch combined with my patch for 3578. 
The result is roughly 50% greater throughput for writes, with both patches. In 
fact, it got to the point where a node was flat out just running stress against 
the 4x cluster. I think we can push this a little further with a patch for 
switch lock removal, but we'll probably only see a small uptick from that. I 
also tested the patch for highly contended writes of a small number of rows, 
and found no measurable difference in performance.

Note that this patch does not currently implement iterator removal for the 
BTree. I'm not sure yet how best to do it, and am leaning towards a simple 
replacement of the Column that's deleted with a special DeletedColumn that is 
filtered out on iteration. This would mean .size() would either have to be 
slower or inaccurate, so I need to do some analysis to determine if this is 
okay, or if I should bite the bullet and implement a full batch delete 
operation on the BTree, but this is a bit of a pig to do whilst ensuring a 
balanced tree, and sometimes wasteful. I could also not balance the tree, or 
only make a 'best effort' that avoids re-allocating the same node multiple 
times even if it would result in imbalance. 

Suggestions welcome.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6410) gossip memory usage improvement

2013-11-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6410:
--

Attachment: 6410-EnumMap.txt

Additional patch to reduce the NBHM overhead.
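
As a hedged, purely illustrative sketch of why an EnumMap-backed layout is 
cheaper than a general-purpose (non-blocking) hash map (the attached 
6410-EnumMap.txt is not reproduced here, and the enum below is invented): an 
EnumMap stores its values in a plain array indexed by ordinal, so it carries 
almost no per-entry overhead.

{code}
import java.util.EnumMap;
import java.util.Map;

// Hypothetical per-endpoint state keyed by an invented enum.
enum StateKey { STATUS, LOAD, SCHEMA, DC, RACK }

final class EndpointStateSketch
{
    private final Map<StateKey, String> values = new EnumMap<>(StateKey.class);

    synchronized void put(StateKey key, String value)
    {
        values.put(key, value);
    }

    synchronized String get(StateKey key)
    {
        return values.get(key);
    }
}
{code}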

 gossip memory usage improvement
 ---

 Key: CASSANDRA-6410
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6410
 Project: Cassandra
  Issue Type: Bug
Reporter: Quentin Conner
 Attachments: 6410-EnumMap.txt, gossip-intern.txt


 It looks to me that any given node will need ~2 MB of Java VM heap for each 
 other node in the ring.  This was observed with num_tokens=512 but still 
 seems excessive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2013-11-26 Thread Jason Harvey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833450#comment-13833450
 ] 

Jason Harvey commented on CASSANDRA-6405:
-

I have verified that an instance which exhibited the high instance count of 
CounterColumn classes returned to a lower count (from 5.5m to 180k) after the 
issue resolved itself, without a restart.

 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey

 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be thrashed for 20 minutes, fine for 5, 
 thrashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump when the issue was happening in the wild. 
 Something notable in the class instance counts as reported by jhat. Here are 
 the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 sstable_size_in_mb: 160
 

[jira] [Comment Edited] (CASSANDRA-5493) Confusing output of CommandDroppedTasks

2013-11-26 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833148#comment-13833148
 ] 

Mikhail Stepura edited comment on CASSANDRA-5493 at 11/27/13 6:04 AM:
--

[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

I'm asking because trying to figure out what's your setup. There quite a lot 
options for IP addresses and for places in YAML where to put them

* Addresses
** Amazon private
** Amazon public
** Amazon elastic?
** Other?

* Settings
** Broadcast address
** Listen address
** IPs for seeds (Snitch settings)
** RPC address
 



was (Author: mishail):
[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

 Confusing output of CommandDroppedTasks
 ---

 Key: CASSANDRA-5493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Ondřej Černoš
Assignee: Mikhail Stepura
Priority: Minor

 We have 2 DCs, 3 nodes in each, using EC2 support. We are debugging nodetool 
 repair problems (roughly 1 out of 2 attempts just freezes). We looked into 
 the MessagingServiceBean to see what is going on using jmxterm. See the 
 following:
 {noformat}
 #mbean = org.apache.cassandra.net:type=MessagingService:
 CommandDroppedTasks = { 
  107.aaa.bbb.ccc = 0;
  166.ddd.eee.fff = 124320;
  10.ggg.hhh.iii = 0;
  107.jjj.kkk.lll = 0;
  166.mmm.nnn.ooo = 1336699;
  166.ppp.qqq.rrr = 1329171;
  10.sss.ttt.uuu = 0;
  107.vvv.www.xxx = 0;
 };
 {noformat}
 The problem with this output is it has 8 records. The node's neighbours (the 
 107 and 10 nodes) are mentioned twice in the output, once with their public 
 IPs and once with their private IPs. The nodes in remote DC (the 166 ones) 
 are reported only once. I am pretty sure this is a bug - the node should be 
 reported only with one of its addresses in all outputs from Cassandra and it 
 should be consistent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5493) Confusing output of CommandDroppedTasks

2013-11-26 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833148#comment-13833148
 ] 

Mikhail Stepura edited comment on CASSANDRA-5493 at 11/27/13 6:04 AM:
--

[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

I'm asking because trying to figure out what's your setup. 
There are quite a lot options for IP addresses and for places in YAML where to 
put them as well

* Addresses
** Amazon private
** Amazon public
** Amazon elastic?
** Other?

* Settings
** Broadcast address
** Listen address
** IPs for seeds (Snitch settings)
** RPC address
 



was (Author: mishail):
[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

I'm asking because trying to figure out what's your setup. There quite a lot 
options for IP addresses and for places in YAML where to put them

* Addresses
** Amazon private
** Amazon public
** Amazon elastic?
** Other?

* Settings
** Broadcast address
** Listen address
** IPs for seeds (Snitch settings)
** RPC address
 


 Confusing output of CommandDroppedTasks
 ---

 Key: CASSANDRA-5493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Ondřej Černoš
Assignee: Mikhail Stepura
Priority: Minor

 We have 2 DCs, 3 nodes in each, using EC2 support. We are debugging nodetool 
 repair problems (roughly 1 out of 2 attempts just freezes). We looked into 
 the MessagingServiceBean to see what is going on using jmxterm. See the 
 following:
 {noformat}
 #mbean = org.apache.cassandra.net:type=MessagingService:
 CommandDroppedTasks = { 
  107.aaa.bbb.ccc = 0;
  166.ddd.eee.fff = 124320;
  10.ggg.hhh.iii = 0;
  107.jjj.kkk.lll = 0;
  166.mmm.nnn.ooo = 1336699;
  166.ppp.qqq.rrr = 1329171;
  10.sss.ttt.uuu = 0;
  107.vvv.www.xxx = 0;
 };
 {noformat}
 The problem with this output is it has 8 records. The node's neighbours (the 
 107 and 10 nodes) are mentioned twice in the output, once with their public 
 IPs and once with their private IPs. The nodes in remote DC (the 166 ones) 
 are reported only once. I am pretty sure this is a bug - the node should be 
 reported only with one of its addresses in all outputs from Cassandra and it 
 should be consistent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5493) Confusing output of CommandDroppedTasks

2013-11-26 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833148#comment-13833148
 ] 

Mikhail Stepura edited comment on CASSANDRA-5493 at 11/27/13 6:06 AM:
--

[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

I'm asking because I'm trying to figure out what your setup is. 
There are quite a lot of options for IP addresses, and for places in YAML where 
to put them as well

* Addresses
** Amazon private
** Amazon public
** Amazon elastic?
** Other?

* Settings
** Broadcast address
** Listen address
** IPs for seeds (seed provider)
** RPC address
 



was (Author: mishail):
[~ondrej.cernos] what is your seeds configuration? Which IP addresses do you 
use (for seeds) in _cassandra.yaml_?

I'm asking because trying to figure out what's your setup. 
There are quite a lot options for IP addresses and for places in YAML where to 
put them as well

* Addresses
** Amazon private
** Amazon public
** Amazon elastic?
** Other?

* Settings
** Broadcast address
** Listen address
** IPs for seeds (Snitch settings)
** RPC address
 


 Confusing output of CommandDroppedTasks
 ---

 Key: CASSANDRA-5493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Ondřej Černoš
Assignee: Mikhail Stepura
Priority: Minor

 We have 2 DCs, 3 nodes in each, using EC2 support. We are debugging nodetool 
 repair problems (roughly 1 out of 2 attempts just freezes). We looked into 
 the MessagingServiceBean to see what is going on using jmxterm. See the 
 following:
 {noformat}
 #mbean = org.apache.cassandra.net:type=MessagingService:
 CommandDroppedTasks = { 
  107.aaa.bbb.ccc = 0;
  166.ddd.eee.fff = 124320;
  10.ggg.hhh.iii = 0;
  107.jjj.kkk.lll = 0;
  166.mmm.nnn.ooo = 1336699;
  166.ppp.qqq.rrr = 1329171;
  10.sss.ttt.uuu = 0;
  107.vvv.www.xxx = 0;
 };
 {noformat}
 The problem with this output is it has 8 records. The node's neighbours (the 
 107 and 10 nodes) are mentioned twice in the output, once with their public 
 IPs and once with their private IPs. The nodes in remote DC (the 166 ones) 
 are reported only once. I am pretty sure this is a bug - the node should be 
 reported only with one of its addresses in all outputs from Cassandra and it 
 should be consistent.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2013-11-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833506#comment-13833506
 ] 

Jonathan Ellis commented on CASSANDRA-6271:
---

bq. Note that this patch does not currently implement iterator removal for the 
BTree.

I think the only place where we call remove on a CF iterator is in SQF.trim, 
which is only going to be operating on a read only CF implementation and not 
ASC/ABTC.  So it should actually be totally fine to just stub out 
{{iterator()}} with UnsupportedOperation.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2013-11-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13833512#comment-13833512
 ] 

Aleksey Yeschenko commented on CASSANDRA-6271:
--

Also CFS.removeDeletedColumnsOnly()/removeDroppedColumns(), which are called 
during memtable flush, among other places.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

