[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965066#comment-13965066
 ] 

Pavel Yaskevich commented on CASSANDRA-6694:


[~jbellis] I will leave this alone if you and others are fine with maintaining 
the code as it is in the patch set. The discussion I'm trying to have, and I 
presume others are interested in too, centers on the question of whether there 
is a better (cleaner, if you will) way to organize Cell: one that avoids 
unnecessary field allocation and keeps us from introducing static Impl classes, 
containing only static methods, that extend each other. I still don't 
understand why we would extend one class that has only static methods from 
another with the same method layout (e.g. DeletedCell.Impl extends Cell.Impl), 
which results in a bigger constant pool per class and has the bytecode 
implications I have previously described. From my point of view, it looks like 
we are basically trying to re-build inside of Cassandra what the JVM already 
provides as a platform.
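
For illustration only, a minimal self-contained sketch of the pattern being 
questioned (hypothetical names and bodies, not the actual patch code): nested 
static-only Impl classes where one extends another purely to share static 
helpers.

{noformat}
// Hypothetical, simplified reconstruction of the pattern under discussion.
interface Cell
{
    long timestamp();

    static class Impl
    {
        public static boolean isLive(Cell cell, long now)
        {
            return true; // placeholder body
        }
    }
}

interface DeletedCell extends Cell
{
    // Extends Cell.Impl only to inherit its static helpers; nothing is
    // shared at runtime, but each class carries a larger constant pool.
    static class Impl extends Cell.Impl
    {
        public static boolean isLive(Cell cell, long now)
        {
            return false; // a deleted cell is never live
        }
    }
}
{noformat}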

 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6997) Startup Error

2014-04-10 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965090#comment-13965090
 ] 

Brandon Williams commented on CASSANDRA-6997:
-

There's only one node? That doesn't make any sense.

 Startup Error
 -

 Key: CASSANDRA-6997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6997
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago)
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 RAM : 120 GB
 CPU core : 16
 Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz
Reporter: Varun Tahin
Priority: Minor
 Fix For: 2.0.4


 ERROR Log  : 
 root@atca11 bin]# ERROR 11:28:46,298 Exception in thread 
 Thread[Thread-2,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,329 Exception in thread Thread[Thread-3,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,337 Exception in thread Thread[Thread-4,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,345 Exception in thread Thread[Thread-5,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,368 Exception in thread Thread[Thread-6,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6997) Startup Error

2014-04-10 Thread Varun Tahin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965102#comment-13965102
 ] 

Varun Tahin commented on CASSANDRA-6997:


Yes, only one node.
Even with a single node, the error should not occur, right?




 Startup Error
 -

 Key: CASSANDRA-6997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6997
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago)
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 RAM : 120 GB
 CPU core : 16
 Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz
Reporter: Varun Tahin
Priority: Minor
 Fix For: 2.0.4


 ERROR Log  : 
 root@atca11 bin]# ERROR 11:28:46,298 Exception in thread 
 Thread[Thread-2,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,329 Exception in thread Thread[Thread-3,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,337 Exception in thread Thread[Thread-4,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,345 Exception in thread Thread[Thread-5,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,368 Exception in thread Thread[Thread-6,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6997) Startup Error

2014-04-10 Thread Varun Tahin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965216#comment-13965216
 ] 

Varun Tahin commented on CASSANDRA-6997:


Hello Brandon,

Have you tested Apache Cassandra 2.0.4 on RHEL 6.3?

Or do we need any additional packages on RHEL 6.3 to run Cassandra 2.0.4?





 Startup Error
 -

 Key: CASSANDRA-6997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6997
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago)
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 RAM : 120 GB
 CPU core : 16
 Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz
Reporter: Varun Tahin
Priority: Minor
 Fix For: 2.0.4


 ERROR Log  : 
 root@atca11 bin]# ERROR 11:28:46,298 Exception in thread 
 Thread[Thread-2,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,329 Exception in thread Thread[Thread-3,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,337 Exception in thread Thread[Thread-4,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,345 Exception in thread Thread[Thread-5,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,368 Exception in thread Thread[Thread-6,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7019) Major tombstone compaction

2014-04-10 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-7019:
--

 Summary: Major tombstone compaction
 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson


It should be possible to do a major tombstone compaction by including all 
sstables, but writing them out 1:1, meaning that if you have 10 sstables 
before, you will have 10 sstables after the compaction with the same data, 
minus all the expired tombstones.

We could do this in two ways:
# a nodetool command that includes _all_ sstables
# once we detect that an sstable has more than x% (20%?) expired tombstones, we 
start one of these compactions, and include all overlapping sstables that 
contain older data.
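
As a sketch of the 1:1 rewrite semantics described above (plain Java over toy 
types; the real implementation would operate on actual sstables and, per option 
2, must also consider overlapping sstables holding older data before dropping a 
tombstone):

{noformat}
import java.util.ArrayList;
import java.util.List;

public class MajorTombstoneCompactionSketch
{
    static class Cell
    {
        final String name;
        final boolean tombstone;
        final long deletedAtMillis;

        Cell(String name, boolean tombstone, long deletedAtMillis)
        {
            this.name = name;
            this.tombstone = tombstone;
            this.deletedAtMillis = deletedAtMillis;
        }
    }

    // Rewrites each "sstable" (a list of cells) one-to-one: N sstables in,
    // N sstables out, identical data minus tombstones older than gc grace.
    static List<List<Cell>> compact(List<List<Cell>> sstables, long nowMillis, long gcGraceMillis)
    {
        List<List<Cell>> out = new ArrayList<>();
        for (List<Cell> sstable : sstables)
        {
            List<Cell> rewritten = new ArrayList<>();
            for (Cell cell : sstable)
                if (!(cell.tombstone && nowMillis - cell.deletedAtMillis > gcGraceMillis))
                    rewritten.add(cell);
            out.add(rewritten);
        }
        return out;
    }
}
{noformat}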



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7020) Incorrect result of query WHERE token(key) < -9223372036854775808 when using Murmur3Partitioner

2014-04-10 Thread JIRA
Piotr Kołaczkowski created CASSANDRA-7020:
-

 Summary: Incorrect result of query WHERE token(key) < 
-9223372036854775808 when using Murmur3Partitioner
 Key: CASSANDRA-7020
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7020
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.0.6-snapshot 
Reporter: Piotr Kołaczkowski


{noformat}
cqlsh:test1> select * from test where token(key) < -9223372036854775807;

(0 rows)

cqlsh:test1> select * from test where token(key) < -9223372036854775808;

 key | value
-+--
   5 |   ee
  10 |j
   1 | 
   8 | 
   2 |  bbb
   4 |   dd
   7 | 
   6 |  fff
   9 | 
   3 |c
{noformat}

Expected: empty result.
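
The expectation follows from the token range: Murmur3Partitioner tokens are 
signed 64-bit values, so -9223372036854775808 is the minimum possible token and 
no token can sort below it. A one-line check in Java:

{noformat}
public class MinTokenCheck
{
    public static void main(String[] args)
    {
        // Murmur3Partitioner's minimum token equals Long.MIN_VALUE, so a
        // "token(key) < -9223372036854775808" predicate can match no row.
        System.out.println(-9223372036854775808L == Long.MIN_VALUE); // prints true
    }
}
{noformat}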



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6933) Optimise Read Comparison Costs in collectTimeOrderedData

2014-04-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6933:


Attachment: 6933.v5fix.txt

Just realised: we were all so busy dissecting the extra binary search that we 
missed a bug with the initial ordered comparison. If the next name queried is 
*less* than the next name in the ABSC, we will still increment our counter past 
the next name (when we should leave it where it is).

Attached a super simple diff that fixes this.
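
A self-contained sketch of the corrected cursor behaviour (toy String cells in 
place of the real comparator; the actual fix is the ArrayBackedSortedColumns 
diff committed later in this digest): when the queried name is less than the 
cell under the cursor, return null without advancing.

{noformat}
import java.util.Arrays;

public class SearchIteratorSketch
{
    private final String[] cells; // sorted ascending
    private int i = 0;

    SearchIteratorSketch(String[] cells) { this.cells = cells; }

    // Returns the cell equal to 'name', or null if absent; crucially, the
    // cursor must not advance past a cell greater than the queried name.
    String next(String name)
    {
        if (i >= cells.length)
            return null;
        int c = name.compareTo(cells[i]);
        if (c <= 0)
            return c < 0 ? null : cells[i++]; // the fixed behaviour
        int j = Arrays.binarySearch(cells, i + 1, cells.length, name);
        if (j < 0)
        {
            i = -j - 1; // insertion point: the next query resumes here
            return null;
        }
        i = j;
        return cells[i++];
    }

    public static void main(String[] args)
    {
        SearchIteratorSketch it = new SearchIteratorSketch(new String[]{ "b", "d", "f" });
        System.out.println(it.next("a")); // null; the cursor stays on "b"
        System.out.println(it.next("b")); // "b"
    }
}
{noformat}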

 Optimise Read Comparison Costs in collectTimeOrderedData
 

 Key: CASSANDRA-6933
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6933
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: 6933-v3.txt, 6933-v4.txt, 6933-v5.txt, 6933.v5fix.txt


 Introduce a new SearchIterator construct, which can be obtained from a 
 ColumnFamily, which permits efficiently iterating a subset of the cells in 
 ascending order. Essentially, it saves the previously visited position and 
 searches from there, but also tries to avoid searching the whole remaining 
 space if possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (CASSANDRA-6933) Optimise Read Comparison Costs in collectTimeOrderedData

2014-04-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reopened CASSANDRA-6933:
-


 Optimise Read Comparison Costs in collectTimeOrderedData
 

 Key: CASSANDRA-6933
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6933
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: 6933-v3.txt, 6933-v4.txt, 6933-v5.txt, 6933.v5fix.txt


 Introduce a new SearchIterator construct, which can be obtained from a 
 ColumnFamily, which permits efficiently iterating a subset of the cells in 
 ascending order. Essentially, it saves the previously visited position and 
 searches from there, but also tries to avoid searching the whole remaining 
 space if possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6981) java.io.EOFException from Cassandra when doing select

2014-04-10 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965328#comment-13965328
 ] 

Martin Bligh commented on CASSANDRA-6981:
-

So this is a little tricky, because it's proprietary data and I've changed 
things around a bit since then. Basically, on a desktop machine with 32GB of 
RAM and just one disk (regular HDD, not SSD), I created about 16 tables, all 
the same, each with about 5 text fields and 5 binary fields. Most of those 
fields had a secondary index. Then I inserted into all the tables in parallel.

I'm aware this isn't a great design scheme, but it certainly shouldn't fall 
over like this.



 java.io.EOFException from Cassandra when doing select
 -

 Key: CASSANDRA-6981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6981
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.6, Oracle Java version 1.7.0_51, Linux 
 Mint 16
Reporter: Martin Bligh

 Cassandra 2.0.6, Oracle Java version 1.7.0_51, Linux Mint 16
 I have a cassandra keyspace with about 12 tables that are all the same.
 If I load 100,000 rows or so into a couple of those tables in Cassandra, it 
 works fine.
 If I load a larger dataset, after a while one of the tables won't do lookups 
 any more (not always the same one).
 {noformat}
 SELECT recv_time,symbol from table6 where mid='S-AUR01-20140324A-1221';
 {noformat}
 results in "Request did not complete within rpc_timeout."
 where mid is the primary key (varchar). If I look at the logs, it has an 
 EOFException ... presumably it's running out of some resource (it's 
 definitely not out of disk space)
 Sometimes it does this on secondary indexes too: dropping and rebuilding the 
 index will fix it for a while. When it's broken, it seems like only one 
 particular lookup key causes timeouts (and the EOFException every time) - 
 other lookups work fine. I presume the index is corrupt somehow.
 {noformat}
 ERROR [ReadStage:110] 2014-04-03 12:39:47,018 CassandraDaemon.java (line 196) 
 Exception in thread Thread[ReadStage:110,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at 
 org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at 

[jira] [Comment Edited] (CASSANDRA-6981) java.io.EOFException from Cassandra when doing select

2014-04-10 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965328#comment-13965328
 ] 

Martin Bligh edited comment on CASSANDRA-6981 at 4/10/14 1:23 PM:
--

So this is a little tricky, because it's proprietary data and I've changed 
things around a bit since then. Basically, on a desktop machine with 32GB of 
RAM and just one disk (regular HDD, not SSD), I created about 16 tables, all 
the same, each with about 5 text fields and 5 binary fields. Most of those 
fields had a secondary index. Then I inserted into all the tables in parallel.

I'm aware this isn't a great design scheme, but it certainly shouldn't fall 
over like this.

PS. I've never seen this again since disabling mmap access, but am fairly sure 
that's not great for performance.


was (Author: mbligh):
So this is a little tricky, because it's proprietary data and I've changed 
things around a bit since then. Basically what I was doing was on a desktop 
machine with 32GB of RAM and just one disk (regular HDD, not SSD), created 
about 16 tables, all the same, each with about 5 text fields and 5 binary 
fields. Most of those fields had a secondary index. Then insert into all the 
tables in parallel. 

I'm aware this isn't a great design scheme, but it certainly shouldn't fall 
over like this 



 java.io.EOFException from Cassandra when doing select
 -

 Key: CASSANDRA-6981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6981
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.6, Oracle Java version 1.7.0_51, Linux 
 Mint 16
Reporter: Martin Bligh

 Cassandra 2.0.6, Oracle Java version 1.7.0_51, Linux Mint 16
 I have a cassandra keyspace with about 12 tables that are all the same.
 If I load 100,000 rows or so into a couple of those tables in Cassandra, it 
 works fine.
 If I load a larger dataset, after a while one of the tables won't do lookups 
 any more (not always the same one).
 {noformat}
 SELECT recv_time,symbol from table6 where mid='S-AUR01-20140324A-1221';
 {noformat}
 results in "Request did not complete within rpc_timeout."
 where mid is the primary key (varchar). If I look at the logs, it has an 
 EOFException ... presumably it's running out of some resource (it's 
 definitely not out of disk space)
 Sometimes it does this on secondary indexes too: dropping and rebuilding the 
 index will fix it for a while. When it's broken, it seems like only one 
 particular lookup key causes timeouts (and the EOFException every time) - 
 other lookups work fine. I presume the index is corrupt somehow.
 {noformat}
 ERROR [ReadStage:110] 2014-04-03 12:39:47,018 CassandraDaemon.java (line 196) 
 Exception in thread Thread[ReadStage:110,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at 
 org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
 at 
 

[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-04-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965376#comment-13965376
 ] 

Marcus Eriksson commented on CASSANDRA-6696:


btw, being able to not care about locations while compacting means we can't 
really keep having a separate flush directory, since the data flushed to a 
directory would stay there forever. wdyt: is it worth keeping flush directories 
and DiskAwareWriter everywhere, or should we drop support for a separate flush 
dir? With flushing being spread out over all disks, the advantages of having a 
separate flush dir are not as big.

 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
 Fix For: 3.0


 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. The same is true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it got compacted away in nodes A 
 and B together with the actual data. So there is no trace of this row column 
 in nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 becomes corrupt and was replaced with new empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2014-04-10 Thread Oleg Poleshuk (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965394#comment-13965394
 ] 

Oleg Poleshuk commented on CASSANDRA-5797:
--

Is there a way to change serial consistency level from CQL / Java driver?
http://stackoverflow.com/questions/22666911/cassandra-cql-consistency-during-cas-operations
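
For reference, a minimal sketch using the DataStax Java driver 2.0, where 
Statement#setSerialConsistencyLevel controls the Paxos phase of a conditional 
update (the contact point, keyspace, and table below are hypothetical):

{noformat}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class LocalSerialCasExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("test");

        // The serial consistency level applies only to the CAS (Paxos) phase.
        SimpleStatement cas = new SimpleStatement(
                "UPDATE users SET email = 'a@b.c' WHERE id = 1 IF email = null");
        cas.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);

        ResultSet rs = session.execute(cas);
        System.out.println(rs.one().getBool("[applied]"));
        cluster.close();
    }
}
{noformat}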

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0 beta 1
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0 rc1

 Attachments: 0001-Thrift-generated-files.txt, 
 0002-Add-LOCAL_SERIAL-CL.txt, 0003-CQL-and-native-protocol-changes.txt


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (CASSANDRA-5797) DC-local CAS

2014-04-10 Thread Oleg Poleshuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Poleshuk updated CASSANDRA-5797:
-

Comment: was deleted

(was: Is there a way to change serial consistency level from CQL / Java driver?
http://stackoverflow.com/questions/22666911/cassandra-cql-consistency-during-cas-operations)

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0 beta 1
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0 rc1

 Attachments: 0001-Thrift-generated-files.txt, 
 0002-Add-LOCAL_SERIAL-CL.txt, 0003-CQL-and-native-protocol-changes.txt


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6997) Startup Error

2014-04-10 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965408#comment-13965408
 ] 

Brandon Williams commented on CASSANDRA-6997:
-

IncomingTcpConnection wouldn't even get used with a single node, unless some 
other process is trying to connect to it for some reason.

 Startup Error
 -

 Key: CASSANDRA-6997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6997
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago)
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 RAM : 120 GB
 CPU core : 16
 Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz
Reporter: Varun Tahin
Priority: Minor
 Fix For: 2.0.4


 ERROR Log  : 
 root@atca11 bin]# ERROR 11:28:46,298 Exception in thread 
 Thread[Thread-2,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,329 Exception in thread Thread[Thread-3,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,337 Exception in thread Thread[Thread-4,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,345 Exception in thread Thread[Thread-5,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)
 ERROR 11:28:46,368 Exception in thread Thread[Thread-6,5,main]
 java.lang.UnsupportedOperationException: Unable to read obsolete message 
 version 4; the earliest version supported is 1.2.0
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleLegacyVersion(IncomingTcpConnection.java:136)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:72)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-04-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965418#comment-13965418
 ] 

Benedict commented on CASSANDRA-6696:
-

+1 on dropping the separate flush dir. This is a better solution IMO: we get 
the full parallelism of all the available disks.

bq. do you mean having a background job move data around after upgrade

Yes, I think this would be preferable. Blocking at startup would make a rolling 
upgrade much too painful. If we mark all old sstables as compacting at startup, 
we can safely rewrite them in the background, and not worry about them 
violating our assumptions/constraints, since they're not eligible for regular 
compaction.


 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
 Fix For: 3.0


 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. The same is true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it got compacted away in nodes A 
 and B together with the actual data. So there is no trace of this row column 
 in nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 becomes corrupt and was replaced with new empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7021) With tracing enabled, queries should still be recorded when using prepared and batch statements

2014-04-10 Thread Bill Joyce (JIRA)
Bill Joyce created CASSANDRA-7021:
-

 Summary: With tracing enabled, queries should still be recorded 
when using prepared and batch statements
 Key: CASSANDRA-7021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7021
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: C* 2.0.6 running on Ubuntu 12.04
Reporter: Bill Joyce
Priority: Minor


I've enabled tracing on my cluster and am analyzing data in the 
system_traces.sessions table. Single statement, non-prepared queries show up 
with data in the 'parameters' field like 'query=select * from tablename where 
x=1' and the request field is execute_cql3_query. But batches have null in the 
parameters field and prepared statements just have 'page size=5000' in the 
parameters field (the request field values are 'Execute batch of CQL3 queries' 
and 'Execute CQL3 prepared query'). Please include the actual query text with 
prepared and batch statements. This will make performance analysis much easier 
so I can do things like sort by duration and find my most expensive queries.
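
For context, a small sketch of reading the sessions table with the DataStax 
Java driver (hypothetical connection details; duration is in microseconds, and 
sorting by duration has to happen client-side since CQL cannot ORDER BY it):

{noformat}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class TraceSessionsDump
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        for (Row row : session.execute(
                "SELECT session_id, duration, request, parameters FROM system_traces.sessions"))
        {
            // parameters is where the raw query text would need to appear
            // for prepared and batch statements per this request.
            System.out.printf("%s %8d us %s %s%n",
                              row.getUUID("session_id"),
                              row.getInt("duration"),
                              row.getString("request"),
                              row.getMap("parameters", String.class, String.class));
        }
        cluster.close();
    }
}
{noformat}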



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7022) Consider an option to reduce tracing cost by writing only to sessions table (not events)

2014-04-10 Thread Bill Joyce (JIRA)
Bill Joyce created CASSANDRA-7022:
-

 Summary: Consider an option to reduce tracing cost by writing only 
to sessions table (not events)
 Key: CASSANDRA-7022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7022
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: C* 2.0.6 running on Ubuntu 12.04
Reporter: Bill Joyce
Priority: Minor


With MySQL and SQL Server, I can profile all queries in high traffic production 
environments. I'm assuming the bulk of the C* tracing cost comes in writing to 
the system_traces.events table, so it would be great to have an option to write 
just the system_traces.session info if that allows me to run 'nodetool 
settraceprobability' with a higher probability (ideally a probability of 1). 
This along with CASSANDRA-7021 would go a long way in giving us performance 
analysis closer to what can be done with more mature back ends.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965465#comment-13965465
 ] 

Jonathan Ellis commented on CASSANDRA-6694:
---

bq. why we can't have a simple implementation of the cell which has one buffer 
+ metadata about component sizes (which could also be encoded) instead of 
having buffer per component in the name (if composite) + buffer for value + 
long timestamp

I think this is the key question so I want to back out of the Impl rabbit hole 
for a minute to address that.  This would absolutely simplify things a great 
deal in terms of the Allocator design.  The problem is that it has a much 
bigger impact on the rest of the code, and the consensus from the last ticket 
was, "We want to have off-heap as an option, but we want the default to stay 
on-heap and change as little as possible."  So, I agree that what you are 
saying is cleaner, but I think we should push it out to 3.0 given the 
constraints for 2.1.


 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965481#comment-13965481
 ] 

Jonathan Ellis commented on CASSANDRA-6694:
---

bq. Can we decide if we actually want to have Cell (and derivatives) as this 
patch set proposes (with static Impl static classes which is OOP unfriendly to 
say the least) or do something else (question raised back in CASSANDRA-6689)?

If we accept the NativeCell/BufferCell distinction above, then the combination 
of optimization and lack of multiple inheritance drives this design or 
something like it.  Specifically, we want NativeCell to be both a Cell and a 
NativeAllocation, so Benedict has (reasonably, IMO) chosen to extend NA and 
leave the Cell common methods in a utility Impl class.  (IMO the right OOP 
approach would be to extend Cell, making it an Abstract class instead of an 
Interface, and have NativeCell have a NA as a field instead of extending it.  
But then we're increasing the memory overhead of a NC by almost 50% which 
directly impacts our main goal here.)

I can see reasonable alternatives for where exactly the static utility methods 
live: put them in the BufferCell classes and have the Native classes reuse them 
that way, or put them in a separate class entirely. I'm okay with either of 
those options, but I don't really see them as strictly better than the Impl 
choice (which has the advantage of encapsulating which interface specifically 
they deal with, distinct from the Buffer or Native subclasses).
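
To make the trade-off concrete, a hypothetical sketch (not the actual classes) 
of the two shapes being compared: inheriting the allocation versus holding it 
as a field, where the field version pays an extra object header and reference 
per cell.

{noformat}
// Hypothetical shapes only, to illustrate the memory argument.
class NativeAllocation
{
    long peer; // address of the off-heap allocation
}

interface Cell
{
    long timestamp();
}

// The patch's choice: NativeCell is both a Cell and a NativeAllocation, so
// Cell stays an interface and shared logic lives in static Impl helpers.
class NativeCell extends NativeAllocation implements Cell
{
    public long timestamp() { return 0L; } // placeholder
}

// The "pure OOP" alternative: hold the allocation as a field. Each cell then
// carries a second object (header plus reference), which per the estimate
// above costs almost 50% more memory per NativeCell.
class NativeCellWithField implements Cell
{
    final NativeAllocation allocation = new NativeAllocation();
    public long timestamp() { return 0L; } // placeholder
}
{noformat}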


 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6326) Snapshot should create manifest file

2014-04-10 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965483#comment-13965483
 ] 

Nick Bailey commented on CASSANDRA-6326:


Well each snapshot has a schema associated with it. If you drop a column family 
you need to recreate it before you can restore a snapshot to it. Presumably we 
can pull the relevant information out of the system.schema_* tables that would 
allow recreating a schema before restoring.

 Snapshot should create manifest file 
 -

 Key: CASSANDRA-6326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6326
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: trunk-6326.diff


 We should create a manifest file as part of the snapshot which should contain 
 all the stables included in the snapshot. 
 This will be very important for systems consuming this snapshot as they can 
 validate the fact that they got the complete snapshot. 
 If Cassandra crashes mid way creating a snapshot, I think it will create an 
 incomplete snapshot. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965487#comment-13965487
 ] 

Jonathan Ellis commented on CASSANDRA-6694:
---

bq. Is it essential to move everything to the separate package .data ?

If I may bikeshed a bit, "data" is a fairly meaningless term in the Cassandra 
context and I would prefer to name it "cells" instead.  Otherwise, I think it's 
a reasonable refactor.

My initial reaction was, "moving things to different packages should totally be 
a separate commit," but the new interfaces don't share a whole lot with the old 
classes other than the name.  So even that doesn't really bother me, but if 
Pavel or Marcus still wants that to facilitate review then it's a reasonable 
request.


 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965493#comment-13965493
 ] 

Benedict commented on CASSANDRA-6694:
-

I agree "data" is a bit meaningless - and, in fact, I started with "cells". But 
it includes DecoratedKey / RowPosition, so "data" became the easiest, most 
encompassing term. More than open to better suggestions.

 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965502#comment-13965502
 ] 

Jonathan Ellis commented on CASSANDRA-6694:
---

Simple solution: leave DK and RP where they are. :)

 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix ABSC.SearchIterator#next() (CASSANDRA-6933 follow-up)

2014-04-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk f6671a7ac -> 333986428


Fix ABSC.SearchIterator#next() (CASSANDRA-6933 follow-up)

patch by Benedict Elliott Smith; reviewed by Aleksey Yeschenko for
CASSANDRA-6933


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33398642
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33398642
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33398642

Branch: refs/heads/trunk
Commit: 333986428e6556f68c5889046d79afad8cb8e8f9
Parents: f6671a7
Author: Benedict Elliott Smith git...@sub.laerad.com
Authored: Thu Apr 10 19:21:42 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Apr 10 19:24:10 2014 +0300

--
 src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33398642/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java 
b/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
index d79edd3..dcb6a37 100644
--- a/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/ArrayBackedSortedColumns.java
@@ -465,8 +465,9 @@ public class ArrayBackedSortedColumns extends ColumnFamily
 
             // optimize for runs of sequential matches, as in CollationController
             // checking to see if we've found the desired cells yet (CASSANDRA-6933)
-            if (metadata.comparator.compare(name, cells[i].name()) == 0)
-                return cells[i++];
+            int c = metadata.comparator.compare(name, cells[i].name());
+            if (c <= 0)
+                return c < 0 ? null : cells[i++];
 
             // use range to manually force a better bsearch pivot by breaking it into two calls:
             // first for i..i+range, then i+range..size if necessary.



[jira] [Resolved] (CASSANDRA-6933) Optimise Read Comparison Costs in collectTimeOrderedData

2014-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-6933.
--

Resolution: Fixed

Indeed. Committed.

 Optimise Read Comparison Costs in collectTimeOrderedData
 

 Key: CASSANDRA-6933
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6933
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
  Labels: performance
 Fix For: 3.0

 Attachments: 6933-v3.txt, 6933-v4.txt, 6933-v5.txt, 6933.v5fix.txt


 Introduce a new SearchIterator construct, which can be obtained from a 
 ColumnFamily, which permits efficiently iterating a subset of the cells in 
 ascending order. Essentially, it saves the previously visited position and 
 searches from there, but also tries to avoid searching the whole remaining 
 space if possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7023) Nodetool clearsnapshot should not remove all snapshots by default

2014-04-10 Thread Sucwinder Bassi (JIRA)
Sucwinder Bassi created CASSANDRA-7023:
--

 Summary: Nodetool clearsnapshot should not remove all snapshots by 
default
 Key: CASSANDRA-7023
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7023
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Sucwinder Bassi
Priority: Minor


Running nodetool clearsnapshot removes all snapshot files by default. Since 
this removes data, it shouldn't silently remove every snapshot; if you really 
want to remove them all, there should be a force option. A list option showing 
the available snapshots would also be helpful, and its output could be piped 
into the nodetool clearsnapshot command.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965530#comment-13965530
 ] 

Tyler Hobbs commented on CASSANDRA-7000:


bq.  While doing so, validation compaction does not acquire reference since 
SSTables are from snapshot.

Opening the SSTableReader implicitly involves acquiring a reference (they 
always start with a reference count of 1).

bq. Shouldn't SSTableReader be closable regardless of reference acquired?

I don't think so, but perhaps I'm missing a case where it would make sense to 
close an SSTable with a non-zero reference count.  I believe the correct patch 
would be to release the reference on the SSTableReaders after the validation 
compaction has completed instead of directly calling close().
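
A minimal sketch of that discipline (simplified; not the actual SSTableReader 
code): the count starts at 1, acquire/release bracket each use, and resources 
are tidied exactly once when the count reaches zero, so callers release their 
reference rather than calling close().

{noformat}
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedReader
{
    private final AtomicInteger references = new AtomicInteger(1);

    boolean acquireReference()
    {
        while (true)
        {
            int n = references.get();
            if (n <= 0)
                return false; // already tidied; caller must not use it
            if (references.compareAndSet(n, n + 1))
                return true;
        }
    }

    void releaseReference()
    {
        if (references.decrementAndGet() == 0)
            tidy(); // runs exactly once, when the last reference is released
    }

    private void tidy()
    {
        // free buffers, delete obsolete files, etc.
    }
}
{noformat}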

 Assertion in SSTableReader during repair.
 -

 Key: CASSANDRA-7000
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7000
 Project: Cassandra
  Issue Type: Bug
Reporter: Ben Chan
Assignee: Ben Chan
 Attachments: sstablereader-assertion-bisect-helper, 
 sstablereader-assertion-bisect-helper-v2, sstablereader-assertion.patch


 I ran a {{git bisect run}} using the attached bisect script. Repro code:
 {noformat}
 # 5dfe241: trunk as of my git bisect run
 # 345772d: empirically determined good commit.
 git bisect start 5dfe241 345772d
 git bisect run ./sstablereader-assertion-bisect-helper-v2
 {noformat}
 The first failing commit is 5ebadc1 (first parent of {{refs/bisect/bad}}).
 Prior to 5ebadc1, SSTableReader#close() never checked reference count. After 
 5ebadc1, there was an assertion for {{references.get() == 0}}. However, since 
 the reference count is initialized to 1, a SSTableReader#close() was always 
 guaranteed to either throw an AssertionError or to be a second call to 
 SSTableReader#tidy() on the same SSTableReader.
 The attached patch chooses an in-between behavior. It requires the reference 
 count to match the initialization value of 1 for SSTableReader#close(), and 
 the same behavior as 5ebadc1 otherwise.
 This allows repair to finish successfully, but I'm not 100% certain what the 
 desired behavior is for SSTableReader#close(). Should it close without regard 
 to reference count, as it did pre-5ebadc1?
 Edit: accidentally uploaded a flawed version of 
 {{sstablereader-assertion-bisect-helper}} (doesn't work out-of-the-box with 
 {{git bisect}}).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7023) Nodetool clearsnapshot should not remove all snapshots by default

2014-04-10 Thread Sucwinder Bassi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sucwinder Bassi updated CASSANDRA-7023:
---

Issue Type: New Feature  (was: Bug)

 Nodetool clearsnapshot should not remove all snapshots by default
 -

 Key: CASSANDRA-7023
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7023
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Sucwinder Bassi
Priority: Minor

 Running nodetool clearsnapshot removes all snapshot files by default. Since 
 this deletes data, the command shouldn't silently remove every snapshot; 
 removing all snapshots should require an explicit force option. A list 
 option showing the available snapshots would also be helpful, and its output 
 could be piped into the nodetool clearsnapshot command.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7023) Nodetool clearsnapshot should not remove all snapshots by default

2014-04-10 Thread Sucwinder Bassi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965538#comment-13965538
 ] 

Sucwinder Bassi commented on CASSANDRA-7023:


Looks like the list snapshot option has already been suggested:

https://issues.apache.org/jira/browse/CASSANDRA-5742

If the listsnapshots output could be piped into clearsnapshot, that would be 
helpful.

 Nodetool clearsnapshot should not remove all snapshots by default
 -

 Key: CASSANDRA-7023
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7023
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Sucwinder Bassi
Priority: Minor

 Running nodetool clearsnapshot removes all snapshot files by default. Since 
 this deletes data, the command shouldn't silently remove every snapshot; 
 removing all snapshots should require an explicit force option. A list 
 option showing the available snapshots would also be helpful, and its output 
 could be piped into the nodetool clearsnapshot command.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7002) concurrent_schema_changes_test snapshot_test dtest needs to account for hashed data dirs in 2.1

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7002:
--

Assignee: Brandon Williams
Priority: Blocker  (was: Major)

 concurrent_schema_changes_test snapshot_test dtest needs to account for 
 hashed data dirs in 2.1
 ---

 Key: CASSANDRA-7002
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7002
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Brandon Williams
Priority: Blocker

 {noformat}
 ==
 ERROR: snapshot_test 
 (concurrent_schema_changes_test.TestConcurrentSchemaChanges)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/concurrent_schema_changes_test.py", line 299, in snapshot_test
 for f in os.listdir(dirr):
 OSError: [Errno 2] No such file or directory: 
 '/tmp/dtest-VZwotc/test/node1/data/ks_ns2/cf_ns2'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7005) repair_test dtest fails on 2.1

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7005:
--

Fix Version/s: 2.1 beta2
 Assignee: Yuki Morishita
 Priority: Blocker  (was: Major)

 repair_test dtest fails on 2.1
 --

 Key: CASSANDRA-7005
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7005
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 2.1 beta2


 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 repair_test.py
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 simple_repair_order_preserving_test (repair_test.TestRepair) ... cluster ccm 
 directory: /tmp/dtest-BVfye7
 Starting cluster..
 Inserting data...
 Checking data on node3...
 Checking data on node1...
 Checking data on node2...
 starting repair...
 [2014-04-08 13:44:31,424] Starting repair command #1, repairing 3 ranges for 
 keyspace ks (seq=true, full=true)
 [2014-04-08 13:44:32,748] Repair session d262e390-bf4d-11e3-a482-75998baadb41 
 for range (00,0113427455640312821154458202477256070484] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #d262e390-bf4d-11e3-a482-75998baadb41 on ks/cf, 
 (00,0113427455640312821154458202477256070484]] Validation failed in /127.0.0.2
 [2014-04-08 13:44:32,751] Repair session d2b98f10-bf4d-11e3-a482-75998baadb41 
 for range 
 (0113427455640312821154458202477256070484,56713727820156410577229101238628035242]
  failed with error org.apache.cassandra.exceptions.RepairException: [repair 
 #d2b98f10-bf4d-11e3-a482-75998baadb41 on ks/cf, 
 (0113427455640312821154458202477256070484,56713727820156410577229101238628035242]]
  Validation failed in /127.0.0.2
 [2014-04-08 13:44:32,753] Repair session d2dca770-bf4d-11e3-a482-75998baadb41 
 for range (56713727820156410577229101238628035242,00] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #d2dca770-bf4d-11e3-a482-75998baadb41 on ks/cf, 
 (56713727820156410577229101238628035242,00]] Validation failed in /127.0.0.2
 [2014-04-08 13:44:32,753] Repair command #1 finished
 [2014-04-08 13:44:32,770] Nothing to repair for keyspace 'system'
 [2014-04-08 13:44:32,783] Starting repair command #2, repairing 2 ranges for 
 keyspace system_traces (seq=true, full=true)
 [2014-04-08 13:44:34,635] Repair session d3310900-bf4d-11e3-a482-75998baadb41 
 for range 
 (0113427455640312821154458202477256070484,56713727820156410577229101238628035242]
  finished
 [2014-04-08 13:44:34,640] Repair session d3f80280-bf4d-11e3-a482-75998baadb41 
 for range (56713727820156410577229101238628035242,00] finished
 [2014-04-08 13:44:34,640] Repair command #2 finished
 Repair time: 4.63053512573
 FAIL
 ERROR
 simple_repair_test (repair_test.TestRepair) ... cluster ccm directory: 
 /tmp/dtest-_L5lTP
 Starting cluster..
 Inserting data...
 Checking data on node3...
 Checking data on node1...
 Checking data on node2...
 starting repair...
 [2014-04-08 13:47:14,109] Starting repair command #1, repairing 3 ranges for 
 keyspace ks (seq=true, full=true)
 [2014-04-08 13:47:15,291] Repair session 335a5840-bf4e-11e3-b691-75998baadb41 
 for range (-3074457345618258603,3074457345618258602] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #335a5840-bf4e-11e3-b691-75998baadb41 on ks/cf, 
 (-3074457345618258603,3074457345618258602]] Validation failed in /127.0.0.2
 [2014-04-08 13:47:15,292] Repair session 33ad0c20-bf4e-11e3-b691-75998baadb41 
 for range (-9223372036854775808,-3074457345618258603] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #33ad0c20-bf4e-11e3-b691-75998baadb41 on ks/cf, 
 (-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
 [2014-04-08 13:47:15,295] Repair session 33e978e0-bf4e-11e3-b691-75998baadb41 
 for range (3074457345618258602,-9223372036854775808] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #33e978e0-bf4e-11e3-b691-75998baadb41 on ks/cf, 
 (3074457345618258602,-9223372036854775808]] Validation failed in /127.0.0.2
 [2014-04-08 13:47:15,295] Repair command #1 finished
 [2014-04-08 13:47:15,307] Nothing to repair for keyspace 'system'
 [2014-04-08 13:47:15,322] Starting repair command #2, repairing 2 ranges for 
 keyspace system_traces (seq=true, full=true)
 [2014-04-08 13:47:15,983] Repair session 3412f9e0-bf4e-11e3-b691-75998baadb41 
 for range (-3074457345618258603,3074457345618258602] finished
 [2014-04-08 13:47:15,988] Repair session 345d9770-bf4e-11e3-b691-75998baadb41 
 for range (3074457345618258602,-9223372036854775808] finished
 [2014-04-08 13:47:15,988] Repair command #2 finished
 Repair time: 

[jira] [Updated] (CASSANDRA-7006) secondary_indexes_test test_6924 dtest fails on 2.1

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7006:
--

Fix Version/s: 2.1 beta2
 Assignee: Sam Tunnicliffe
 Priority: Blocker  (was: Major)

 secondary_indexes_test test_6924 dtest fails on 2.1
 ---

 Key: CASSANDRA-7006
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7006
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Sam Tunnicliffe
Priority: Blocker
 Fix For: 2.1 beta2


 {noformat}
 ==
 FAIL: test_6924 (secondary_indexes_test.TestSecondaryIndexes)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/secondary_indexes_test.py", line 135, in test_6924
 self.assertEqual(count,10)
 AssertionError: 7 != 10
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7009) topology_test dtest fails in 2.1

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7009:
--

Fix Version/s: 2.1 beta2
 Assignee: Brandon Williams
 Priority: Blocker  (was: Major)

 topology_test dtest fails in 2.1
 

 Key: CASSANDRA-7009
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7009
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Brandon Williams
Priority: Blocker
 Fix For: 2.1 beta2


 {noformat}
 $ export MAX_HEAP_SIZE=1G; export HEAP_NEWSIZE=256M; PRINT_DEBUG=true 
 nosetests --nocapture --nologcapture --verbosity=3 topology_test.py
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 decomission_test (topology_test.TestTopology) ... cluster ccm directory: 
 /tmp/dtest-UhiFiQ
 FAIL
 move_single_node_test (topology_test.TestTopology) ... cluster ccm directory: 
 /tmp/dtest-x1q7pp
 ok
 movement_test (topology_test.TestTopology) ... cluster ccm directory: 
 /tmp/dtest-t6AuXA
 error: For input string: \-9223372036854775808
 -- StackTrace --
 java.io.IOException: For input string: \-9223372036854775808
 at 
 org.apache.cassandra.service.StorageService.move(StorageService.java:3044)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 error: For input string: \-3074457345618258603
 -- StackTrace --
 java.io.IOException: For input string: \-3074457345618258603
 at 
 org.apache.cassandra.service.StorageService.move(StorageService.java:3044)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 

[jira] [Commented] (CASSANDRA-7008) upgrade_supercolumns_test dtest failing in 2.1

2014-04-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965553#comment-13965553
 ] 

Jonathan Ellis commented on CASSANDRA-7008:
---

Close this and track on the dtest side?

 upgrade_supercolumns_test dtest failing in 2.1
 --

 Key: CASSANDRA-7008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7008
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Ryan McGuire

 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 upgrade_supercolumns_test.py 
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 upgrade_with_index_creation_test (upgrade_supercolumns_test.TestSCUpgrade) 
 ... cluster ccm directory: /tmp/dtest-UWLi7s
 ERROR
 ==
 ERROR: upgrade_with_index_creation_test 
 (upgrade_supercolumns_test.TestSCUpgrade)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/upgrade_supercolumns_test.py", line 37, in upgrade_with_index_creation_test
 node1.start(wait_other_notice=True)
   File "/home/mshuler/git/ccm/ccmlib/node.py", line 427, in start
 raise NodeError("Error starting node %s" % self.name, process)
 NodeError: Error starting node node1
 --
 Ran 1 test in 69.320s
 FAILED (errors=1)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7008) upgrade_supercolumns_test dtest failing in 2.1

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7008:
--

Assignee: Ryan McGuire

 upgrade_supercolumns_test dtest failing in 2.1
 --

 Key: CASSANDRA-7008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7008
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Ryan McGuire

 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 upgrade_supercolumns_test.py 
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 upgrade_with_index_creation_test (upgrade_supercolumns_test.TestSCUpgrade) 
 ... cluster ccm directory: /tmp/dtest-UWLi7s
 ERROR
 ==
 ERROR: upgrade_with_index_creation_test 
 (upgrade_supercolumns_test.TestSCUpgrade)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/upgrade_supercolumns_test.py", line 37, in upgrade_with_index_creation_test
 node1.start(wait_other_notice=True)
   File "/home/mshuler/git/ccm/ccmlib/node.py", line 427, in start
 raise NodeError("Error starting node %s" % self.name, process)
 NodeError: Error starting node node1
 --
 Ran 1 test in 69.320s
 FAILED (errors=1)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965562#comment-13965562
 ] 

Benedict commented on CASSANDRA-6694:
-

Well, the only fly in that ointment is that they have Buffer and Native 
implementations also, and the DataAllocator allocates them as well as cells. 
So to separate them seems a bit strange - but I'm not too fussed tbh.

 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-7008) upgrade_supercolumns_test dtest failing in 2.1

2014-04-10 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire resolved CASSANDRA-7008.
-

Resolution: Not a Problem

closed; [dtest issue|https://github.com/riptano/cassandra-dtest/issues/38]

 upgrade_supercolumns_test dtest failing in 2.1
 --

 Key: CASSANDRA-7008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7008
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Ryan McGuire

 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 upgrade_supercolumns_test.py 
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 upgrade_with_index_creation_test (upgrade_supercolumns_test.TestSCUpgrade) 
 ... cluster ccm directory: /tmp/dtest-UWLi7s
 ERROR
 ==
 ERROR: upgrade_with_index_creation_test 
 (upgrade_supercolumns_test.TestSCUpgrade)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/upgrade_supercolumns_test.py", line 37, in upgrade_with_index_creation_test
 node1.start(wait_other_notice=True)
   File "/home/mshuler/git/ccm/ccmlib/node.py", line 427, in start
 raise NodeError("Error starting node %s" % self.name, process)
 NodeError: Error starting node node1
 --
 Ran 1 test in 69.320s
 FAILED (errors=1)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7023) Nodetool clearsnapshot should not remove all snapshots by default

2014-04-10 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965570#comment-13965570
 ] 

Brandon Williams commented on CASSANDRA-7023:
-

{noformat}
# for x in `bin/nodetool listsnapshots | grep Keyspace1 | cut -d' ' -f1`; do 
bin/nodetool clearsnapshot -t $x; done
Requested clearing snapshot(s) for [all keyspaces] with snapshot name 
[1397150620696]
#
{noformat}

 Nodetool clearsnapshot should not remove all snapshots by default
 -

 Key: CASSANDRA-7023
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7023
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Sucwinder Bassi
Priority: Minor

 Running nodetool clearsnapshot removes all snapshot files by default. Since 
 this deletes data, the command shouldn't silently remove every snapshot; 
 removing all snapshots should require an explicit force option. A list 
 option showing the available snapshots would also be helpful, and its output 
 could be piped into the nodetool clearsnapshot command.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2014-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reopened CASSANDRA-6405:
--

Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)

 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey
 Attachments: threaddump.txt


 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump when the issue was happening in the wild. 
 Something notable in the class instance counts as reported by jhat. Here are 
 the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 sstable_size_in_mb: 160
 Column Family: CommentTree
 SSTable count: 30
 SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
 

[jira] [Resolved] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2014-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-6405.
--

   Resolution: Fixed
Fix Version/s: 2.1 beta2
Reproduced In: 1.2.11, 1.1.7, 1.0.12  (was: 1.0.12, 1.1.7, 1.2.11)

CASSANDRA-6506 has been delayed until 3.0, but this issue is now actually 
resolved in 2.1 by the combination of the new memtable code and various 
counters++ commits (including, but not limited to, part of CASSANDRA-6506 and 
CASSANDRA-6953).

 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey
 Fix For: 2.1 beta2

 Attachments: threaddump.txt


 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump when the issue was happening in the wild. 
 Something notable in the class instance counts as reported by jhat. Here are 
 the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   

[jira] [Updated] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-10 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-7000:
--

Attachment: 7000-2.1-v2.txt

[~thobbs] I think you are right. Attached v2, which fixes validation 
compaction to release the reference on snapshot repair.

I also added a message to the AssertionError in SSTR#tidy, but explicitly 
throwing an exception (IllegalStateException, for example) may be nicer.

[~usrbincc] Can you check whether this fixes your test?

 Assertion in SSTableReader during repair.
 -

 Key: CASSANDRA-7000
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7000
 Project: Cassandra
  Issue Type: Bug
Reporter: Ben Chan
Assignee: Ben Chan
 Attachments: 7000-2.1-v2.txt, sstablereader-assertion-bisect-helper, 
 sstablereader-assertion-bisect-helper-v2, sstablereader-assertion.patch


 I ran a {{git bisect run}} using the attached bisect script. Repro code:
 {noformat}
 # 5dfe241: trunk as of my git bisect run
 # 345772d: empirically determined good commit.
 git bisect start 5dfe241 345772d
 git bisect run ./sstablereader-assertion-bisect-helper-v2
 {noformat}
 The first failing commit is 5ebadc1 (first parent of {{refs/bisect/bad}}).
 Prior to 5ebadc1, SSTableReader#close() never checked reference count. After 
 5ebadc1, there was an assertion for {{references.get() == 0}}. However, since 
 the reference count is initialized to 1, a SSTableReader#close() was always 
 guaranteed to either throw an AssertionError or to be a second call to 
 SSTableReader#tidy() on the same SSTableReader.
 The attached patch chooses an in-between behavior. It requires the reference 
 count to match the initialization value of 1 for SSTableReader#close(), and 
 the same behavior as 5ebadc1 otherwise.
 This allows repair to finish successfully, but I'm not 100% certain what the 
 desired behavior is for SSTableReader#close(). Should it close without regard 
 to reference count, as it did pre-5ebadc1?
 Edit: accidentally uploaded a flawed version of 
 {{sstablereader-assertion-bisect-helper}} (doesn't work out-of-the-box with 
 {{git bisect}}).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965749#comment-13965749
 ] 

Aleksey Yeschenko commented on CASSANDRA-6831:
--

[~mishail] So, do you want to delay review until then or not? (if yes, cancel 
the patch, please).

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17

 Attachments: cassandra-1.2-6831.patch


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the initial key, column1, column2 namings. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which point 
 they will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which 
 brings back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370.
 In cqlsh:
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh> USE test;
 cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-10 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965760#comment-13965760
 ] 

Mikhail Stepura commented on CASSANDRA-6831:


The patch is for 1.2 only, and it does fix the described problem. The patch is 
not applicable for 2.0/2.1. 

So, yes, the patch can be reviewed for 1.2, and we can track the changes for 
2.0/2.1 under a different JIRA issue.

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17

 Attachments: cassandra-1.2-6831.patch


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the initial key, column1, column2 namings. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which point 
 they will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which 
 brings back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370.
 In cqlsh:
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh> USE test;
 cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965772#comment-13965772
 ] 

Aleksey Yeschenko commented on CASSANDRA-6831:
--

TBH I think the right way to fix this is to reject any attempts from CLI/Thrift 
in general if a table has non-default aliases.

You should not be mixing schema changes from the CLI and cqlsh; stick to one.
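
A rough sketch of that kind of guard (hasNonDefaultAliases() is an assumed 
helper here, not an existing method):
{code}
// Hypothetical sketch: refuse CLI/Thrift schema updates on tables whose
// CQL3 column aliases differ from the generated defaults.
private static void rejectIfCql3Metadata(CFMetaData cfm) throws InvalidRequestException
{
    if (cfm.hasNonDefaultAliases()) // assumed helper
        throw new InvalidRequestException(
            String.format("Cannot modify %s via Thrift/CLI: it carries CQL3 metadata", cfm.cfName));
}
{code}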

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17

 Attachments: cassandra-1.2-6831.patch


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the initial key, column1, column2 namings. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which point 
 they will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which 
 brings back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370.
 In cqlsh:
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh> USE test;
 cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-10 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965775#comment-13965775
 ] 

Mikhail Stepura commented on CASSANDRA-6831:


That's the easiest way to fix the problem.

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17

 Attachments: cassandra-1.2-6831.patch


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the initial key, column1, column2 namings. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which point 
 they will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which 
 brings back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370.
 In cqlsh:
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh> USE test;
 cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965774#comment-13965774
 ] 

Aleksey Yeschenko commented on CASSANDRA-6831:
--

That is, treat these tables (that have CQL metadata with them) as CQL3 tables, 
even if they have WITH COMPACT STORAGE, and extend CASSANDRA-6370 to them.

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17

 Attachments: cassandra-1.2-6831.patch


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the initial key, column1, column2 namings. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which point 
 they will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which 
 brings back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370.
 In cqlsh:
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh> USE test;
 cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[Cassandra Wiki] Update of HowToContribute by TylerHobbs

2014-04-10 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The HowToContribute page has been changed by TylerHobbs:
https://wiki.apache.org/cassandra/HowToContribute?action=diff&rev1=52&rev2=53

Comment:
Show how to run a specific set of tests

* Verify that you follow Cassandra's CodeStyle.
* Verify that your change works by adding a unit test.
* Make sure all tests pass by running ant test in the project directory.
+ * You can run specific tests like so: `ant test -Dtest.name=SSTableReaderTest`
* For testing multi-node behavior, https://github.com/pcmanus/ccm is useful
   1. When you're happy with the result create a patch:
* git add any new or modified file


[jira] [Commented] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965794#comment-13965794
 ] 

Tyler Hobbs commented on CASSANDRA-7000:


+1 on the patch.

bq. explicitly throwing an exception (IllegalStateException, for example) may 
be nicer

Sounds reasonable to me.

 Assertion in SSTableReader during repair.
 -

 Key: CASSANDRA-7000
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7000
 Project: Cassandra
  Issue Type: Bug
Reporter: Ben Chan
Assignee: Ben Chan
 Attachments: 7000-2.1-v2.txt, sstablereader-assertion-bisect-helper, 
 sstablereader-assertion-bisect-helper-v2, sstablereader-assertion.patch


 I ran a {{git bisect run}} using the attached bisect script. Repro code:
 {noformat}
 # 5dfe241: trunk as of my git bisect run
 # 345772d: empirically determined good commit.
 git bisect start 5dfe241 345772d
 git bisect run ./sstablereader-assertion-bisect-helper-v2
 {noformat}
 The first failing commit is 5ebadc1 (first parent of {{refs/bisect/bad}}).
 Prior to 5ebadc1, SSTableReader#close() never checked the reference count. After 
 5ebadc1, there was an assertion that {{references.get() == 0}}. However, since 
 the reference count is initialized to 1, an SSTableReader#close() was always 
 guaranteed either to throw an AssertionError or to be a second call to 
 SSTableReader#tidy() on the same SSTableReader.
 The attached patch chooses an in-between behavior. It requires the reference 
 count to match the initialization value of 1 for SSTableReader#close(), and 
 keeps the same behavior as 5ebadc1 otherwise.
 This allows repair to finish successfully, but I'm not 100% certain what the 
 desired behavior is for SSTableReader#close(). Should it close without regard 
 to reference count, as it did pre-5ebadc1?
 Edit: accidentally uploaded a flawed version of 
 {{sstablereader-assertion-bisect-helper}} (doesn't work out-of-the-box with 
 {{git bisect}}).
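 For illustration, a minimal self-contained sketch (a hypothetical class, not 
 the Cassandra source) of the reference-counting scheme described above, with 
 close() throwing the explicit IllegalStateException suggested in the comments:
 {code}
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedReader
{
    // starts at 1: the "self" reference held by whoever opened the reader
    private final AtomicInteger references = new AtomicInteger(1);

    public boolean acquireReference()
    {
        while (true)
        {
            int n = references.get();
            if (n <= 0)
                return false; // already tidied; caller must not touch it
            if (references.compareAndSet(n, n + 1))
                return true;
        }
    }

    public void releaseReference()
    {
        if (references.decrementAndGet() == 0)
            tidy(); // last reference gone: free resources exactly once
    }

    public void close()
    {
        // the "in-between" behavior: close() is only legal while the initial
        // self-reference is the sole reference outstanding
        if (references.get() != 1)
            throw new IllegalStateException("closing with outstanding references");
        releaseReference();
    }

    private void tidy()
    {
        // release files / off-heap memory here
    }
}
 {code}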





[jira] [Updated] (CASSANDRA-7024) Create snapshot selectively during sequential repair

2014-04-10 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-7024:
--

Summary: Create snapshot selectively during sequential repair  (was: Snapshot repair)

 Create snapshot selectively during sequential repair 
 -

 Key: CASSANDRA-7024
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7024
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 
 0001-Only-snapshot-SSTables-related-to-validating-range.patch


 When doing snapshot repair, we currently snapshot all SSTables, open them, and 
 use only part of them to build the MerkleTree.
 Instead, we can snapshot and use only the SSTables needed to build the 
 MerkleTree for the range of interest.
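 The idea in the attached patch, as a minimal self-contained sketch (the types 
 and names here are hypothetical stand-ins, not the patch itself; real 
 Cassandra selects over Token ranges and must handle wrap-around, which this 
 ignores):
 {code}
import java.util.ArrayList;
import java.util.List;

class SnapshotSelection
{
    static class SSTable
    {
        final long minToken, maxToken; // token span covered by this SSTable
        SSTable(long minToken, long maxToken) { this.minToken = minToken; this.maxToken = maxToken; }
    }

    // keep only the SSTables whose token span overlaps the range being validated
    static List<SSTable> sstablesToSnapshot(List<SSTable> all, long rangeStart, long rangeEnd)
    {
        List<SSTable> selected = new ArrayList<SSTable>();
        for (SSTable sstable : all)
            if (sstable.maxToken >= rangeStart && sstable.minToken <= rangeEnd)
                selected.add(sstable);
        return selected;
    }
}
 {code}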





[jira] [Created] (CASSANDRA-7024) Snapshot repair

2014-04-10 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-7024:
-

 Summary: Snapshot repair 
 Key: CASSANDRA-7024
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7024
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2
 Attachments: 
0001-Only-snapshot-SSTables-related-to-validating-range.patch

When doing snapshot repair, we currently snapshot all SSTables, open them, and 
use only part of them to build the MerkleTree.

Instead, we can snapshot and use only the SSTables needed to build the 
MerkleTree for the range of interest.





[jira] [Updated] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7000:


Attachment: 7000.supplement.txt

Not sure if we want to tack this onto the same ticket, but the same basic 
problem seems to occur elsewhere as well: some referrers of SSTableReader do 
not acquire a reference before accessing resources that will be closed when 
the ref count hits zero. Specifically, I found CFS.estimatedKeysForRange and 
CFS.keySamples; I couldn't see any others.

The latter is the cause of the currently failing unit test for 2.1.
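A hedged sketch of the corresponding fix pattern, reusing the hypothetical 
RefCountedReader sketched under the CASSANDRA-7000 comment above: acquire 
before use, release in a finally block, and skip readers that were closed 
underneath us:
{code}
// stand-in for CFS.estimatedKeysForRange / CFS.keySamples style iteration
static long estimateOverReaders(Iterable<RefCountedReader> readers)
{
    long estimate = 0;
    for (RefCountedReader reader : readers)
    {
        if (!reader.acquireReference())
            continue; // closed under us: skip rather than touch freed resources
        try
        {
            estimate += 1; // stand-in for reading index summaries / key samples
        }
        finally
        {
            reader.releaseReference();
        }
    }
    return estimate;
}
{code}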

 Assertion in SSTableReader during repair.
 -

 Key: CASSANDRA-7000
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7000
 Project: Cassandra
  Issue Type: Bug
Reporter: Ben Chan
Assignee: Ben Chan
 Attachments: 7000-2.1-v2.txt, 7000.supplement.txt, 
 sstablereader-assertion-bisect-helper, 
 sstablereader-assertion-bisect-helper-v2, sstablereader-assertion.patch


 I ran a {{git bisect run}} using the attached bisect script. Repro code:
 {noformat}
 # 5dfe241: trunk as of my git bisect run
 # 345772d: empirically determined good commit.
 git bisect start 5dfe241 345772d
 git bisect run ./sstablereader-assertion-bisect-helper-v2
 {noformat}
 The first failing commit is 5ebadc1 (first parent of {{refs/bisect/bad}}).
 Prior to 5ebadc1, SSTableReader#close() never checked the reference count. After 
 5ebadc1, there was an assertion that {{references.get() == 0}}. However, since 
 the reference count is initialized to 1, an SSTableReader#close() was always 
 guaranteed either to throw an AssertionError or to be a second call to 
 SSTableReader#tidy() on the same SSTableReader.
 The attached patch chooses an in-between behavior. It requires the reference 
 count to match the initialization value of 1 for SSTableReader#close(), and 
 keeps the same behavior as 5ebadc1 otherwise.
 This allows repair to finish successfully, but I'm not 100% certain what the 
 desired behavior is for SSTableReader#close(). Should it close without regard 
 to reference count, as it did pre-5ebadc1?
 Edit: accidentally uploaded a flawed version of 
 {{sstablereader-assertion-bisect-helper}} (doesn't work out-of-the-box with 
 {{git bisect}}).





[2/3] git commit: Fix incorrect logging output

2014-04-10 Thread yukim
Fix incorrect logging output

Change in db9bc6929657fac40cf25af94bf919f1b213655a broke logging so that the
number of ranges out of sync was no longer output.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66a6990a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66a6990a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66a6990a

Branch: refs/heads/trunk
Commit: 66a6990aa076bbcdebee952f47c95ccdad735dbc
Parents: 8a5b90e
Author: Yuki Morishita yu...@apache.org
Authored: Thu Apr 10 15:51:33 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Apr 10 15:51:33 2014 -0500

--
 src/java/org/apache/cassandra/repair/Differencer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/66a6990a/src/java/org/apache/cassandra/repair/Differencer.java
--
diff --git a/src/java/org/apache/cassandra/repair/Differencer.java 
b/src/java/org/apache/cassandra/repair/Differencer.java
index 470c1ae..214d2c9 100644
--- a/src/java/org/apache/cassandra/repair/Differencer.java
+++ b/src/java/org/apache/cassandra/repair/Differencer.java
@@ -71,7 +71,7 @@ public class Differencer implements Runnable
         }
 
         // non-0 difference: perform streaming repair
-        logger.info(format, "have {} range(s) out of sync", differences.size());
+        logger.info(String.format(format, "have " + differences.size() + " range(s) out of sync"));
         performStreamingRepair();
     }
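To spell out the bug being fixed (an illustration, not the Differencer source; 
the format template here is invented): slf4j only expands {} placeholders found 
in its first argument, so passing a %s-style template as that argument leaves 
the later arguments unused:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingPitfall
{
    private static final Logger logger = LoggerFactory.getLogger(LoggingPitfall.class);

    public static void main(String[] args)
    {
        // hypothetical %s-style template, standing in for Differencer's `format`
        String format = "[repair] endpoints %s";
        int outOfSync = 3;

        // broken: slf4j looks for {} in `format`, finds none, and silently
        // drops the remaining arguments; the count is never printed
        logger.info(format, "have {} range(s) out of sync", outOfSync);

        // fixed: expand the %s template first, then log the finished string
        logger.info(String.format(format, "have " + outOfSync + " range(s) out of sync"));
    }
}
{code}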
 



[1/3] git commit: Fix incorrect logging output

2014-04-10 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 8a5b90ede -> 66a6990aa
  refs/heads/trunk 333986428 -> 471f5cc34


Fix incorrect logging output

Change in db9bc6929657fac40cf25af94bf919f1b213655a broke logging so that the
number of ranges out of sync was no longer output.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66a6990a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66a6990a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66a6990a

Branch: refs/heads/cassandra-2.1
Commit: 66a6990aa076bbcdebee952f47c95ccdad735dbc
Parents: 8a5b90e
Author: Yuki Morishita yu...@apache.org
Authored: Thu Apr 10 15:51:33 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Apr 10 15:51:33 2014 -0500

--
 src/java/org/apache/cassandra/repair/Differencer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/66a6990a/src/java/org/apache/cassandra/repair/Differencer.java
--
diff --git a/src/java/org/apache/cassandra/repair/Differencer.java 
b/src/java/org/apache/cassandra/repair/Differencer.java
index 470c1ae..214d2c9 100644
--- a/src/java/org/apache/cassandra/repair/Differencer.java
+++ b/src/java/org/apache/cassandra/repair/Differencer.java
@@ -71,7 +71,7 @@ public class Differencer implements Runnable
         }
 
         // non-0 difference: perform streaming repair
-        logger.info(format, "have {} range(s) out of sync", differences.size());
+        logger.info(String.format(format, "have " + differences.size() + " range(s) out of sync"));
         performStreamingRepair();
     }
 



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-10 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/471f5cc3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/471f5cc3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/471f5cc3

Branch: refs/heads/trunk
Commit: 471f5cc34c99f1f6dc42848446c2739390d7cc7a
Parents: 3339864 66a6990
Author: Yuki Morishita yu...@apache.org
Authored: Thu Apr 10 15:53:39 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Apr 10 15:53:39 2014 -0500

--
 src/java/org/apache/cassandra/repair/Differencer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[jira] [Updated] (CASSANDRA-7024) Create snapshot selectively during sequential repair

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7024:
--

Reviewer: Joshua McKenzie

[~JoshuaMcKenzie] to review

 Create snapshot selectively during sequential repair 
 -

 Key: CASSANDRA-7024
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7024
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 
 0001-Only-snapshot-SSTables-related-to-validating-range.patch


 When doing snapshot repair, we currently snapshot all SSTables, open them, and 
 use only part of them to build the MerkleTree.
 Instead, we can snapshot and use only the SSTables needed to build the 
 MerkleTree for the range of interest.





[jira] [Commented] (CASSANDRA-7024) Create snapshot selectively during sequential repair

2014-04-10 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965932#comment-13965932
 ] 

Joshua McKenzie commented on CASSANDRA-7024:


Do we need to keep SnapshotCommand and SnapshotVerbHandler in the codebase 
after this change? Other than that, it looks reasonable to me.

 Create snapshot selectively during sequential repair 
 -

 Key: CASSANDRA-7024
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7024
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 
 0001-Only-snapshot-SSTables-related-to-validating-range.patch


 When doing snapshot repair, we currently snapshot all SSTables, open them, and 
 use only part of them to build the MerkleTree.
 Instead, we can snapshot and use only the SSTables needed to build the 
 MerkleTree for the range of interest.





[jira] [Updated] (CASSANDRA-6572) Workload recording / playback

2014-04-10 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-6572:
--

Attachment: 6572-trunk.diff

The process goes like this:
# Enable recording via the JMX operation SS#enableQueryRecording
# Insert n queries
# Replay the queries to a new cluster using tools/bin/workloadreplayer

Running through an example:

# JMX call to SS#enableQueryRecording, where we supply parameters of 5 for a 
5MB log limit, 4 to record every 1/4 queries, and finally 
{{/var/lib/cassandra/querylog}} as the directory for the logs
# Insert 100k rows
This should result in 2 query logs, one of which is 5MB and has been renamed to 
store a timestamp in its name; the other will be named QueryLog.log. Between 
the two logs there should be 25k queries.
# Replaying the logs is done via the replay tool (workloadreplayer), where we 
first supply the directory of the query logs and then various flags ([see git 
branch 
here|https://github.com/lyubent/cassandra/commit/526672982870bec49e5b234e8d11ef5e1f17cd28#diff-91cd490dd94b74e10ade733f61dc6ab7R207])
 e.g.:
{{./tools/bin/workloadreplayer /Users/lyubentodorov/Desktop/Log/ -t 100}}
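For illustration, here is how the JMX call in step 1 might be made from a 
standalone client (a sketch: the parameter values and order come from the 
description above, while the MBean object name and exact signature are 
assumptions):
{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class EnableQueryRecording
{
    public static void main(String[] args) throws Exception
    {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");
            // 5 -> 5MB log size limit, 4 -> record 1 in 4 queries,
            // last arg -> directory for the query logs
            mbs.invoke(ss,
                       "enableQueryRecording",
                       new Object[]{ 5, 4, "/var/lib/cassandra/querylog" },
                       new String[]{ "int", "int", "java.lang.String" });
        }
    }
}
{code}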

Concerns:
Two synchronized blocks (one in 
[QueryProcessor#maybeLogQuery|https://github.com/lyubent/cassandra/commit/526672982870bec49e5b234e8d11ef5e1f17cd28#diff-9c19942eca6c858baad84e942b3c7e21R402]
 and the other in 
[QueryRecorder#append|https://github.com/lyubent/cassandra/commit/526672982870bec49e5b234e8d11ef5e1f17cd28#diff-7d2a64c8ee2a2b78b3f1921e673b423eR73])
 have been added on the read path, but since these blocks will only be hit when 
query logging is enabled, they shouldn't hinder performance where it matters 
most. I've used the Thrift client, so I'm not sure whether query routing will 
be optimal.

Feature branch [here|https://github.com/lyubent/cassandra/tree/6572]; I'm also 
attaching a patch for trunk. I'll patch this for cassandra-2.0 tomorrow :)

 Workload recording / playback
 -

 Key: CASSANDRA-6572
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6572
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Tools
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
 Fix For: 2.0.8

 Attachments: 6572-trunk.diff


 Write sample mode gets us part way to testing new versions against a real 
 world workload, but we need an easy way to test the query side as well.





[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-04-10 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965957#comment-13965957
 ] 

Lyuben Todorov commented on CASSANDRA-5483:
---

[~usrbincc] Can you post the squashed rebase? Would really appreciate it!

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: 5483-full-trunk.txt, 
 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
  5483-v07-08-Fix-brace-style.patch, 
 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
 5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch, 
 5483-v08-12-Trace-streaming-in-Differencer-StreamingRepairTask.patch, 
 5483-v08-13-sendNotification-of-local-traces-back-to-nodetool.patch, 
 5483-v08-14-Poll-system_traces.events.patch, 
 5483-v08-15-Limit-trace-notifications.-Add-exponential-backoff.patch, 
 5483-v09-16-Fix-hang-caused-by-incorrect-exit-code.patch, ccm-repair-test, 
 cqlsh-left-justify-text-columns.patch, prerepair-vs-postbuggedrepair.diff, 
 test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results the way query tracing 
 stores traces to the system keyspace. With it, you don't have to look up each 
 log file to see the status and performance of the repair you invoked. Instead, 
 you can query the repair log with a session ID to see the state and stats of 
 all nodes involved in that repair session.
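 For illustration, assuming repair traces land in {{system_traces.events}} just 
 like query traces, the per-session lookup could be done with the DataStax Java 
 driver roughly like this (the driver usage and session id are placeholders, 
 not part of the patch set):
 {code}
import java.util.UUID;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class RepairTraceQuery
{
    public static void main(String[] args)
    {
        UUID sessionId = UUID.fromString("550e8400-e29b-41d4-a716-446655440000"); // placeholder
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // system_traces.events is keyed by session_id, like query traces
            for (Row row : session.execute(
                    "SELECT source, source_elapsed, activity FROM system_traces.events WHERE session_id = ?",
                    sessionId))
            {
                System.out.printf("%s (%d us): %s%n",
                                  row.getInet("source"),
                                  row.getInt("source_elapsed"),
                                  row.getString("activity"));
            }
        }
    }
}
 {code}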





[jira] [Commented] (CASSANDRA-6974) Replaying archived commitlogs isn't working

2014-04-10 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965958#comment-13965958
 ] 

Ryan McGuire commented on CASSANDRA-6974:
-

[~benedict] I wanted to test your suggestion of using drain() to ensure that 
100% of the commitlogs get archived, which does not appear to work. I've 
modified my script to test it both ways:

 * test_archive_commitlog: Does not copy the active commitlogs, and comes up 
short on the rows it should have on restore.
 * test_archive_commitlog_with_active_commitlog: Copies the active commitlogs 
before restore, and has the right number of rows on restore.

(Tested on cassandra-2.0)

 Replaying archived commitlogs isn't working
 ---

 Key: CASSANDRA-6974
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6974
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: 2.0.system.log, 2.1.system.log


 I have a test for restoring archived commitlogs, which is not working in 2.1 
 HEAD.  My commitlogs consist of 30,000 inserts, but system.log indicates 
 there were only 2 mutations replayed:
 {code}
 INFO  [main] 2014-04-02 11:49:54,173 CommitLog.java:115 - Log replay 
 complete, 2 replayed mutations
 {code}
 There are several warnings in the logs about bad headers and invalid CRCs: 
 {code}
 WARN  [main] 2014-04-02 11:49:54,156 CommitLogReplayer.java:138 - Encountered 
 bad header at position 0 of commit log /tmp/dtest
 -mZIlPE/test/node1/commitlogs/CommitLog-4-1396453793570.log, with invalid 
 CRC. The end of segment marker should be zero.
 {code}
 Compare that to the same test run on 2.0, where it replayed many more 
 mutations:
 {code}
  INFO [main] 2014-04-02 11:49:04,673 CommitLog.java (line 132) Log replay 
 complete, 35960 replayed mutations
 {code}
 I'll attach the system logs for reference.
 [Here is the dtest to reproduce 
 this|https://github.com/riptano/cassandra-dtest/blob/master/snapshot_test.py#L75]
  - (This currently relies on the fix for snapshots available in 
 CASSANDRA-6965.)





[jira] [Commented] (CASSANDRA-5547) Multi-threaded scrub

2014-04-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965973#comment-13965973
 ] 

Jonathan Ellis commented on CASSANDRA-5547:
---

Sorry, I want to review this but I'm overcommitted. Can you take the review, 
Marcus?

 Multi-threaded scrub
 

 Key: CASSANDRA-5547
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5547
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benjamin Coverston
Assignee: Russell Alexander Spitzer
  Labels: lhf
 Fix For: 2.0.8

 Attachments: cassandra-2.0-5547.txt


 Scrub (especially offline) could benefit from being multi-threaded, 
 especially in the case where the SSTables are compressed.
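 As a rough sketch of what multi-threading the SSTable loop could look like 
 (hypothetical structure, not the attached patch):
 {code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ParallelScrub
{
    // each task scrubs one SSTable; tasks are independent, so a fixed pool
    // lets several decompression-heavy scrubs proceed at once
    static void scrubAll(List<Runnable> scrubTasks, int threads) throws InterruptedException
    {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (Runnable task : scrubTasks)
            pool.submit(task);
        pool.shutdown();
        pool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    }
}
 {code}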





[jira] [Updated] (CASSANDRA-5547) Multi-threaded scrub

2014-04-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5547:
--

Reviewer: Marcus Eriksson

 Multi-threaded scrub
 

 Key: CASSANDRA-5547
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5547
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benjamin Coverston
Assignee: Russell Alexander Spitzer
  Labels: lhf
 Fix For: 2.0.8

 Attachments: cassandra-2.0-5547.txt


 Scrub (especially offline) could benefit from being multi-threaded, 
 especially in the case where the SSTables are compressed.





svn commit: r1586526 - in /cassandra/site: publish/index.html src/content/index.html

2014-04-10 Thread jbellis
Author: jbellis
Date: Fri Apr 11 01:31:51 2014
New Revision: 1586526

URL: http://svn.apache.org/r1586526
Log:
update user highlights

Modified:
cassandra/site/publish/index.html
cassandra/site/src/content/index.html

Modified: cassandra/site/publish/index.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1586526&r1=1586525&r2=1586526&view=diff
==============================================================================
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Fri Apr 11 01:31:51 2014
@@ -93,13 +93,18 @@
 <b>Proven</b>
 <p>
 Cassandra is in use at
-<a href="http://www.slideshare.net/adrianco/migrating-netflix-from-oracle-to-global-cassandra">Netflix</a>,
-<a href="http://www.slideshare.net/jaykumarpatel/cassandra-at-ebay-13920376">eBay</a>,
-<a href="http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011">Twitter</a>,
-<a href="http://www.slideshare.net/eonnen/from-100s-to-100s-of-millions/">Urban Airship</a>,
 <a href="http://www.slideshare.net/daveconnors/cassandra-puppet-scaling-data-at-15-per-month">Constant Contact</a>,
-<a href="http://blog.reddit.com/2010/03/she-who-entangles-men.html">Reddit</a>,
-Cisco, OpenX, Digg, CloudKick, Ooyala, and <a href="http://www.datastax.com/cassandrausers">more companies</a> that have large, active data sets.  The largest known Cassandra cluster has over 300 TB of data in over 400 machines.
+<a href="http://planetcassandra.org/blog/post/cassandra-at-cern-large-hadron-collider/">CERN</a>,
+<a href="http://www.slideshare.net/planetcassandra/nyc-tech-day-using-cassandra-for-dvr-scheduling-at-comcast">Comcast</a>,
+<a href="http://www.slideshare.net/jaykumarpatel/cassandra-at-ebay-13920376">eBay</a>,
+<a href="http://planetcassandra.org/blog/post/analytics-at-github-with-apache-cassandra/">GitHub</a>,
+<a href="http://planetcassandra.org/blog/post/godaddy-worlds-largest-domain-name-registrar-and-web-host-provider-utilizes-cassandra-for-replication-and-scalability/">GoDaddy</a>,
+<a href="http://planetcassandra.org/blog/post/cassandra-used-to-build-scalable-and-highly-available-systems-at-hulu-streaming-content-to-over-5-million-subscribers/">Hulu</a>,
+<a href="http://planetcassandra.org/blog/post/instagram-making-the-switch-to-cassandra-from-redis-75-instasavings/">Instagram</a>,
+<a href="http://www.slideshare.net/planetcassandra/3-mohit-anchlia">Intuit</a>,
+<a href="http://www.slideshare.net/adrianco/migrating-netflix-from-oracle-to-global-cassandra">Netflix</a>,
+<a href="http://planetcassandra.org/blog/post/reddit-upvotes-apache-cassandras-horizontal-scaling-managing-1700-votes-daily/">Reddit</a>,
+<a href="http://planetcassandra.org/blog/post/make-it-rain-apache-cassandra-at-the-weather-channel-for-severe-weather-alerts/">The Weather Channel</a>, and <a href="http://planetcassandra.org/companies/">over 1500 more companies</a> that have large, active data sets.  The largest known Cassandra cluster has over 300 TB of data in over 400 machines.
 </p>
   </li>
   <li>

Modified: cassandra/site/src/content/index.html
URL: http://svn.apache.org/viewvc/cassandra/site/src/content/index.html?rev=1586526&r1=1586525&r2=1586526&view=diff
==============================================================================
--- cassandra/site/src/content/index.html (original)
+++ cassandra/site/src/content/index.html Fri Apr 11 01:31:51 2014
@@ -39,13 +39,18 @@
 <b>Proven</b>
 <p>
 Cassandra is in use at
-<a href="http://www.slideshare.net/adrianco/migrating-netflix-from-oracle-to-global-cassandra">Netflix</a>,
-<a href="http://www.slideshare.net/jaykumarpatel/cassandra-at-ebay-13920376">eBay</a>,
-<a href="http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011">Twitter</a>,
-<a href="http://www.slideshare.net/eonnen/from-100s-to-100s-of-millions/">Urban Airship</a>,
 <a href="http://www.slideshare.net/daveconnors/cassandra-puppet-scaling-data-at-15-per-month">Constant Contact</a>,
-<a href="http://blog.reddit.com/2010/03/she-who-entangles-men.html">Reddit</a>,
-Cisco, OpenX, Digg, CloudKick, Ooyala, and <a href="http://www.datastax.com/cassandrausers">more companies</a> that have large, active data sets.  The largest known Cassandra cluster has over 300 TB of data in over 400 machines.
+<a href="http://planetcassandra.org/blog/post/cassandra-at-cern-large-hadron-collider/">CERN</a>,
+<a href="http://www.slideshare.net/planetcassandra/nyc-tech-day-using-cassandra-for-dvr-scheduling-at-comcast">Comcast</a>,
+<a href="http://www.slideshare.net/jaykumarpatel/cassandra-at-ebay-13920376">eBay</a>,
+<a href="http://planetcassandra.org/blog/post/analytics-at-github-with-apache-cassandra/">GitHub</a>,
+<a

[jira] [Updated] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-10 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6831:
---

Attachment: (was: cassandra-1.2-6831.patch)

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17


 If a COMPACT STORAGE table is altered using the CLI, all information about the 
 column names reverts to the initial key, column1, column2 naming.  
 Additionally, the changes in the column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error stating 
 that the column already exists will be raised. The only way to get them back 
 is to ALTER TABLE and change the comment or something similar, which will 
 bring back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh> USE test;
 cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: "cluster1" on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test> describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}





[jira] [Updated] (CASSANDRA-6996) Setting severity via JMX broken

2014-04-10 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-6996:
-

Attachment: 0001-CASSANDRA-6996.patch

The problem is that severity is now based on IOWait (if Unix-based), but the 
DES sets the compaction severity, which is ignored. The attached patch adds a 
manual_severity variable to override both... Thanks!
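
For reference, the kind of JMX write the ticket is about, as a hedged 
standalone sketch (the MBean object name and attribute type are assumptions):
{code}
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetSeverity
{
    public static void main(String[] args) throws Exception
    {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName des = new ObjectName("org.apache.cassandra.db:type=DynamicEndpointSnitch");
            // the attribute the ticket reports as a no-op to set
            mbs.setAttribute(des, new Attribute("Severity", 1.5));
        }
    }
}
{code}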

 Setting severity via JMX broken
 ---

 Key: CASSANDRA-6996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6996
 Project: Cassandra
  Issue Type: Bug
Reporter: Rick Branson
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-6996.patch


 Looks like setting the Severity attribute in the DynamicEndpointSnitch via 
 JMX is a no-op.


