[jira] [Updated] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6378:
---

Attachment: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==
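 The frame size in the error is itself diagnostic: 352518400 is 0x15030100, the 
 first four bytes of a TLS alert record (content type 21, version 3.1), which 
 the plaintext framed transport misreads as a frame length when the server side 
 expects encryption. A minimal sketch of an SSL-capable Thrift client using 
 stock libthrift (truststore path and password are placeholders; the attached 
 patch presumably wires equivalent options through BulkLoader):
{code}
import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSSLTransportFactory;
import org.apache.thrift.transport.TTransport;

public class SslThriftClientSketch
{
    public static Cassandra.Client connect(String host, int port) throws Exception
    {
        // TLS is negotiated on the raw socket first; Thrift framing and the
        // binary protocol then run inside the encrypted channel.
        TSSLTransportFactory.TSSLTransportParameters params =
                new TSSLTransportFactory.TSSLTransportParameters();
        params.setTrustStore("/path/to/truststore.jks", "truststore-password"); // placeholders
        TTransport ssl = TSSLTransportFactory.getClientSocket(host, port, 10000, params);
        return new Cassandra.Client(new TBinaryProtocol(new TFramedTransport(ssl)));
    }
}
{code}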





[jira] [Commented] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851504#comment-13851504
 ] 

Sam Tunnicliffe commented on CASSANDRA-6378:


Sorry, missed that when refactoring. Attached updated patch with the 
extraneous parameter removed.

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==





[jira] [Comment Edited] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851504#comment-13851504
 ] 

Sam Tunnicliffe edited comment on CASSANDRA-6378 at 12/18/13 9:02 AM:
--

Sorry, missed that when refactoring. Attached updated patch with the extraneous 
parameter removed.


was (Author: beobal):
Sorry, that's missed that when refactoring. Attached updated patch with the 
extraneous parameter removed.

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==





[jira] [Updated] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6378:
---

Attachment: (was: 
0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch)

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==





[jira] [Comment Edited] (CASSANDRA-6421) Add bash completion to nodetool

2013-12-18 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851480#comment-13851480
 ] 

Cyril Scetbon edited comment on CASSANDRA-6421 at 12/18/13 9:36 AM:


Sorry I didn't see the comment :(

[~lyubent] that's what you should get. I'm on OS X, using bash-completion 1.3 
from Homebrew:
{code}
$ brew list bash-completion
/usr/local/Cellar/bash-completion/1.3/etc/bash_completion.d/ (180 files)
/usr/local/Cellar/bash-completion/1.3/etc/profile.d/bash_completion.sh
/usr/local/Cellar/bash-completion/1.3/etc/bash_completion
{code}
{{have}} is a bash-completion function; tell me your ./nodetool is not actually 
executing the bash completion script :)
To use it, you have to:
- place the file in your bash_completion.d directory (in my case):
{code}
$ ls /usr/local/etc/bash_completion.d/node*
/usr/local/etc/bash_completion.d/nodetool
{code}
- add the following (use an absolute path if you don't use Homebrew) in your 
~/.bash_profile:
{code}
if [ -f `brew --prefix`/etc/bash_completion ]; then
. `brew --prefix`/etc/bash_completion
fi
{code}
- start a new bash session and try:
{code}
nodetool cfh[TAB]
nodetool cfhistograms [TAB][TAB]
pns_fr             system             system_auth        system_traces      test
nodetool cfhistograms system 
HintsColumnFamily      Migrations             batchlog               peer_events            schema_columnfamilies
IndexInfo              NodeIdInfo             hints                  peers                  schema_columns
LocationInfo           Schema                 local                  range_xfers            schema_keyspaces
{code}

As you can see, after cfh is completed to cfhistograms, adding two more 
\[TAB\] gives you the keyspace names, and after adding the keyspace name 
(system) two more \[TAB\] give you the column family names :)
The first word is the nodetool script from cassandra, not the bash completion 
script 


was (Author: cscetbon):
Sorry I didn't see the comment :(

[~lyubent] that's what you should get. I'm on OSX and using bash completion 1.3 
from Homebrew :
{code}
$ brew list bash-completion
/usr/local/Cellar/bash-completion/1.3/etc/bash_completion.d/ (180 files)
/usr/local/Cellar/bash-completion/1.3/etc/profile.d/bash_completion.sh
/usr/local/Cellar/bash-completion/1.3/etc/bash_completion
{code}
have is a bash-completion function. Tell me the ./nodetool is not executing the 
bash completion script :)
To use it, you have to 
- place the file in your bash_completion.d directory (in my case ) :
{code}
$ ls /usr/local/etc/bash_completion.d/node*
/usr/local/etc/bash_completion.d/nodetool
{code}
- add the following (add absolute path if you don't use hombrew) in your 
~/.bash_profile :
{code}
if [ -f `brew --prefix`/etc/bash_completion ]; then
. `brew --prefix`/etc/bash_completion
fi
{code}
- start a new bash session and try :
{code}
nodetool cfh[TAB]
nodetool cfhistograms [TAB][TAB]
pns_fr             system             system_auth        system_traces      test
nodetool cfhistograms system 
HintsColumnFamily      Migrations             batchlog               peer_events            schema_columnfamilies
IndexInfo              NodeIdInfo             hints                  peers                  schema_columns
LocationInfo           Schema                 local                  range_xfers            schema_keyspaces
{code}
As you see after cfh has been completed to cfhistograms if you add 2 more 
\[TAB\] you get the name of keyspaces, and if you add the name of the keyspace 
system and 2 more \[TAB\] you get names of column families :)
The first word is the nodetool script from cassandra, not the bash completion 
script 

 Add bash completion to nodetool
 ---

 Key: CASSANDRA-6421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Cyril Scetbon
Assignee: Cyril Scetbon
Priority: Trivial
 Fix For: 2.0.4


 You can find the patch from my commit here:
 https://github.com/cscetbon/cassandra/commit/07a10b99778f14362ac05c70269c108870555bf3.patch
 it uses cqlsh to get keyspaces and column families and could use an environment 
 variable (not implemented) to tell it which cqlsh to run if authentication is 
 needed. But I think that's really a good start :)





[jira] [Updated] (CASSANDRA-6471) Executing a prepared CREATE KEYSPACE multiple times doesn't work

2013-12-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6471:


Fix Version/s: (was: 1.2.13)
   1.2.14

 Executing a prepared CREATE KEYSPACE multiple times doesn't work
 

 Key: CASSANDRA-6471
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6471
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 1.2.14

 Attachments: 6471.txt


 See user reports on the java driver JIRA: 
 https://datastax-oss.atlassian.net/browse/JAVA-223. Preparing CREATE KEYSPACE 
 queries is not particularly useful but there is no reason for it to be broken.
 The reason is that the KSPropDefs/CFPropDefs.validate() methods are not 
 idempotent. Attaching a simple patch to fix.
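 A minimal model of the non-idempotence (simplified, hypothetical names; the 
 real classes are org.apache.cassandra.cql3.KSPropDefs/CFPropDefs): validate() 
 consumes the strategy class out of the raw options, so a second call on 
 re-execution of the prepared statement would null the field again, and the 
 guard added by the patch makes the second call a no-op:
{code}
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for KSPropDefs; the mechanics are simplified for illustration.
class PropDefsModel
{
    private final Map<String, String> properties = new HashMap<String, String>();
    private String strategyClass;

    PropDefsModel()
    {
        properties.put("class", "SimpleStrategy");
    }

    void validate()
    {
        // The fix: skip validation if we've already prepared
        // (redoing it would set strategyClass back to null).
        if (strategyClass != null)
            return;
        // The non-idempotent step: the option is consumed on first validation.
        strategyClass = properties.remove("class");
    }

    public static void main(String[] args)
    {
        PropDefsModel defs = new PropDefsModel();
        defs.validate();
        defs.validate(); // without the guard, strategyClass would be null again
        System.out.println(defs.strategyClass); // SimpleStrategy
    }
}
{code}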





git commit: Allow executing CREATE statement multiple times

2013-12-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 1b4c9b45c -> 079f1e811


Allow executing CREATE statement multiple times

patch by slebresne; reviewed by jbellis for CASSANDRA-6471


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/079f1e81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/079f1e81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/079f1e81

Branch: refs/heads/cassandra-1.2
Commit: 079f1e81166579c5da0bdde76be7c9201d2e1711
Parents: 1b4c9b4
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:13:54 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:13:54 2013 +0100

--
 CHANGES.txt| 3 +++
 src/java/org/apache/cassandra/cql3/CFPropDefs.java | 5 +
 src/java/org/apache/cassandra/cql3/KSPropDefs.java | 5 +
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 22a121e..5086440 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+1.2.14
+ * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+
 1.2.13
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
  * Randomize batchlog candidates selection (CASSANDRA-6481)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/src/java/org/apache/cassandra/cql3/CFPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/CFPropDefs.java 
b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
index 8ad29fd..d60b60c 100644
--- a/src/java/org/apache/cassandra/cql3/CFPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
@@ -76,6 +76,11 @@ public class CFPropDefs extends PropertyDefinitions
 
 public void validate() throws ConfigurationException, SyntaxException
 {
+// Skip validation if the compaction strategy class is already set, as it means we've already
+// prepared (and redoing it would set strategyClass back to null, which we don't want)
+if (compactionStrategyClass != null)
+return;
+
 validate(keywords, obsoleteKeywords);
 
 Map<String, String> compactionOptions = getCompactionOptions();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/src/java/org/apache/cassandra/cql3/KSPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/KSPropDefs.java 
b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
index 70df622..e2b0de8 100644
--- a/src/java/org/apache/cassandra/cql3/KSPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
@@ -44,6 +44,11 @@ public class KSPropDefs extends PropertyDefinitions
 
 public void validate() throws ConfigurationException, SyntaxException
 {
+// Skip validation if the strategy class is already set, as it means we've already
+// prepared (and redoing it would set strategyClass back to null, which we don't want)
+if (strategyClass != null)
+return;
+
 validate(keywords, obsoleteKeywords);
 
 Map<String, String> replicationOptions = getReplicationOptions();



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7255b5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7255b5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7255b5f

Branch: refs/heads/cassandra-2.0
Commit: f7255b5ffa2edb30e909220ccc3f7308b9f65475
Parents: 53af91e 079f1e8
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:15:30 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:15:30 2013 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/cql3/CFPropDefs.java | 5 +
 src/java/org/apache/cassandra/cql3/KSPropDefs.java | 5 +
 3 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7255b5f/CHANGES.txt
--
diff --cc CHANGES.txt
index b8757d7,5086440..80ed481
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -29,45 -22,9 +29,46 @@@ Merged from 1.2
 (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
++ * Allow executing CREATE statements multiple times (CASSANDRA-6471)
  
  
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
 + * Set minTimestamp correctly to be able to drop expired sstables 
(CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely 
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7255b5f/src/java/org/apache/cassandra/cql3/CFPropDefs.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7255b5f/src/java/org/apache/cassandra/cql3/KSPropDefs.java
--
diff --cc src/java/org/apache/cassandra/cql3/KSPropDefs.java
index c10a79b,e2b0de8..12fbc51
--- a/src/java/org/apache/cassandra/cql3/KSPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
@@@ -40,8 -42,13 +40,13 @@@ public class KSPropDefs extends PropertyDefinitions
  
  private String strategyClass;
  
 -public void validate() throws ConfigurationException, SyntaxException
 +public void validate() throws SyntaxException
  {
+ // Skip validation if the strategy class is already set, as it means we've already
+ // prepared (and redoing it would set strategyClass back to null, which we don't want)
+ if (strategyClass != null)
+ 

[3/3] git commit: Fix infinite loop when paging queries with IN

2013-12-18 Thread slebresne
Fix infinite loop when paging queries with IN

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6464


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c32ffbb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c32ffbb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c32ffbb

Branch: refs/heads/cassandra-2.0
Commit: 7c32ffbbfae9959edc89ec5fcf9fced1b75c495b
Parents: f7255b5
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:18:30 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:18:30 2013 +0100

--
 CHANGES.txt |  1 +
 .../service/pager/AbstractQueryPager.java   |  6 +-
 .../service/pager/MultiPartitionPager.java  | 89 +---
 .../service/pager/NamesQueryPager.java  |  5 +-
 .../cassandra/service/pager/QueryPagers.java|  5 +-
 .../service/pager/SinglePartitionPager.java |  3 +
 .../service/pager/SliceQueryPager.java  |  5 ++
 7 files changed, 76 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c32ffbb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 80ed481..5a124ab 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Expose a total memtable size metric for a CF (CASSANDRA-6391)
  * cqlsh: handle symlinks properly (CASSANDRA-6425)
  * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
+ * Fix potential infinite loop when paging query with IN (CASSANDRA-6464)
 Merged from 1.2:
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
  * Randomize batchlog candidates selection (CASSANDRA-6481)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c32ffbb/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
--
diff --git 
a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java 
b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
index 9372665..6f6772c 100644
--- a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
+++ b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
@@ -40,9 +40,9 @@ abstract class AbstractQueryPager implements QueryPager
 protected final IDiskAtomFilter columnFilter;
 private final long timestamp;
 
-private volatile int remaining;
-private volatile boolean exhausted;
-private volatile boolean lastWasRecorded;
+private int remaining;
+private boolean exhausted;
+private boolean lastWasRecorded;
 
 protected AbstractQueryPager(ConsistencyLevel consistencyLevel,
  int toFetch,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c32ffbb/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
--
diff --git 
a/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java 
b/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
index 2615e9b..35d6752 100644
--- a/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
+++ b/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
@@ -43,44 +43,72 @@ class MultiPartitionPager implements QueryPager
 private final SinglePartitionPager[] pagers;
 private final long timestamp;
 
-private volatile int current;
-
-MultiPartitionPager(List<ReadCommand> commands, ConsistencyLevel consistencyLevel, boolean localQuery)
-{
-this(commands, consistencyLevel, localQuery, null);
-}
+private int remaining;
+private int current;
 
 MultiPartitionPager(List<ReadCommand> commands, ConsistencyLevel consistencyLevel, boolean localQuery, PagingState state)
 {
-this.pagers = new SinglePartitionPager[commands.size()];
+int i = 0;
+// If it's not the beginning (state != null), we need to find where we 
were and skip previous commands
+// since they are done.
+if (state != null)
+for (; i < commands.size(); i++)
+if (commands.get(i).key.equals(state.partitionKey))
+break;
+
+if (i >= commands.size())
+{
+pagers = null;
+timestamp = -1;
+return;
+}
+
+pagers = new SinglePartitionPager[commands.size() - i];
+// 'i' is on the first non exhausted pager for the previous page (or 
the first one)
+pagers[0] = makePager(commands.get(i), consistencyLevel, localQuery, 
state);
+timestamp = commands.get(i).timestamp;
 
-long tstamp = -1;
-for (int i = 0; i < 
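The hunk above is truncated in the archive. In outline, the new constructor 
resumes from a PagingState by skipping every command whose partition was fully 
returned on earlier pages; a simplified sketch (surrounding fields and helper 
signatures assumed from the class above):
{code}
int i = 0;
// When resuming (state != null), skip the commands that previous pages finished.
if (state != null)
    while (i < commands.size() && !commands.get(i).key.equals(state.partitionKey))
        i++;

if (i >= commands.size())
{
    // Every partition was already served: leave the pager exhausted.
    pagers = null;
    timestamp = -1;
    return;
}

// Only the remaining commands get pagers, so paging can no longer loop
// back over partitions that were already done (the CASSANDRA-6464 bug).
pagers = new SinglePartitionPager[commands.size() - i];
pagers[0] = makePager(commands.get(i), consistencyLevel, localQuery, state);
timestamp = commands.get(i).timestamp;
{code}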

[1/3] git commit: Allow executing CREATE statement multiple times

2013-12-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 53af91e65 -> 7c32ffbbf


Allow executing CREATE statement multiple times

patch by slebresne; reviewed by jbellis for CASSANDRA-6471


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/079f1e81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/079f1e81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/079f1e81

Branch: refs/heads/cassandra-2.0
Commit: 079f1e81166579c5da0bdde76be7c9201d2e1711
Parents: 1b4c9b4
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:13:54 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:13:54 2013 +0100

--
 CHANGES.txt| 3 +++
 src/java/org/apache/cassandra/cql3/CFPropDefs.java | 5 +
 src/java/org/apache/cassandra/cql3/KSPropDefs.java | 5 +
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 22a121e..5086440 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+1.2.14
+ * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+
 1.2.13
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
  * Randomize batchlog candidates selection (CASSANDRA-6481)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/src/java/org/apache/cassandra/cql3/CFPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/CFPropDefs.java 
b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
index 8ad29fd..d60b60c 100644
--- a/src/java/org/apache/cassandra/cql3/CFPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
@@ -76,6 +76,11 @@ public class CFPropDefs extends PropertyDefinitions
 
 public void validate() throws ConfigurationException, SyntaxException
 {
+// Skip validation if the compaction strategy class is already set, as it means we've already
+// prepared (and redoing it would set strategyClass back to null, which we don't want)
+if (compactionStrategyClass != null)
+return;
+
 validate(keywords, obsoleteKeywords);
 
 Map<String, String> compactionOptions = getCompactionOptions();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/src/java/org/apache/cassandra/cql3/KSPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/KSPropDefs.java 
b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
index 70df622..e2b0de8 100644
--- a/src/java/org/apache/cassandra/cql3/KSPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
@@ -44,6 +44,11 @@ public class KSPropDefs extends PropertyDefinitions
 
 public void validate() throws ConfigurationException, SyntaxException
 {
+// Skip validation if the strategy class is already set, as it means we've already
+// prepared (and redoing it would set strategyClass back to null, which we don't want)
+if (strategyClass != null)
+return;
+
 validate(keywords, obsoleteKeywords);
 
 Map<String, String> replicationOptions = getReplicationOptions();



[2/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7255b5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7255b5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7255b5f

Branch: refs/heads/trunk
Commit: f7255b5ffa2edb30e909220ccc3f7308b9f65475
Parents: 53af91e 079f1e8
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:15:30 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:15:30 2013 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/cql3/CFPropDefs.java | 5 +
 src/java/org/apache/cassandra/cql3/KSPropDefs.java | 5 +
 3 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7255b5f/CHANGES.txt
--
diff --cc CHANGES.txt
index b8757d7,5086440..80ed481
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -29,45 -22,9 +29,46 @@@ Merged from 1.2
 (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
++ * Allow executing CREATE statements multiple times (CASSANDRA-6471)
  
  
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
 + * Set minTimestamp correctly to be able to drop expired sstables 
(CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely 
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7255b5f/src/java/org/apache/cassandra/cql3/CFPropDefs.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7255b5f/src/java/org/apache/cassandra/cql3/KSPropDefs.java
--
diff --cc src/java/org/apache/cassandra/cql3/KSPropDefs.java
index c10a79b,e2b0de8..12fbc51
--- a/src/java/org/apache/cassandra/cql3/KSPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
@@@ -40,8 -42,13 +40,13 @@@ public class KSPropDefs extends PropertyDefinitions
  
  private String strategyClass;
  
 -public void validate() throws ConfigurationException, SyntaxException
 +public void validate() throws SyntaxException
  {
+ // Skip validation if the strategy class is already set, as it means we've already
+ // prepared (and redoing it would set strategyClass back to null, which we don't want)
+ if (strategyClass != null)
+ return;
+ 

[5/5] git commit: Fix merge

2013-12-18 Thread slebresne
Fix merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a5ebd6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a5ebd6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a5ebd6a

Branch: refs/heads/trunk
Commit: 1a5ebd6a7d65af8393c9d5c67e7de8088bbb1d0f
Parents: 1273476
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:23:44 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:23:44 2013 +0100

--
 src/java/org/apache/cassandra/service/pager/SliceQueryPager.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a5ebd6a/src/java/org/apache/cassandra/service/pager/SliceQueryPager.java
--
diff --git a/src/java/org/apache/cassandra/service/pager/SliceQueryPager.java 
b/src/java/org/apache/cassandra/service/pager/SliceQueryPager.java
index 9d8d62c..fbc36e0 100644
--- a/src/java/org/apache/cassandra/service/pager/SliceQueryPager.java
+++ b/src/java/org/apache/cassandra/service/pager/SliceQueryPager.java
@@ -17,6 +17,7 @@
  */
 package org.apache.cassandra.service.pager;
 
+import java.nio.ByteBuffer;
 import java.util.Collections;
 import java.util.List;
 



[4/5] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-18 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1273476f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1273476f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1273476f

Branch: refs/heads/trunk
Commit: 1273476f95447a0661ddc7789067cc9a0e085f5b
Parents: 6635cde 7c32ffb
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:19:39 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:19:39 2013 +0100

--
 CHANGES.txt |  2 +
 .../org/apache/cassandra/cql3/CFPropDefs.java   |  5 ++
 .../org/apache/cassandra/cql3/KSPropDefs.java   |  5 ++
 .../service/pager/AbstractQueryPager.java   |  6 +-
 .../service/pager/MultiPartitionPager.java  | 89 +---
 .../service/pager/NamesQueryPager.java  |  5 +-
 .../cassandra/service/pager/QueryPagers.java|  5 +-
 .../service/pager/SinglePartitionPager.java |  3 +
 .../service/pager/SliceQueryPager.java  |  5 ++
 9 files changed, 87 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1273476f/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1273476f/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1273476f/src/java/org/apache/cassandra/service/pager/QueryPagers.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1273476f/src/java/org/apache/cassandra/service/pager/SliceQueryPager.java
--



[1/5] git commit: Allow executing CREATE statement multiple times

2013-12-18 Thread slebresne
Updated Branches:
  refs/heads/trunk 6635cde3a -> 1a5ebd6a7


Allow executing CREATE statement multiple times

patch by slebresne; reviewed by jbellis for CASSANDRA-6471


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/079f1e81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/079f1e81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/079f1e81

Branch: refs/heads/trunk
Commit: 079f1e81166579c5da0bdde76be7c9201d2e1711
Parents: 1b4c9b4
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:13:54 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:13:54 2013 +0100

--
 CHANGES.txt| 3 +++
 src/java/org/apache/cassandra/cql3/CFPropDefs.java | 5 +
 src/java/org/apache/cassandra/cql3/KSPropDefs.java | 5 +
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 22a121e..5086440 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+1.2.14
+ * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+
 1.2.13
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
  * Randomize batchlog candidates selection (CASSANDRA-6481)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/src/java/org/apache/cassandra/cql3/CFPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/CFPropDefs.java 
b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
index 8ad29fd..d60b60c 100644
--- a/src/java/org/apache/cassandra/cql3/CFPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
@@ -76,6 +76,11 @@ public class CFPropDefs extends PropertyDefinitions
 
 public void validate() throws ConfigurationException, SyntaxException
 {
+// Skip validation if the compaction strategy class is already set, as it means we've already
+// prepared (and redoing it would set strategyClass back to null, which we don't want)
+if (compactionStrategyClass != null)
+return;
+
 validate(keywords, obsoleteKeywords);
 
 Map<String, String> compactionOptions = getCompactionOptions();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/079f1e81/src/java/org/apache/cassandra/cql3/KSPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/KSPropDefs.java 
b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
index 70df622..e2b0de8 100644
--- a/src/java/org/apache/cassandra/cql3/KSPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/KSPropDefs.java
@@ -44,6 +44,11 @@ public class KSPropDefs extends PropertyDefinitions
 
 public void validate() throws ConfigurationException, SyntaxException
 {
+// Skip validation if the strategy class is already set, as it means we've already
+// prepared (and redoing it would set strategyClass back to null, which we don't want)
+if (strategyClass != null)
+return;
+
 validate(keywords, obsoleteKeywords);
 
 Map<String, String> replicationOptions = getReplicationOptions();



[3/5] git commit: Fix infinite loop when paging queries with IN

2013-12-18 Thread slebresne
Fix infinite loop when paging queries with IN

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6464


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c32ffbb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c32ffbb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c32ffbb

Branch: refs/heads/trunk
Commit: 7c32ffbbfae9959edc89ec5fcf9fced1b75c495b
Parents: f7255b5
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 11:18:30 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 11:18:30 2013 +0100

--
 CHANGES.txt |  1 +
 .../service/pager/AbstractQueryPager.java   |  6 +-
 .../service/pager/MultiPartitionPager.java  | 89 +---
 .../service/pager/NamesQueryPager.java  |  5 +-
 .../cassandra/service/pager/QueryPagers.java|  5 +-
 .../service/pager/SinglePartitionPager.java |  3 +
 .../service/pager/SliceQueryPager.java  |  5 ++
 7 files changed, 76 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c32ffbb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 80ed481..5a124ab 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Expose a total memtable size metric for a CF (CASSANDRA-6391)
  * cqlsh: handle symlinks properly (CASSANDRA-6425)
  * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
+ * Fix potential infinite loop when paging query with IN (CASSANDRA-6464)
 Merged from 1.2:
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
  * Randomize batchlog candidates selection (CASSANDRA-6481)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c32ffbb/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
--
diff --git 
a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java 
b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
index 9372665..6f6772c 100644
--- a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
+++ b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
@@ -40,9 +40,9 @@ abstract class AbstractQueryPager implements QueryPager
 protected final IDiskAtomFilter columnFilter;
 private final long timestamp;
 
-private volatile int remaining;
-private volatile boolean exhausted;
-private volatile boolean lastWasRecorded;
+private int remaining;
+private boolean exhausted;
+private boolean lastWasRecorded;
 
 protected AbstractQueryPager(ConsistencyLevel consistencyLevel,
  int toFetch,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c32ffbb/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
--
diff --git 
a/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java 
b/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
index 2615e9b..35d6752 100644
--- a/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
+++ b/src/java/org/apache/cassandra/service/pager/MultiPartitionPager.java
@@ -43,44 +43,72 @@ class MultiPartitionPager implements QueryPager
 private final SinglePartitionPager[] pagers;
 private final long timestamp;
 
-private volatile int current;
-
-MultiPartitionPager(List<ReadCommand> commands, ConsistencyLevel consistencyLevel, boolean localQuery)
-{
-this(commands, consistencyLevel, localQuery, null);
-}
+private int remaining;
+private int current;
 
 MultiPartitionPager(List<ReadCommand> commands, ConsistencyLevel consistencyLevel, boolean localQuery, PagingState state)
 {
-this.pagers = new SinglePartitionPager[commands.size()];
+int i = 0;
+// If it's not the beginning (state != null), we need to find where we 
were and skip previous commands
+// since they are done.
+if (state != null)
+for (; i < commands.size(); i++)
+if (commands.get(i).key.equals(state.partitionKey))
+break;
+
+if (i >= commands.size())
+{
+pagers = null;
+timestamp = -1;
+return;
+}
+
+pagers = new SinglePartitionPager[commands.size() - i];
+// 'i' is on the first non exhausted pager for the previous page (or 
the first one)
+pagers[0] = makePager(commands.get(i), consistencyLevel, localQuery, 
state);
+timestamp = commands.get(i).timestamp;
 
-long tstamp = -1;
-for (int i = 0; i < 

[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center

2013-12-18 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851561#comment-13851561
 ] 

Lyuben Todorov commented on CASSANDRA-6157:
---

The patch does what it says; however, there are some nits:

In DatabaseDescriptor#setHintedHandoffEnabledOverride, why do you create a new 
map and set {{conf.hinted_handoff_enabled_override_by_dc}} to said new map 
instead of just adding the key and value directly to 
{{conf.hinted_handoff_enabled_override_by_dc}}?

Code formatting:
{noformat}
// braces should be on a new line
1. StorageProxy.shouldHint(...)
  if(!DatabaseDescriptor.shouldHintForDC(dc)) {
2. NodeProbe.enableHintedHandoff(...) {
3. NodeProbe.disableHintedHandoff(...) {
{noformat}

HintedHandoffEnabledOverride seems somewhat confusing; could we possibly change 
it to HintedHandoffPerDC or HintedHandoffDCOverride? ([~jbellis] WDYT?)
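
For the first nit, a hypothetical sketch of the two alternatives (field and 
method names are assumptions modelled on the comment, not the actual patch):
{code}
import java.util.HashMap;
import java.util.Map;

class HintOverrideSketch
{
    static Map<String, Boolean> hintedHandoffEnabledOverrideByDc = new HashMap<String, Boolean>();

    // What the patch apparently does: copy the map, add the entry, swap the reference.
    static void setOverrideByCopy(String dc, boolean enabled)
    {
        Map<String, Boolean> fresh = new HashMap<String, Boolean>(hintedHandoffEnabledOverrideByDc);
        fresh.put(dc, enabled);
        hintedHandoffEnabledOverrideByDc = fresh;
    }

    // The suggested alternative: add the key and value directly.
    static void setOverrideInPlace(String dc, boolean enabled)
    {
        hintedHandoffEnabledOverrideByDc.put(dc, enabled);
    }

    // The kind of check StorageProxy.shouldHint(...) consults: hint unless
    // the data center has been explicitly disabled.
    static boolean shouldHintForDC(String dc)
    {
        Boolean override = hintedHandoffEnabledOverrideByDc.get(dc);
        return override == null || override.booleanValue();
    }
}
{code}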

 Selectively Disable hinted handoff for a data center
 

 Key: CASSANDRA-6157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6157
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 2.0.4

 Attachments: trunk-6157.txt


 Cassandra supports disabling the hints or reducing the window for hints. 
 It would be helpful to have a switch which stops hints to a down data center 
 but continues hints to other DCs.
 This is helpful during data center failover, as hints will put more 
 unnecessary pressure on the DC taking double traffic. Also, since Cassandra 
 is now under reduced redundancy, we don't want to disable hints within 
 the DC. 





[jira] [Commented] (CASSANDRA-5872) Bundle JNA

2013-12-18 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851573#comment-13851573
 ] 

Lyuben Todorov commented on CASSANDRA-5872:
---

I added a [branch|https://github.com/lyubent/cassandra/tree/5872] with the JNA 
libs under the Apache License, Version 2.

 Bundle JNA
 --

 Key: CASSANDRA-5872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5872
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.1


 JNA 4.0 is reported to be dual-licensed LGPL/APL.





[jira] [Created] (CASSANDRA-6501) Cannot run pig examples on current 2.0 branch

2013-12-18 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-6501:
---

 Summary: Cannot run pig examples on current 2.0 branch
 Key: CASSANDRA-6501
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6501
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Jeremy Hanna


I checked out the cassandra-2.0 branch to try the pig examples because the 
2.0.3 release has the CASSANDRA-6309 problem which is fixed on the branch.  I 
tried to run both the cql and the CassandraStorage examples in local mode with 
pig 0.10.1, 0.11.1, and 0.12.0 and all of them give the following error and 
stack trace:

{quote}
ERROR 2998: Unhandled internal error. readLength_

java.lang.NoSuchFieldError: readLength_
at 
org.apache.cassandra.thrift.TBinaryProtocol$Factory.getProtocol(TBinaryProtocol.java:57)
at org.apache.thrift.TSerializer.<init>(TSerializer.java:66)
at 
org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cfdefToString(AbstractCassandraStorage.java:508)
at 
org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.initSchema(AbstractCassandraStorage.java:470)
at 
org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:318)
at 
org.apache.cassandra.hadoop.pig.CassandraStorage.getSchema(CassandraStorage.java:357)
at 
org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:151)
at 
org.apache.pig.newplan.logical.relational.LOLoad.getSchema(LOLoad.java:110)
at 
org.apache.pig.parser.LogicalPlanGenerator.alias_col_ref(LogicalPlanGenerator.java:15356)
at 
org.apache.pig.parser.LogicalPlanGenerator.col_ref(LogicalPlanGenerator.java:15203)
at 
org.apache.pig.parser.LogicalPlanGenerator.projectable_expr(LogicalPlanGenerator.java:8881)
at 
org.apache.pig.parser.LogicalPlanGenerator.var_expr(LogicalPlanGenerator.java:8632)
at 
org.apache.pig.parser.LogicalPlanGenerator.expr(LogicalPlanGenerator.java:7984)
at 
org.apache.pig.parser.LogicalPlanGenerator.flatten_generated_item(LogicalPlanGenerator.java:5962)
at 
org.apache.pig.parser.LogicalPlanGenerator.generate_clause(LogicalPlanGenerator.java:14101)
at 
org.apache.pig.parser.LogicalPlanGenerator.foreach_plan(LogicalPlanGenerator.java:12493)
at 
org.apache.pig.parser.LogicalPlanGenerator.foreach_clause(LogicalPlanGenerator.java:12360)
at 
org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1577)
at 
org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
at 
org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
at 
org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
at 
org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540)
at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
at 
org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
at 
org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:555)
at org.apache.pig.Main.main(Main.java:111)

{quote}





[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-12-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851649#comment-13851649
 ] 

Sylvain Lebresne commented on CASSANDRA-5745:
-

bq. A pretty simple tweak we could make would be to allow tombstone compactions 
to include L+1 overlaps

If you meant that we'd include only the L+1 overlaps that meet the criterion of 
having more purgeable tombstones than the threshold, i.e. that we'd basically 
just make tombstone compaction better at purging tombstones, then I'm 
definitely for it. That's kind of what I intended by 'add one more compaction 
heuristic like the existing try compacting an sstable if it has more than X% 
gcable stuff and you have nothing better to do' (though it's definitely a 
simpler thing to add than the heuristic I described) :).

If you meant we'd include all L+1 overlaps no matter what, I am worried 
about the added overhead in general.

 Minor compaction tombstone-removal deadlock
 ---

 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
 Fix For: 2.0.4


 From a discussion with Axel Liljencrantz,
 If you have two SSTables that have temporally overlapping data, you can get 
 lodged into a state where a compaction of SSTable A can't drop tombstones 
 because SSTable B contains older data *and vice versa*. Once that's happened, 
 Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
 with tombstone removal. The only way to break the wedge would be to perform a 
 compaction containing both SSTable A and SSTable B. 
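 To make the wedge concrete, an illustrative sketch (types, names and 
 timestamps invented for illustration, not Cassandra's compaction code): a 
 tombstone is only safely droppable when no overlapping sstable holds older 
 data, and that check can fail for A and B simultaneously:
{code}
// Each sstable is reduced to its oldest live data and newest gcable tombstone.
class SSTableInfo
{
    final long minDataTimestamp;
    final long maxTombstoneTimestamp;

    SSTableInfo(long minDataTimestamp, long maxTombstoneTimestamp)
    {
        this.minDataTimestamp = minDataTimestamp;
        this.maxTombstoneTimestamp = maxTombstoneTimestamp;
    }

    // Purging tombstones from 'this' is only safe if the overlapping sstable
    // holds no data older than our newest tombstone.
    boolean canPurgeAgainst(SSTableInfo overlapping)
    {
        return maxTombstoneTimestamp < overlapping.minDataTimestamp;
    }

    public static void main(String[] args)
    {
        SSTableInfo a = new SSTableInfo(5, 20);  // data@5, tombstone@20
        SSTableInfo b = new SSTableInfo(10, 30); // data@10, tombstone@30
        System.out.println(a.canPurgeAgainst(b)); // false: B's data@10 is older than A's tombstone@20
        System.out.println(b.canPurgeAgainst(a)); // false: A's data@5 is older than B's tombstone@30
        // Neither single-sstable compaction makes progress; only a compaction
        // containing both A and B can drop the tombstones.
    }
}
{code}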





git commit: Don't send confusing info on timeouts

2013-12-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 079f1e811 -> b73178d86


Don't send confusing info on timeouts

patch by slebresne; reviewed by jbellis for CASSANDRA-6491


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b73178d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b73178d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b73178d8

Branch: refs/heads/cassandra-1.2
Commit: b73178d8626ec7fb404c8ded442ecff23192f14f
Parents: 079f1e8
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 13:48:26 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 13:48:26 2013 +0100

--
 CHANGES.txt  |  1 +
 .../cassandra/service/AbstractWriteResponseHandler.java  | 11 ++-
 src/java/org/apache/cassandra/service/ReadCallback.java  |  8 +++-
 src/java/org/apache/cassandra/service/StorageProxy.java  |  4 ++--
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5086440..a1514d0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.14
  * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+ * Don't send confusing info with timeouts (CASSANDRA-6491)
 
 1.2.13
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
--
diff --git 
a/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java 
b/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
index 4df9e1f..3cd853f 100644
--- a/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
+++ b/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
@@ -88,7 +88,16 @@ public abstract class AbstractWriteResponseHandler 
implements IAsyncCallback
 }
 
 if (!success)
-throw new WriteTimeoutException(writeType, consistencyLevel, 
ackCount(), totalBlockFor());
+{
+int acks = ackCount();
+int blockedFor = totalBlockFor();
+// It's pretty unlikely, but we can race between exiting await above and here, so
+// that we could now have enough acks. In that case, we lie on the acks count to
+// avoid sending confusing info to the user (see CASSANDRA-6491).
+if (acks >= blockedFor)
+acks = blockedFor - 1;
+throw new WriteTimeoutException(writeType, consistencyLevel, acks, blockedFor);
+}
 }
 
 protected int totalBlockFor()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java 
b/src/java/org/apache/cassandra/service/ReadCallback.java
index 64b9e76..7889039 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -97,7 +97,13 @@ public class ReadCallback<TMessage, TResolved> implements IAsyncCallback<TMessage>
         }
 
         if (!success)
-            throw new ReadTimeoutException(consistencyLevel, received.get(), blockfor, resolver.isDataPresent());
+        {
+            // Same as for writes, see AbstractWriteResponseHandler
+            int acks = received.get();
+            if (resolver.isDataPresent() && acks >= blockfor)
+                acks = blockfor - 1;
+            throw new ReadTimeoutException(consistencyLevel, acks, blockfor, resolver.isDataPresent());
+        }
 
         return blockfor == 1 ? resolver.getData() : resolver.resolve();
    }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index d49e59d..e6cd755 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1013,7 +1013,7 @@ public class StorageProxy implements StorageProxyMBean
         {
             Tracing.trace("Timed out on digest mismatch retries");
             int blockFor = consistency_level.blockFor(Table.open(command.getKeyspace()));
-            throw new 

[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/ReadCallback.java
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1727ea77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1727ea77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1727ea77

Branch: refs/heads/cassandra-2.0
Commit: 1727ea773324b9a8afd41b5d5d238aee1dd8f441
Parents: 7c32ffb b73178d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 13:51:23 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 13:51:23 2013 +0100

--
 CHANGES.txt  |  1 +
 .../cassandra/service/AbstractWriteResponseHandler.java  | 11 ++-
 src/java/org/apache/cassandra/service/ReadCallback.java  |  4 
 src/java/org/apache/cassandra/service/StorageProxy.java  |  4 ++--
 4 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1727ea77/CHANGES.txt
--
diff --cc CHANGES.txt
index 5a124ab,a1514d0..10c9c33
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,46 -23,9 +30,47 @@@ Merged from 1.2
 (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
 + * Allow executing CREATE statements multiple times (CASSANDRA-6471)
++ * Don't send confusing info with timeouts (CASSANDRA-6491)
  
  
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
 + * Set minTimestamp correctly to be able to drop expired sstables 
(CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely 
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1727ea77/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1727ea77/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --cc src/java/org/apache/cassandra/service/ReadCallback.java
index d4cc7f5,7889039..d665242
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@@ -89,16 -95,14 +89,20 @@@ public class ReadCallback<TMessage, TRe
  {
  throw new AssertionError(ex);
  }
 +}
  
 

[1/2] git commit: Don't send confusing info on timeouts

2013-12-18 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 7c32ffbbf -> 1727ea773


Don't send confusing info on timeouts

patch by slebresne; reviewed by jbellis for CASSANDRA-6491


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b73178d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b73178d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b73178d8

Branch: refs/heads/cassandra-2.0
Commit: b73178d8626ec7fb404c8ded442ecff23192f14f
Parents: 079f1e8
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 13:48:26 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 13:48:26 2013 +0100

--
 CHANGES.txt  |  1 +
 .../cassandra/service/AbstractWriteResponseHandler.java  | 11 ++-
 src/java/org/apache/cassandra/service/ReadCallback.java  |  8 +++-
 src/java/org/apache/cassandra/service/StorageProxy.java  |  4 ++--
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5086440..a1514d0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.14
  * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+ * Don't send confusing info with timeouts (CASSANDRA-6491)
 
 1.2.13
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
--
diff --git 
a/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java 
b/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
index 4df9e1f..3cd853f 100644
--- a/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
+++ b/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
@@ -88,7 +88,16 @@ public abstract class AbstractWriteResponseHandler implements IAsyncCallback
         }
 
         if (!success)
-            throw new WriteTimeoutException(writeType, consistencyLevel, ackCount(), totalBlockFor());
+        {
+            int acks = ackCount();
+            int blockedFor = totalBlockFor();
+            // It's pretty unlikely, but we can race between exiting await above and here, so
+            // that we could now have enough acks. In that case, we lie on the acks count to
+            // avoid sending confusing info to the user (see CASSANDRA-6491).
+            if (acks >= blockedFor)
+                acks = blockedFor - 1;
+            throw new WriteTimeoutException(writeType, consistencyLevel, acks, blockedFor);
+        }
     }
 
     protected int totalBlockFor()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java 
b/src/java/org/apache/cassandra/service/ReadCallback.java
index 64b9e76..7889039 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -97,7 +97,13 @@ public class ReadCallback<TMessage, TResolved> implements IAsyncCallback<TMessage>
         }
 
         if (!success)
-            throw new ReadTimeoutException(consistencyLevel, received.get(), blockfor, resolver.isDataPresent());
+        {
+            // Same as for writes, see AbstractWriteResponseHandler
+            int acks = received.get();
+            if (resolver.isDataPresent() && acks >= blockfor)
+                acks = blockfor - 1;
+            throw new ReadTimeoutException(consistencyLevel, acks, blockfor, resolver.isDataPresent());
+        }
 
         return blockfor == 1 ? resolver.getData() : resolver.resolve();
    }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index d49e59d..e6cd755 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1013,7 +1013,7 @@ public class StorageProxy implements StorageProxyMBean
         {
             Tracing.trace("Timed out on digest mismatch retries");
             int blockFor = consistency_level.blockFor(Table.open(command.getKeyspace()));
-            throw new 

[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-18 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d365faab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d365faab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d365faab

Branch: refs/heads/trunk
Commit: d365faab07f14d436bf827b3cf6dae5d25e4c9c1
Parents: 1a5ebd6 1727ea7
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 13:51:48 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 13:51:48 2013 +0100

--
 CHANGES.txt  |  1 +
 .../cassandra/service/AbstractWriteResponseHandler.java  | 11 ++-
 src/java/org/apache/cassandra/service/ReadCallback.java  |  4 
 src/java/org/apache/cassandra/service/StorageProxy.java  |  4 ++--
 4 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d365faab/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d365faab/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index 6f362db,2da9d38..e4741f6
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -1664,33 -1577,17 +1664,33 @@@ public class StorageProxy implements St
                          Tracing.trace("Timed out while read-repairing after receiving all {} data and digest responses", blockFor);
                      else
                          logger.debug("Range slice timeout while read-repairing after receiving all {} data and digest responses", blockFor);
-                     throw new ReadTimeoutException(consistency_level, blockFor, blockFor, true);
+                     throw new ReadTimeoutException(consistency_level, blockFor-1, blockFor, true);
                  }
 -                catch (DigestMismatchException e)
 +
 +                if (haveSufficientRows)
 +                    return trim(command, rows);
 +
 +                // we didn't get enough rows in our concurrent fetch; recalculate our concurrency factor
 +                // based on the results we've seen so far (as long as we still have ranges left to query)
 +                if (i < ranges.size())
                  {
 -                    throw new AssertionError(e); // no digests in range slices yet
 +                    float fetchedRows = command.countCQL3Rows() ? cql3RowCount : rows.size();
 +                    float remainingRows = command.limit() - fetchedRows;
 +                    float actualRowsPerRange;
 +                    if (fetchedRows == 0.0)
 +                    {
 +                        // we haven't actually gotten any results, so query all remaining ranges at once
 +                        actualRowsPerRange = 0.0f;
 +                        concurrencyFactor = ranges.size() - i;
 +                    }
 +                    else
 +                    {
 +                        actualRowsPerRange = i / fetchedRows;
 +                        concurrencyFactor = Math.max(1, Math.min(ranges.size() - i, Math.round(remainingRows / actualRowsPerRange)));
 +                    }
 +                    logger.debug("Didn't get enough response rows; actual rows per range: {}; remaining rows: {}, new concurrent requests: {}", actualRowsPerRange, (int) remainingRows, concurrencyFactor);
                  }
 -
 -                // if we're done, great, otherwise, move to the next range
 -                int count = nodeCmd.countCQL3Rows() ? cql3RowCount : rows.size();
 -                if (count >= nodeCmd.limit())
 -                    break;
              }
          }
          finally



[1/3] git commit: Don't send confusing info on timeouts

2013-12-18 Thread slebresne
Updated Branches:
  refs/heads/trunk 1a5ebd6a7 -> d365faab0


Don't send confusing info on timeouts

patch by slebresne; reviewed by jbellis for CASSANDRA-6491


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b73178d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b73178d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b73178d8

Branch: refs/heads/trunk
Commit: b73178d8626ec7fb404c8ded442ecff23192f14f
Parents: 079f1e8
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 13:48:26 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 13:48:26 2013 +0100

--
 CHANGES.txt  |  1 +
 .../cassandra/service/AbstractWriteResponseHandler.java  | 11 ++-
 src/java/org/apache/cassandra/service/ReadCallback.java  |  8 +++-
 src/java/org/apache/cassandra/service/StorageProxy.java  |  4 ++--
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5086440..a1514d0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.14
  * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+ * Don't send confusing info with timeouts (CASSANDRA-6491)
 
 1.2.13
  * Improved error message on bad properties in DDL queries (CASSANDRA-6453)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
--
diff --git 
a/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java 
b/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
index 4df9e1f..3cd853f 100644
--- a/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
+++ b/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
@@ -88,7 +88,16 @@ public abstract class AbstractWriteResponseHandler implements IAsyncCallback
         }
 
         if (!success)
-            throw new WriteTimeoutException(writeType, consistencyLevel, ackCount(), totalBlockFor());
+        {
+            int acks = ackCount();
+            int blockedFor = totalBlockFor();
+            // It's pretty unlikely, but we can race between exiting await above and here, so
+            // that we could now have enough acks. In that case, we lie on the acks count to
+            // avoid sending confusing info to the user (see CASSANDRA-6491).
+            if (acks >= blockedFor)
+                acks = blockedFor - 1;
+            throw new WriteTimeoutException(writeType, consistencyLevel, acks, blockedFor);
+        }
     }
 
     protected int totalBlockFor()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java 
b/src/java/org/apache/cassandra/service/ReadCallback.java
index 64b9e76..7889039 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -97,7 +97,13 @@ public class ReadCallback<TMessage, TResolved> implements IAsyncCallback<TMessage>
         }
 
         if (!success)
-            throw new ReadTimeoutException(consistencyLevel, received.get(), blockfor, resolver.isDataPresent());
+        {
+            // Same as for writes, see AbstractWriteResponseHandler
+            int acks = received.get();
+            if (resolver.isDataPresent() && acks >= blockfor)
+                acks = blockfor - 1;
+            throw new ReadTimeoutException(consistencyLevel, acks, blockfor, resolver.isDataPresent());
+        }
 
         return blockfor == 1 ? resolver.getData() : resolver.resolve();
    }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b73178d8/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index d49e59d..e6cd755 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1013,7 +1013,7 @@ public class StorageProxy implements StorageProxyMBean
         {
             Tracing.trace("Timed out on digest mismatch retries");
             int blockFor = consistency_level.blockFor(Table.open(command.getKeyspace()));
-            throw new 

[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-18 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/ReadCallback.java
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1727ea77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1727ea77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1727ea77

Branch: refs/heads/trunk
Commit: 1727ea773324b9a8afd41b5d5d238aee1dd8f441
Parents: 7c32ffb b73178d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 18 13:51:23 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 18 13:51:23 2013 +0100

--
 CHANGES.txt  |  1 +
 .../cassandra/service/AbstractWriteResponseHandler.java  | 11 ++-
 src/java/org/apache/cassandra/service/ReadCallback.java  |  4 
 src/java/org/apache/cassandra/service/StorageProxy.java  |  4 ++--
 4 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1727ea77/CHANGES.txt
--
diff --cc CHANGES.txt
index 5a124ab,a1514d0..10c9c33
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,46 -23,9 +30,47 @@@ Merged from 1.2
 (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
 + * Allow executing CREATE statements multiple times (CASSANDRA-6471)
++ * Don't send confusing info with timeouts (CASSANDRA-6491)
  
  
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
 + * Set minTimestamp correctly to be able to drop expired sstables 
(CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely 
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1727ea77/src/java/org/apache/cassandra/service/AbstractWriteResponseHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1727ea77/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --cc src/java/org/apache/cassandra/service/ReadCallback.java
index d4cc7f5,7889039..d665242
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@@ -89,16 -95,14 +89,20 @@@ public class ReadCallback<TMessage, TRe
  {
  throw new AssertionError(ex);
  }
 +}
  
 -

[jira] [Commented] (CASSANDRA-6480) Custom secondary index options in CQL3

2013-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851665#comment-13851665
 ] 

Andrés de la Peña commented on CASSANDRA-6480:
--

[https://github.com/Stratio/cassandra/commit/40ea80da906aaa5ec752fdb036fb4e9de7249409]
 
[https://github.com/Stratio/cassandra/commit/40ea80da906aaa5ec752fdb036fb4e9de7249409.patch]

We have changed the CREATE CUSTOM INDEX syntax to:
{code}
CREATE CUSTOM INDEX ON users (spanish_text) USING 'org.stratio.FullTextIndex' 
WITH
{'analyzer': 'SpanishAnalyzer', 'storage':'/mnt/ssd/indexes/'};
{code}
Furthermore, it also accepts:
{code}
CREATE CUSTOM INDEX ON users (spanish_text) WITH 
{'class_name' : 'org.stratio.FullTextIndex', 'analyzer': 'SpanishAnalyzer', 
'storage':'/mnt/ssd/indexes/'};
{code}
And if no options are required:
{code}
CREATE CUSTOM INDEX ON users (spanish_text) USING 'org.stratio.FullTextIndex';
{code}
We have tried to maintain compatibility with the current syntax. 

 Custom secondary index options in CQL3
 --

 Key: CASSANDRA-6480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6480
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Andrés de la Peña
Priority: Minor
  Labels: cql3, index

 The CQL3 CREATE INDEX statement syntax does not allow specifying the 
 options map internally used by custom indexes. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6480) Custom secondary index options in CQL3

2013-12-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6480:
-

Reviewer: Aleksey Yeschenko

 Custom secondary index options in CQL3
 --

 Key: CASSANDRA-6480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6480
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Andrés de la Peña
Priority: Minor
  Labels: cql3, index

 The CQL3 CREATE INDEX statement syntax does not allow specifying the 
 options map internally used by custom indexes. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (CASSANDRA-5872) Bundle JNA

2013-12-18 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851573#comment-13851573
 ] 

Lyuben Todorov edited comment on CASSANDRA-5872 at 12/18/13 2:11 PM:
-

I added a [branch|https://github.com/lyubent/cassandra/tree/5872] with the JNA 
libs under the Apache License Version 2.


was (Author: lyubent):
I added a [https://github.com/lyubent/cassandra/tree/5872|branch] with the JNA 
libs under the Apache License Version 2.

 Bundle JNA
 --

 Key: CASSANDRA-5872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5872
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.1


 JNA 4.0 is reported to be dual-licensed LGPL/APL.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6502) Caused by: InvalidRequestException(why:Invalid restrictions)

2013-12-18 Thread Akshay DM (JIRA)
Akshay DM created CASSANDRA-6502:


 Summary: Caused by: InvalidRequestException(why:Invalid 
restrictions)
 Key: CASSANDRA-6502
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6502
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Hadoop
 Environment: CentOS release 6.4
Reporter: Akshay DM


I am using Cassandra 1.2.10 and have a primary key column of timestamp 
datatype. I am trying to retrieve data for date ranges. Since we know we can't 
use a BETWEEN clause, we are using greater than (>) and less than (<) to get 
the date ranges. This seems to work perfectly in Cassandra's cqlsh, but with 
pig_cassandra integration it throws an error. Here is the load function.

{quote}
filteredData = LOAD 
'cql://keyspace/columnfamily?where_clause=time1%3E1357054841000590+and+time1%3C1357121822000430'
 USING org.apache.cassandra.hadoop.pig.CqlStorage();
{quote}
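
(For reference, the where_clause parameter is just the URL-encoded CQL 
predicate; a minimal, self-contained sketch of producing it with the JDK's 
java.net.URLEncoder -- this is illustrative tooling, not part of CqlStorage 
itself:)
{code}
import java.net.URLEncoder;

public class WhereClauseEncoder
{
    public static void main(String[] args) throws Exception
    {
        // '>' becomes %3E, '<' becomes %3C and spaces become '+',
        // matching the where_clause in the LOAD url above
        String predicate = "time1>1357054841000590 and time1<1357121822000430";
        System.out.println("where_clause=" + URLEncoder.encode(predicate, "UTF-8"));
    }
}
{code}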

Here is the error it throws:

{quote}
2013-12-18 04:32:51,196 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 
2997: Unable to recreate exception from backed error: java.lang.RuntimeException
at 
org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:651)
at 
org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:352)
at 
org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:275)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.getProgress(CqlPagingRecordReader.java:181)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.getProgress(PigRecordReader.java:169)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:514)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:539)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: InvalidRequestException(why:Invalid restrictions found on time1)
at 
org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
at 
org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
at 
org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
at 
org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:621)
... 17 more
{quote}




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6502) Caused by: InvalidRequestException(why:Invalid restrictions)

2013-12-18 Thread Akshay DM (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay DM updated CASSANDRA-6502:
-

Reviewer: Alex Liu

 Caused by: InvalidRequestException(why:Invalid restrictions)
 

 Key: CASSANDRA-6502
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6502
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Hadoop
 Environment: CentOS release 6.4
Reporter: Akshay DM

 I am using Cassandra 1.2.10 and have a primary key column of timestamp 
 datatype. I am trying to retrieve data for date ranges. Since we know we 
 can't use a BETWEEN clause, we are using greater than (>) and less than (<) 
 to get the date ranges. This seems to work perfectly in Cassandra's cqlsh, 
 but with pig_cassandra integration it throws an error. 
 Here is the load function.
 {quote}
 filteredData = LOAD 
 'cql://keyspace/columnfamily?where_clause=time1%3E1357054841000590+and+time1%3C1357121822000430'
  USING org.apache.cassandra.hadoop.pig.CqlStorage();
 {quote}
 Here is the error it throws:
 {quote}
 2013-12-18 04:32:51,196 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 
 2997: Unable to recreate exception from backed error: 
 java.lang.RuntimeException
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:651)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:352)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:275)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.getProgress(CqlPagingRecordReader.java:181)
 at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.getProgress(PigRecordReader.java:169)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:514)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:539)
 at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: InvalidRequestException(why:Invalid restrictions found on time1)
 at 
 org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:621)
 ... 17 more
 {quote}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6502) Caused by: InvalidRequestException(why:Invalid restrictions)

2013-12-18 Thread Akshay DM (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay DM updated CASSANDRA-6502:
-

Reviewer:   (was: Alex Liu)

 Caused by: InvalidRequestException(why:Invalid restrictions)
 

 Key: CASSANDRA-6502
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6502
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Hadoop
 Environment: CentOS release 6.4
Reporter: Akshay DM

 I am using Cassandra 1.2.10 and have a primary key column of timestamp 
 datatype. I am trying to retrieve data for date ranges. Since we know we 
 can't use a BETWEEN clause, we are using greater than (>) and less than (<) 
 to get the date ranges. This seems to work perfectly in Cassandra's cqlsh, 
 but with pig_cassandra integration it throws an error. 
 Here is the load function.
 {quote}
 filteredData = LOAD 
 'cql://keyspace/columnfamily?where_clause=time1%3E1357054841000590+and+time1%3C1357121822000430'
  USING org.apache.cassandra.hadoop.pig.CqlStorage();
 {quote}
 Here is the error it throws:
 {quote}
 2013-12-18 04:32:51,196 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 
 2997: Unable to recreate exception from backed error: 
 java.lang.RuntimeException
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:651)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:352)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:275)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.getProgress(CqlPagingRecordReader.java:181)
 at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.getProgress(PigRecordReader.java:169)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:514)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:539)
 at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: InvalidRequestException(why:Invalid restrictions found on time1)
 at 
 org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:621)
 ... 17 more
 {quote}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-12-18 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851902#comment-13851902
 ] 

Nick Bailey commented on CASSANDRA-5745:


We're seeing this on STCS as well. Any ideas for how to handle it there? For 
our specific use case, special-casing column families with a gc grace of 0 to 
ignore the overlap check would work.
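
(A minimal sketch of what that special case might look like; shouldPurge and 
overlapsContainOlderData are hypothetical names standing in for the real 
purge decision in the compaction code, not the actual API:)
{code}
// Sketch of the proposed special case: with gc_grace_seconds == 0 we would
// skip the overlapping-sstable check entirely and always allow purging.
public boolean shouldPurge(DecoratedKey key)
{
    if (cfs.metadata.getGcGraceSeconds() == 0)
        return true; // proposed: ignore the overlap check when gc grace is 0
    // current behavior: only purge when no overlapping sstable holds older
    // data for this key
    return !overlapsContainOlderData(key);
}
{code}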

 Minor compaction tombstone-removal deadlock
 ---

 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
 Fix For: 2.0.4


 From a discussion with Axel Liljencrantz,
 If you have two SSTables that have temporally overlapping data, you can get 
 lodged into a state where a compaction of SSTable A can't drop tombstones 
 because SSTable B contains older data *and vice versa*. Once that's happened, 
 Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
 with tombstone removal. The only way to break the wedge would be to perform a 
 compaction containing both SSTable A and SSTable B. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6493) Exceptions when a second Datacenter is Added

2013-12-18 Thread Russell Alexander Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851904#comment-13851904
 ] 

Russell Alexander Spitzer commented on CASSANDRA-6493:
--

Fixed

 Exceptions when a second Datacenter is Added
 

 Key: CASSANDRA-6493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu, EC2 M1.large
Reporter: Russell Alexander Spitzer

 On adding a second datacenter several exceptions were raised.
 Test outline:
 Start 25 Node DC1
 Keyspace Setup Replication 3
 Begin insert against DC1 Using Stress
 While the inserts are occurring
 Start up 25 Node DC2
 Alter Keyspace to include Replication in 2nd DC
 Run rebuild on DC2
 Wait for stress to finish
 Run repair on Cluster
 ... Some other operations
 At the point when the second datacenter is added, several warnings go off 
 because nodetool status is not functioning, and a few moments later the start 
 operation reports a failure because a node has not successfully come up. 
 The first start attempt yielded the following exception on a node in the 
 second DC.
 {code}
 CassandraDaemon.java (line 464) Exception encountered during startup
 java.lang.AssertionError: -7560216458456714666 not found in 
 -9222060278673125462, -9220751250790085193, . ALL THE TOKENS ...,  
 9218575851928340117, 9219681798686280387
 at 
 org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:752)
 at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:696)
 at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangeFor(TokenMetadata.java:703)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getRangeAddresses(AbstractReplicationStrategy.java:187)
 at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithSourcesFor(RangeStreamer.java:147)
 at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:121)
 at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:81)
 at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:979)
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:745)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:586)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:348)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:447)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:490)
 {code}
 The test automatically tries to restart nodes if they fail during startup. 
 The second attempt for this node succeeded, but 'nodetool status' still 
 failed and a different node in the second DC logged the following and failed 
 to start up.
 {code}
 ERROR [main] 2013-12-16 18:02:04,869 CassandraDaemon.java (line 464) 
 Exception encountered during startup
 java.util.ConcurrentModificationException
   at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1115)
   at java.util.TreeMap$KeyIterator.next(TreeMap.java:1169)
   at org.apache.commons.lang.StringUtils.join(StringUtils.java:3382)
   at org.apache.commons.lang.StringUtils.join(StringUtils.java:3444)
   at 
 org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:752)
   at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:696)
   at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangeFor(TokenMetadata.java:703)
   at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getRangeAddresses(AbstractReplicationStrategy.java:187)
   at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithSourcesFor(RangeStreamer.java:147)
   at 
 org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:121)
   at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:81)
   at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:979)
   at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:745)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:586)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:348)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:447)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:490)
 ERROR [StorageServiceShutdownHook] 2013-12-16 18:02:04,876 
 CassandraDaemon.java (line 

[jira] [Commented] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851914#comment-13851914
 ] 

Jonathan Ellis commented on CASSANDRA-6053:
---

[~enigmacurry] can you reproduce?

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it 
 queries the peers table to set up its connection pool, resulting in very slow 
 startup times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I would like some feedback on whether this was the right way to handle the 
 issue or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Resolved] (CASSANDRA-6493) Exceptions when a second Datacenter is Added

2013-12-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6493.
---

Resolution: Fixed

 Exceptions when a second Datacenter is Added
 

 Key: CASSANDRA-6493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu, EC2 M1.large
Reporter: Russell Alexander Spitzer

 On adding a second datacenter several exceptions were raised.
 Test outline:
 Start 25 Node DC1
 Keyspace Setup Replication 3
 Begin insert against DC1 Using Stress
 While the inserts are occurring
 Start up 25 Node DC2
 Alter Keyspace to include Replication in 2nd DC
 Run rebuild on DC2
 Wait for stress to finish
 Run repair on Cluster
 ... Some other operations
 At the point when the second datacenter is added, several warnings go off 
 because nodetool status is not functioning, and a few moments later the start 
 operation reports a failure because a node has not successfully come up. 
 The first start attempt yielded the following exception on a node in the 
 second DC.
 {code}
 CassandraDaemon.java (line 464) Exception encountered during startup
 java.lang.AssertionError: -7560216458456714666 not found in 
 -9222060278673125462, -9220751250790085193, . ALL THE TOKENS ...,  
 9218575851928340117, 9219681798686280387
 at 
 org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:752)
 at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:696)
 at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangeFor(TokenMetadata.java:703)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getRangeAddresses(AbstractReplicationStrategy.java:187)
 at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithSourcesFor(RangeStreamer.java:147)
 at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:121)
 at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:81)
 at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:979)
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:745)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:586)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:348)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:447)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:490)
 {code}
 The test automatically tries to restart nodes if they fail during startup. 
 The second attempt for this node succeeded, but 'nodetool status' still 
 failed and a different node in the second DC logged the following and failed 
 to start up.
 {code}
 ERROR [main] 2013-12-16 18:02:04,869 CassandraDaemon.java (line 464) 
 Exception encountered during startup
 java.util.ConcurrentModificationException
   at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1115)
   at java.util.TreeMap$KeyIterator.next(TreeMap.java:1169)
   at org.apache.commons.lang.StringUtils.join(StringUtils.java:3382)
   at org.apache.commons.lang.StringUtils.join(StringUtils.java:3444)
   at 
 org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:752)
   at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:696)
   at 
 org.apache.cassandra.locator.TokenMetadata.getPrimaryRangeFor(TokenMetadata.java:703)
   at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getRangeAddresses(AbstractReplicationStrategy.java:187)
   at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithSourcesFor(RangeStreamer.java:147)
   at 
 org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:121)
   at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:81)
   at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:979)
   at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:745)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:586)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:348)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:447)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:490)
 ERROR [StorageServiceShutdownHook] 2013-12-16 18:02:04,876 
 CassandraDaemon.java (line 191) Exception in thread 
 

[jira] [Commented] (CASSANDRA-6395) Add the ability to query by TimeUUIDs with millisecond granularity

2013-12-18 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851938#comment-13851938
 ] 

Tyler Hobbs commented on CASSANDRA-6395:


[~iamaleksey] it looks like the CQL version didn't get bumped for this change.  
The CQL3 language doc also needs to be updated to show the new format options.

 Add the ability to query by TimeUUIDs with millisecond granularity
 -

 Key: CASSANDRA-6395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6395
 Project: Cassandra
  Issue Type: New Feature
Reporter: Lorcan Coyle
Assignee: Lorcan Coyle
Priority: Minor
  Labels: lhf
 Fix For: 2.0.4


 Currently it is impossible to query for dates with the minTimeuuid and 
 maxTimeuuid functions with sub-second accuracy from cqlsh because the parser 
 doesn't recognise dates formatted at that granularity (e.g., 2013-09-30 
 22:19:06.591). By adding the following ISO8601 patterns to 
 TimestampSerializer this functionality is unlocked:
 yyyy-MM-dd HH:mm:ss.SSS,
 yyyy-MM-dd HH:mm:ss.SSSZ,
 yyyy-MM-dd'T'HH:mm:ss.SSS,
 yyyy-MM-dd'T'HH:mm:ss.SSSZ.
 I submitted this as a pull-request on the github mirror 
 (https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
 submit a patch to address this here.
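 (For illustration, a self-contained sketch of what the first new pattern 
 parses, assuming java.text.SimpleDateFormat semantics, which the iso8601 
 pattern strings follow; this is not code from the patch itself:)
 {code}
 import java.text.SimpleDateFormat;
 import java.util.Date;

 public class MillisGranularity
 {
     public static void main(String[] args) throws Exception
     {
         // millisecond-granularity pattern, no time zone
         SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
         Date d = fmt.parse("2013-09-30 22:19:06.591");
         // epoch millis, the kind of value minTimeuuid()/maxTimeuuid() work from
         System.out.println(d.getTime());
     }
 }
 {code}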



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-12-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851939#comment-13851939
 ] 

Jonathan Ellis commented on CASSANDRA-5745:
---

bq. If you meant that we'd include only L+1 overlaps that meet the criteria of 
more purgeable tombstones than the threshold, i.e. that we'd basically just 
make tombstone compaction better at purging tombstones, then I'm definitely 
for it

That would have to be a pretty high threshold, because you have a pretty narrow 
window: either the result must fit in L, or you must include all overlaps 
from L+1.

bq. If you meant we'd include all L+1 overlaps no matter what, I am worried 
about the added overhead in general.

I think we'd need to add more complexity then -- first try just dropping 
tombstones from the sstable by itself, then if that doesn't get it below the 
threshold merge it into L+1.

MinHash from CASSANDRA-6474 would let us get more sophisticated than just 
merging into L+1 -- we could, say, merge with the top 10 most-overlapping 
sstables from any level and drop the result back down to L0.
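
(A minimal sketch of that idea; estimatedOverlap is a hypothetical 
MinHash-backed similarity estimate, not an existing API, and java.util 
imports are assumed:)
{code}
// Sketch: merge a candidate with the 10 sstables (from any level) whose key
// sets most overlap it, then drop the result back down to L0.
// estimatedOverlap() is hypothetical -- e.g. a MinHash/Jaccard estimate.
List<SSTableReader> pickMostOverlapping(final SSTableReader candidate, Collection<SSTableReader> all)
{
    List<SSTableReader> others = new ArrayList<SSTableReader>(all);
    others.remove(candidate);
    Collections.sort(others, new Comparator<SSTableReader>()
    {
        public int compare(SSTableReader a, SSTableReader b)
        {
            // descending by estimated overlap with the candidate
            return Double.compare(estimatedOverlap(candidate, b), estimatedOverlap(candidate, a));
        }
    });
    return others.subList(0, Math.min(10, others.size()));
}
{code}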

 Minor compaction tombstone-removal deadlock
 ---

 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
 Fix For: 2.0.4


 From a discussion with Axel Liljencrantz,
 If you have two SSTables that have temporally overlapping data, you can get 
 lodged into a state where a compaction of SSTable A can't drop tombstones 
 because SSTable B contains older data *and vice versa*. Once that's happened, 
 Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
 with tombstone removal. The only way to break the wedge would be to perform a 
 compaction containing both SSTable A and SSTable B. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851971#comment-13851971
 ] 

Ryan McGuire commented on CASSANDRA-6053:
-

First attempt appears to work correctly on cassandra-2.0 HEAD and 1.2.9:

{code}
12:53 PM:~$ ccm create -v git:cassandra-1.2.9 t
Fetching Cassandra updates...
Current cluster is now: t
12:53 PM:~$ ccm populate -n 5
12:54 PM:~$ ccm start
12:54 PM:~$ ccm node1 stress
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
24994,2499,2499,9.5,55.2,179.0,10
103123,7812,7812,2.8,27.2,134.7,20
236358,13323,13323,1.7,15.4,134.7,30
329477,9311,9311,1.7,9.8,109.8,40
405667,7619,7619,1.8,9.2,6591.9,50
558989,15332,15332,1.5,6.6,6591.1,60
^C12:55 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.2
 127.0.0.5
 127.0.0.4

cqlsh>
12:55 PM:~$ ccm node2 decommission
12:57 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.5
 127.0.0.4

cqlsh>
12:58 PM:~$
{code}

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it 
 queries the peers table to set up its connection pool, resulting in very slow 
 startup times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I would like some feedback on whether this was the right way to handle the 
 issue or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851980#comment-13851980
 ] 

Ryan McGuire commented on CASSANDRA-6053:
-

I should note that the same query on all of the nodes is the same.

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it 
 queries the peers table to set up its connection pool, resulting in very slow 
 startup times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I would like some feedback on whether this was the right way to handle the 
 issue or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Issue Comment Deleted] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6053:


Comment: was deleted

(was: I should note that the same query on all of the nodes is the same.)

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I'd like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851971#comment-13851971
 ] 

Ryan McGuire edited comment on CASSANDRA-6053 at 12/18/13 6:15 PM:
---

First attempt appears to work correctly on cassandra-2.0 HEAD and 1.2.9:

{code}
12:53 PM:~$ ccm create -v git:cassandra-1.2.9 t
Fetching Cassandra updates...
Current cluster is now: t
12:53 PM:~$ ccm populate -n 5
12:54 PM:~$ ccm start
12:54 PM:~$ ccm node1 stress
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
24994,2499,2499,9.5,55.2,179.0,10
103123,7812,7812,2.8,27.2,134.7,20
236358,13323,13323,1.7,15.4,134.7,30
329477,9311,9311,1.7,9.8,109.8,40
405667,7619,7619,1.8,9.2,6591.9,50
558989,15332,15332,1.5,6.6,6591.1,60
^C12:55 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.2
 127.0.0.5
 127.0.0.4

cqlsh>
12:55 PM:~$ ccm node2 decommission
12:57 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.5
 127.0.0.4

cqlsh>
12:58 PM:~$
{code}

All nodes show the same peers table.


was (Author: enigmacurry):
First attempt appears to work correctly on cassandra-2.0 HEAD and 1.2.9:

{code}
12:53 PM:~$ ccm create -v git:cassandra-1.2.9 t
Fetching Cassandra updates...
Current cluster is now: t
12:53 PM:~$ ccm populate -n 5
12:54 PM:~$ ccm start
12:54 PM:~$ ccm node1 stress
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
24994,2499,2499,9.5,55.2,179.0,10
103123,7812,7812,2.8,27.2,134.7,20
236358,13323,13323,1.7,15.4,134.7,30
329477,9311,9311,1.7,9.8,109.8,40
405667,7619,7619,1.8,9.2,6591.9,50
558989,15332,15332,1.5,6.6,6591.1,60
^C12:55 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.2
 127.0.0.5
 127.0.0.4

cqlsh>
12:55 PM:~$ ccm node2 decommission
12:57 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.5
 127.0.0.4

cqlsh>
12:58 PM:~$
{code}

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I'd like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851971#comment-13851971
 ] 

Ryan McGuire edited comment on CASSANDRA-6053 at 12/18/13 6:16 PM:
---

First attempt appears to work correctly on cassandra-2.0 HEAD and 1.2.9:

{code}
12:53 PM:~$ ccm create -v git:cassandra-1.2.9 t
Fetching Cassandra updates...
Current cluster is now: t
12:53 PM:~$ ccm populate -n 5
12:54 PM:~$ ccm start
12:54 PM:~$ ccm node1 stress
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
24994,2499,2499,9.5,55.2,179.0,10
103123,7812,7812,2.8,27.2,134.7,20
236358,13323,13323,1.7,15.4,134.7,30
329477,9311,9311,1.7,9.8,109.8,40
405667,7619,7619,1.8,9.2,6591.9,50
558989,15332,15332,1.5,6.6,6591.1,60
^C12:55 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.2
 127.0.0.5
 127.0.0.4

cqlsh>
12:55 PM:~$ ccm node2 decommission
12:57 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.5
 127.0.0.4

cqlsh>
12:58 PM:~$
{code}

All nodes show equivalent peers tables.


was (Author: enigmacurry):
First attempt appears to work correctly on cassandra-2.0 HEAD and 1.2.9:

{code}
12:53 PM:~$ ccm create -v git:cassandra-1.2.9 t
Fetching Cassandra updates...
Current cluster is now: t
12:53 PM:~$ ccm populate -n 5
12:54 PM:~$ ccm start
12:54 PM:~$ ccm node1 stress
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
24994,2499,2499,9.5,55.2,179.0,10
103123,7812,7812,2.8,27.2,134.7,20
236358,13323,13323,1.7,15.4,134.7,30
329477,9311,9311,1.7,9.8,109.8,40
405667,7619,7619,1.8,9.2,6591.9,50
558989,15332,15332,1.5,6.6,6591.1,60
^C12:55 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.2
 127.0.0.5
 127.0.0.4

cqlsh>
12:55 PM:~$ ccm node2 decommission
12:57 PM:~$ ccm node1 cqlsh
Connected to t at 127.0.0.1:9160.
[cqlsh 3.1.7 | Cassandra 1.2.9-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> select peer from system.peers;

 peer
---
 127.0.0.3
 127.0.0.5
 127.0.0.4

cqlsh>
12:58 PM:~$
{code}

All nodes show the same peers table.

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Brandon Williams
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I'd like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-12-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851991#comment-13851991
 ] 

Sylvain Lebresne commented on CASSANDRA-5745:
-

bq. you have a pretty narrow window of either the result must fit in L, or you 
must include all overlaps from L+1.

You're right, forgot about that.

bq.  first try just dropping tombstones from the sstable by itself, then if 
that doesn't get it below the threshold merge it into L+1.

Sounds reasonable. I'll note too that we do estimate, for tombstone 
compactions, how many tombstones likely won't be purgeable due to overlapping 
other sstables. It's a rough estimate tbh (which actually makes me wonder how 
often tombstone compactions kick in for sstables that are not on the last 
level), but if we can improve on it (with minhash maybe?), we could estimate 
whether it's worth merging with L+1 right away instead of having to compact 
the sstable alone first.

bq. We're seeing this on STCS as well. Any ideas for how to handle it there?

The improvement to the tombstone compaction heuristic that we're talking about 
should extend relatively simply to STCS. We focus on LCS mostly because it's 
actually more complicated there, since you have the constraint that you can't 
compact just any 2 sstables together or you might break the leveling.
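
To make that concrete, the two-step heuristic could look roughly like this 
(illustrative pseudocode; the helper names are made up, not the actual 
compaction strategy API):

{code}
// Sketch only: try the sstable alone first, and only pull in the overlapping
// sstables from L+1 (all of them, to preserve the leveling) when tombstones
// cannot be purged otherwise.
Collection<SSTableReader> tombstoneCompactionCandidates(SSTableReader sstable)
{
    if (estimatedDroppableTombstoneRatio(sstable) < tombstoneThreshold)
        return Collections.emptyList();              // not worth compacting

    if (estimatedPurgeableAlone(sstable) > 0)
        return Collections.singletonList(sstable);   // cheap single-sstable pass

    List<SSTableReader> candidates = new ArrayList<>();
    candidates.add(sstable);
    candidates.addAll(overlappingInNextLevel(sstable)); // merge into L+1
    return candidates;
}
{code}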

 Minor compaction tombstone-removal deadlock
 ---

 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
 Fix For: 2.0.4


 From a discussion with Axel Liljencrantz,
 If you have two SSTables that have temporally overlapping data, you can get 
 lodged into a state where a compaction of SSTable A can't drop tombstones 
 because SSTable B contains older data *and vice versa*. Once that's happened, 
 Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
 with tombstone removal. The only way to break the wedge would be to perform a 
 compaction containing both SSTable A and SSTable B. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851994#comment-13851994
 ] 

Ryan McGuire commented on CASSANDRA-6053:
-

OK, reproduced this by killing -9 one of the nodes and then doing a 'nodetool 
removenode':

{code}
01:20 PM:~$ kill -9 18961 (PID of node1)
01:21 PM:~$ ccm node1 status
Failed to connect to '127.0.0.1:7100': Connection refused
01:21 PM:~$ ccm node2 status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns   Host ID                               Token                 Rack
DN  127.0.0.1  62.93 KB   20.0%  896644af-8640-4be6-a3ff-e8ed559d851c  -9223372036854775808  rack1
UN  127.0.0.2  51.17 KB   20.0%  d3801466-d36d-428c-b4e5-05ff69fe36c0  -5534023222112865485  rack1
UN  127.0.0.3  62.78 KB   20.0%  cb36c3ad-df45-4f77-bff5-ca93c504ec08  -1844674407370955162  rack1
UN  127.0.0.4  51.17 KB   20.0%  89031a05-a3f6-4ac7-9d29-6caa0c609dbc  1844674407370955161   rack1
UN  127.0.0.5  51.27 KB   20.0%  4909d856-a86e-493a-a7d0-7570d71eb9d8  5534023222112865484   rack1

# Issue removenode on node3:
01:21 PM:~$ ~/.ccm/t/node1/bin/nodetool -p 7300 removenode 896644af-8640-4be6-a3ff-e8ed559d851c

01:22 PM:~$ ccm node3 cqlsh
Connected to t at 127.0.0.3:9160.
[cqlsh 4.1.0 | Cassandra 2.0.3-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> select * from system.peers;

 peer      | data_center | host_id                              | preferred_ip | rack  | release_version | rpc_address | schema_version                       | tokens
---+-+--+--+---+-+-+--+--
 127.0.0.2 | datacenter1 | d3801466-d36d-428c-b4e5-05ff69fe36c0 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.2   | d133398f-f287-3674-83af-a1b04ee29f1f | {'-5534023222112865485'}
 127.0.0.5 | datacenter1 | 4909d856-a86e-493a-a7d0-7570d71eb9d8 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.5   | d133398f-f287-3674-83af-a1b04ee29f1f | {'5534023222112865484'}
 127.0.0.4 | datacenter1 | 89031a05-a3f6-4ac7-9d29-6caa0c609dbc | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.4   | d133398f-f287-3674-83af-a1b04ee29f1f | {'1844674407370955161'}

(3 rows)

# Check node2 peers table:

01:23 PM:~$ ccm node2 cqlsh
Connected to t at 127.0.0.2:9160.
[cqlsh 4.1.0 | Cassandra 2.0.3-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> select * from system.peers;

 peer      | data_center | host_id                              | preferred_ip | rack  | release_version | rpc_address | schema_version                       | tokens
---+-+--+--+---+-+-+--+--
 127.0.0.3 | datacenter1 | cb36c3ad-df45-4f77-bff5-ca93c504ec08 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.3   | d133398f-f287-3674-83af-a1b04ee29f1f | {'-1844674407370955162'}
 127.0.0.1 | null        | 896644af-8640-4be6-a3ff-e8ed559d851c | null         | null  | null            | 127.0.0.1   | null                                 | null
 127.0.0.5 | datacenter1 | 4909d856-a86e-493a-a7d0-7570d71eb9d8 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.5   | d133398f-f287-3674-83af-a1b04ee29f1f | {'5534023222112865484'}
 127.0.0.4 | datacenter1 | 89031a05-a3f6-4ac7-9d29-6caa0c609dbc | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.4   | d133398f-f287-3674-83af-a1b04ee29f1f | {'1844674407370955161'}

(4 rows)

01:23 PM:~$ ccm node2 status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns   Host ID                               Token                 Rack
UN  127.0.0.2  51.17 KB   40.0%  d3801466-d36d-428c-b4e5-05ff69fe36c0  -5534023222112865485  rack1
UN  127.0.0.3  62.78 KB   20.0%  cb36c3ad-df45-4f77-bff5-ca93c504ec08  -1844674407370955162  rack1
UN  127.0.0.4  51.17 KB   20.0%  89031a05-a3f6-4ac7-9d29-6caa0c609dbc  1844674407370955161   rack1
UN  127.0.0.5  51.27 KB   20.0%  4909d856-a86e-493a-a7d0-7570d71eb9d8  5534023222112865484   rack1

{code}

By issuing the removenode on node3, node3 knows about the node being removed 
and its peers table is correct. node2, although its status output shows node1 
going away, has not had its peers table updated.
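
(For reference, the manual cleanup the reporter described amounts to running 
something like this against each node that holds a stale row; system.peers is 
node-local state, so it has to be repeated per node:)

{code}
cqlsh> DELETE FROM system.peers WHERE peer = '127.0.0.1';
{code}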

 system.peers table not updated after decommissioning nodes in C* 2.0
 

[jira] [Commented] (CASSANDRA-6395) Add the ability to query by TimeUUIDs with milisecond granularity

2013-12-18 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852004#comment-13852004
 ] 

Tyler Hobbs commented on CASSANDRA-6395:


Also, the CHANGES line is incorrect.  It should be "Add _millisecond_ 
precision formats to the timestamp parser (CASSANDRA-6395)", not _sub-ms_.

 Add the ability to query by TimeUUIDs with milisecond granularity
 -

 Key: CASSANDRA-6395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6395
 Project: Cassandra
  Issue Type: New Feature
Reporter: Lorcan Coyle
Assignee: Lorcan Coyle
Priority: Minor
  Labels: lhf
 Fix For: 2.0.4


 Currently it is impossible to query for dates with the minTimeuuid and 
 maxTimeuuid functions with sub-second accuracy from cqlsh, because the parser 
 doesn't recognise dates formatted at that granularity (e.g., 2013-09-30 
 22:19:06.591). Adding the following ISO8601 patterns to TimestampSerializer 
 unlocks this functionality:
 yyyy-MM-dd HH:mm:ss.SSS,
 yyyy-MM-dd HH:mm:ss.SSSZ,
 yyyy-MM-dd'T'HH:mm:ss.SSS,
 yyyy-MM-dd'T'HH:mm:ss.SSSZ.
 I submitted this as a pull-request on the github mirror 
 (https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
 submit a patch to address this here.
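 With those patterns in place, a query like the following becomes possible 
 from cqlsh (table and column names here are only an example):
{code}
SELECT * FROM events
 WHERE sensor_id = 42
   AND reading_time > minTimeuuid('2013-09-30 22:19:06.591')
   AND reading_time < maxTimeuuid('2013-09-30 22:19:06.592');
{code}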



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851994#comment-13851994
 ] 

Ryan McGuire edited comment on CASSANDRA-6053 at 12/18/13 6:52 PM:
---

OK, reproduced this by killing -9 one of the nodes and then doing a 'nodetool 
removenode':

{code}
01:20 PM:~$ kill -9 18961 (PID of node1)
01:21 PM:~$ ccm node1 status
Failed to connect to '127.0.0.1:7100': Connection refused
01:21 PM:~$ ccm node2 status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns   Host ID                               Token                 Rack
DN  127.0.0.1  62.93 KB   20.0%  896644af-8640-4be6-a3ff-e8ed559d851c  -9223372036854775808  rack1
UN  127.0.0.2  51.17 KB   20.0%  d3801466-d36d-428c-b4e5-05ff69fe36c0  -5534023222112865485  rack1
UN  127.0.0.3  62.78 KB   20.0%  cb36c3ad-df45-4f77-bff5-ca93c504ec08  -1844674407370955162  rack1
UN  127.0.0.4  51.17 KB   20.0%  89031a05-a3f6-4ac7-9d29-6caa0c609dbc  1844674407370955161   rack1
UN  127.0.0.5  51.27 KB   20.0%  4909d856-a86e-493a-a7d0-7570d71eb9d8  5534023222112865484   rack1

# Issue removenode on node3:
01:21 PM:~$ ~/.ccm/t/node1/bin/nodetool -p 7300 removenode 896644af-8640-4be6-a3ff-e8ed559d851c

01:22 PM:~$ ccm node3 cqlsh
Connected to t at 127.0.0.3:9160.
[cqlsh 4.1.0 | Cassandra 2.0.3-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> select * from system.peers;

 peer      | data_center | host_id                              | preferred_ip | rack  | release_version | rpc_address | schema_version                       | tokens
---+-+--+--+---+-+-+--+--
 127.0.0.2 | datacenter1 | d3801466-d36d-428c-b4e5-05ff69fe36c0 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.2   | d133398f-f287-3674-83af-a1b04ee29f1f | {'-5534023222112865485'}
 127.0.0.5 | datacenter1 | 4909d856-a86e-493a-a7d0-7570d71eb9d8 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.5   | d133398f-f287-3674-83af-a1b04ee29f1f | {'5534023222112865484'}
 127.0.0.4 | datacenter1 | 89031a05-a3f6-4ac7-9d29-6caa0c609dbc | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.4   | d133398f-f287-3674-83af-a1b04ee29f1f | {'1844674407370955161'}

(3 rows)

# Check node2 peers table:

01:23 PM:~$ ccm node2 cqlsh
Connected to t at 127.0.0.2:9160.
[cqlsh 4.1.0 | Cassandra 2.0.3-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> select * from system.peers;

 peer      | data_center | host_id                              | preferred_ip | rack  | release_version | rpc_address | schema_version                       | tokens
---+-+--+--+---+-+-+--+--
 127.0.0.3 | datacenter1 | cb36c3ad-df45-4f77-bff5-ca93c504ec08 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.3   | d133398f-f287-3674-83af-a1b04ee29f1f | {'-1844674407370955162'}
 127.0.0.1 | null        | 896644af-8640-4be6-a3ff-e8ed559d851c | null         | null  | null            | 127.0.0.1   | null                                 | null
 127.0.0.5 | datacenter1 | 4909d856-a86e-493a-a7d0-7570d71eb9d8 | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.5   | d133398f-f287-3674-83af-a1b04ee29f1f | {'5534023222112865484'}
 127.0.0.4 | datacenter1 | 89031a05-a3f6-4ac7-9d29-6caa0c609dbc | null         | rack1 | 2.0.3-SNAPSHOT  | 127.0.0.4   | d133398f-f287-3674-83af-a1b04ee29f1f | {'1844674407370955161'}

(4 rows)

# oh noes!... node2 still has an entry for node1 in its peers table.

01:23 PM:~$ ccm node2 status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns   Host ID                               Token                 Rack
UN  127.0.0.2  51.17 KB   40.0%  d3801466-d36d-428c-b4e5-05ff69fe36c0  -5534023222112865485  rack1
UN  127.0.0.3  62.78 KB   20.0%  cb36c3ad-df45-4f77-bff5-ca93c504ec08  -1844674407370955162  rack1
UN  127.0.0.4  51.17 KB   20.0%  89031a05-a3f6-4ac7-9d29-6caa0c609dbc  1844674407370955161   rack1
UN  127.0.0.5  51.27 KB   20.0%  4909d856-a86e-493a-a7d0-7570d71eb9d8  5534023222112865484   rack1

{code}

By issuing the removenode on node3, node3 knows about the node being removed 
and its peers table is correct. node2, although its status output shows node1 
going away, has not had its peers table updated.

[jira] [Created] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2013-12-18 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-6503:
--

 Summary: sstables from stalled repair sessions become live after a 
reboot and can resurrect deleted data
 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan


The sstables streamed in during a repair session don't become active until the 
session finishes.  If something causes the repair session to hang for some 
reason, those sstables will hang around until the next reboot, and become 
active then.  If you don't reboot for 3 months, this can cause data to 
resurrect, as GC grace has expired, so tombstones for the data in those 
sstables may have already been collected.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2013-12-18 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852019#comment-13852019
 ] 

Jeremiah Jordan commented on CASSANDRA-6503:


One thing I was thinking might help with this: if we could leave the sstables 
named -tmp until we are ready to make them active, then on a reboot any files 
hanging around would get removed at restart instead of becoming active.
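
Roughly this lifecycle (sketch only; the names are hypothetical, not the 
actual streaming code):

{code}
// Streamed-in sstables keep their -tmp name until the repair session
// succeeds, so a restart sweeps leftovers instead of loading them.
void onFileStreamed(Descriptor tmp)
{
    pendingTmpSSTables.add(tmp);          // still *-tmp-*: ignored at startup
}

void onRepairSessionComplete()
{
    for (Descriptor tmp : pendingTmpSSTables)
        rename(tmp, tmp.asFinalName());   // only now do the sstables go live
    pendingTmpSSTables.clear();
}
// Startup already deletes anything still named *-tmp-*, which gives us the
// cleanup for free.
{code}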

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan

 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


git commit: Update docs, bump CQL3 version, correct CHANGES.txt for CASSANDRA-6395

2013-12-18 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 1727ea773 -> 21bb53146


Update docs, bump CQL3 version, correct CHANGES.txt for CASSANDRA-6395


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21bb5314
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21bb5314
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21bb5314

Branch: refs/heads/cassandra-2.0
Commit: 21bb5314603eb4b4f2b23aca82dca29ebfd7b4f4
Parents: 1727ea7
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Wed Dec 18 22:07:13 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Wed Dec 18 22:07:13 2013 +0300

--
 CHANGES.txt|  2 +-
 doc/cql3/CQL.textile   | 10 +-
 src/java/org/apache/cassandra/cql3/QueryProcessor.java |  2 +-
 3 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bb5314/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 10c9c33..b876204 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,7 +8,7 @@
  * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)
  * Fix divide-by-zero in PCI (CASSANDRA-6403)
  * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
- * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
+ * Add millisecond precision formats to the timestamp parser (CASSANDRA-6395)
  * Expose a total memtable size metric for a CF (CASSANDRA-6391)
  * cqlsh: handle symlinks properly (CASSANDRA-6425)
  * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bb5314/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 047da68..f31c65a 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -1,6 +1,6 @@
<link rel="StyleSheet" href="CQL.css" type="text/css" media="screen">
 
-h1. Cassandra Query Language (CQL) v3.1.2
+h1. Cassandra Query Language (CQL) v3.1.3
 
 
<span id="tableOfContents">
@@ -804,15 +804,19 @@ They can also be input as string literals in any of the following ISO 8601 formats:
 
 * @2011-02-03 04:05+0000@
 * @2011-02-03 04:05:00+0000@
+* @2011-02-03 04:05:00.000+0000@
 * @2011-02-03T04:05+0000@
 * @2011-02-03T04:05:00+0000@
+* @2011-02-03T04:05:00.000+0000@
 
 The @+0000@ above is an RFC 822 4-digit time zone specification; @+0000@ refers to GMT. US Pacific Standard Time is @-0800@. The time zone may be omitted if desired-- the date will be interpreted as being in the time zone under which the coordinating Cassandra node is configured.
 
 * @2011-02-03 04:05@
 * @2011-02-03 04:05:00@
+* @2011-02-03 04:05:00.000@
 * @2011-02-03T04:05@
 * @2011-02-03T04:05:00@
+* @2011-02-03T04:05:00.000@
 
 There are clear difficulties inherent in relying on the time zone 
configuration being as expected, though, so it is recommended that the time 
zone always be specified for timestamps when feasible.
 
@@ -1098,6 +1102,10 @@ h2(#changes). Changes
 
 The following describes the addition/changes brought for each version of CQL.
 
+h3. 3.1.3
+
+* Millisecond precision formats have been added to the timestamp parser (see "working with dates":#usingdates).
+
 h3. 3.1.2
 
 * @NaN@ and @Infinity@ have been added as valid float constants. They are now reserved keywords. In the unlikely case you were using them as a column identifier (or keyspace/table one), you will now need to double quote them (see "quote identifiers":#identifiers).

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bb5314/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 335da4b..ad3c4b4 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -44,7 +44,7 @@ import org.apache.cassandra.utils.SemanticVersion;
 
 public class QueryProcessor
 {
-    public static final SemanticVersion CQL_VERSION = new SemanticVersion("3.1.2");
+    public static final SemanticVersion CQL_VERSION = new SemanticVersion("3.1.3");
 
 private static final Logger logger = 
LoggerFactory.getLogger(QueryProcessor.class);
 private static final MemoryMeter meter = new MemoryMeter();



[jira] [Commented] (CASSANDRA-6395) Add the ability to query by TimeUUIDs with milisecond granularity

2013-12-18 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852032#comment-13852032
 ] 

Aleksey Yeschenko commented on CASSANDRA-6395:
--

Fixed in 21bb5314603eb4b4f2b23aca82dca29ebfd7b4f4.

 Add the ability to query by TimeUUIDs with milisecond granularity
 -

 Key: CASSANDRA-6395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6395
 Project: Cassandra
  Issue Type: New Feature
Reporter: Lorcan Coyle
Assignee: Lorcan Coyle
Priority: Minor
  Labels: lhf
 Fix For: 2.0.4


 Currently it is impossible to query for dates with the minTimeuuid and 
 maxTimeuuid functions with sub-second accuracy from cqlsh, because the parser 
 doesn't recognise dates formatted at that granularity (e.g., 2013-09-30 
 22:19:06.591). Adding the following ISO8601 patterns to TimestampSerializer 
 unlocks this functionality:
 yyyy-MM-dd HH:mm:ss.SSS,
 yyyy-MM-dd HH:mm:ss.SSSZ,
 yyyy-MM-dd'T'HH:mm:ss.SSS,
 yyyy-MM-dd'T'HH:mm:ss.SSSZ.
 I submitted this as a pull-request on the github mirror 
 (https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
 submit a patch to address this here.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center

2013-12-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852035#comment-13852035
 ] 

sankalp kohli commented on CASSANDRA-6157:
--

1) The reason I create a new map on changes is that SnakeYAML creates a 
non-concurrent hash map when loading the yaml. I could not find how to tell it 
to create a concurrent map, so on future changes I create a new concurrent 
map. If you have suggestions, let me know. 

2) I will fix the formatting (damn IDE :( ) in my next patch once we finalize 
problems 1) and 3).

3) The reason for using HintedHandoffEnabledOverride is that it is an 
override: you can have hinted handoff turned off and use this to enable 
handoff in one DC. But I am fine with using HintedHandoffPerDC if that's less 
confusing :)
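
For 1), what the patch effectively does is this (sketch; the field name is 
illustrative):

{code}
// SnakeYAML hands back a plain (non-concurrent) HashMap, so copy it into a
// ConcurrentHashMap and swap the volatile reference on every change.
private volatile Map<String, Boolean> hintedHandoffByDc = new ConcurrentHashMap<>();

void applyFromYaml(Map<String, Boolean> loaded)
{
    Map<String, Boolean> fresh = new ConcurrentHashMap<>();
    if (loaded != null)
        fresh.putAll(loaded);
    hintedHandoffByDc = fresh;   // readers always see a consistent map
}
{code}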






 Selectively Disable hinted handoff for a data center
 

 Key: CASSANDRA-6157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6157
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 2.0.4

 Attachments: trunk-6157.txt


 Cassandra supports disabling the hints or reducing the window for hints. 
 It would be helpful to have a switch which stops hints to a down data center 
 but continues hints to other DCs.
 This is helpful during data center failover, as hints will put more 
 unnecessary pressure on the DC taking double traffic. Also, since Cassandra 
 is now under reduced redundancy, we don't want to disable hints within the 
 DC. 
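 One possible shape for the switch (illustrative yaml only, not a final 
 option name):
{code}
# cassandra.yaml sketch: hints stay enabled globally but are suppressed
# for the down datacenter (DC2 here)
hinted_handoff_enabled: true
hinted_handoff_disabled_datacenters:
    - DC2
{code}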



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-18 Thread aleksey
Merge branch 'cassandra-2.0' into trunk

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1152e4b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1152e4b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1152e4b3

Branch: refs/heads/trunk
Commit: 1152e4b3924603106313130ca396e9d0027a6b5e
Parents: d365faa 21bb531
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Wed Dec 18 22:10:39 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Wed Dec 18 22:10:39 2013 +0300

--
 CHANGES.txt|  4 ++--
 doc/cql3/CQL.textile   | 10 +-
 src/java/org/apache/cassandra/cql3/QueryProcessor.java |  2 +-
 3 files changed, 12 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1152e4b3/CHANGES.txt
--
diff --cc CHANGES.txt
index 08e96fb,b876204..7074d2a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,24 -1,3 +1,25 @@@
 +2.1
 + * Multithreaded commitlog (CASSANDRA-3578)
 + * allocate fixed index summary memory pool and resample cold index summaries 
 +   to use less memory (CASSANDRA-5519)
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 + * Secondary index support for collections (CASSANDRA-4511)
 + * SSTable metadata(Stats.db) format change (CASSANDRA-6356)
 + * Push composites support in the storage engine (CASSANDRA-5417)
++ * Add snapshot space used to cfstats (CASSANDRA-6231)
 +
 +
  2.0.4
   * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468)
   * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1152e4b3/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--



[1/2] git commit: Update docs, bump CQL3 version, correct CHANGES.txt for CASSANDRA-6395

2013-12-18 Thread aleksey
Updated Branches:
  refs/heads/trunk d365faab0 -> 1152e4b39


Update docs, bump CQL3 version, correct CHANGES.txt for CASSANDRA-6395


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21bb5314
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21bb5314
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21bb5314

Branch: refs/heads/trunk
Commit: 21bb5314603eb4b4f2b23aca82dca29ebfd7b4f4
Parents: 1727ea7
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Wed Dec 18 22:07:13 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Wed Dec 18 22:07:13 2013 +0300

--
 CHANGES.txt|  2 +-
 doc/cql3/CQL.textile   | 10 +-
 src/java/org/apache/cassandra/cql3/QueryProcessor.java |  2 +-
 3 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bb5314/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 10c9c33..b876204 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,7 +8,7 @@
  * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)
  * Fix divide-by-zero in PCI (CASSANDRA-6403)
  * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
- * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
+ * Add millisecond precision formats to the timestamp parser (CASSANDRA-6395)
  * Expose a total memtable size metric for a CF (CASSANDRA-6391)
  * cqlsh: handle symlinks properly (CASSANDRA-6425)
  * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bb5314/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 047da68..f31c65a 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -1,6 +1,6 @@
<link rel="StyleSheet" href="CQL.css" type="text/css" media="screen">
 
-h1. Cassandra Query Language (CQL) v3.1.2
+h1. Cassandra Query Language (CQL) v3.1.3
 
 
<span id="tableOfContents">
@@ -804,15 +804,19 @@ They can also be input as string literals in any of the following ISO 8601 formats:
 
 * @2011-02-03 04:05+0000@
 * @2011-02-03 04:05:00+0000@
+* @2011-02-03 04:05:00.000+0000@
 * @2011-02-03T04:05+0000@
 * @2011-02-03T04:05:00+0000@
+* @2011-02-03T04:05:00.000+0000@
 
 The @+0000@ above is an RFC 822 4-digit time zone specification; @+0000@ refers to GMT. US Pacific Standard Time is @-0800@. The time zone may be omitted if desired-- the date will be interpreted as being in the time zone under which the coordinating Cassandra node is configured.
 
 * @2011-02-03 04:05@
 * @2011-02-03 04:05:00@
+* @2011-02-03 04:05:00.000@
 * @2011-02-03T04:05@
 * @2011-02-03T04:05:00@
+* @2011-02-03T04:05:00.000@
 
 There are clear difficulties inherent in relying on the time zone 
configuration being as expected, though, so it is recommended that the time 
zone always be specified for timestamps when feasible.
 
@@ -1098,6 +1102,10 @@ h2(#changes). Changes
 
 The following describes the addition/changes brought for each version of CQL.
 
+h3. 3.1.3
+
+* Millisecond precision formats have been added to the timestamp parser (see "working with dates":#usingdates).
+
 h3. 3.1.2
 
 * @NaN@ and @Infinity@ have been added as valid float constants. They are now reserved keywords. In the unlikely case you were using them as a column identifier (or keyspace/table one), you will now need to double quote them (see "quote identifiers":#identifiers).

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21bb5314/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 335da4b..ad3c4b4 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -44,7 +44,7 @@ import org.apache.cassandra.utils.SemanticVersion;
 
 public class QueryProcessor
 {
-    public static final SemanticVersion CQL_VERSION = new SemanticVersion("3.1.2");
+    public static final SemanticVersion CQL_VERSION = new SemanticVersion("3.1.3");
 
 private static final Logger logger = 
LoggerFactory.getLogger(QueryProcessor.class);
 private static final MemoryMeter meter = new MemoryMeter();



[jira] [Updated] (CASSANDRA-6496) Endless L0 LCS compactions

2013-12-18 Thread Robert Coli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Coli updated CASSANDRA-6496:
---

Reproduced In:   (was: 2.0 beta 1)
Since Version: 2.0 beta 1

 Endless L0 LCS compactions
 --

 Key: CASSANDRA-6496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6496
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node
Reporter: Nikolai Grigoriev
Assignee: Jonathan Ellis
  Labels: compaction
 Fix For: 2.0.4

 Attachments: 6496.txt, system.log.1.gz, system.log.gz


 I first described the problem here: 
 http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic
 I think I have really abused my system with the traffic (mix of reads, heavy 
 updates and some deletes). Now, after stopping the traffic, I see compactions 
 that have been going on endlessly for over 4 days.
 For a specific CF I have about 4700 sstable data files right now.  The 
 compaction estimates are logged as [3312, 4, 0, 0, 0, 0, 0, 0, 0], with 
 sstable_size_in_mb=256.  3214 files are about 256MB (+/- a few megs); the 
 other files are smaller or much smaller than that. No sstables are larger 
 than 256MB. What I observe is that LCS picks 32 sstables from L0 and compacts 
 them into 32 sstables of approximately the same size. So, what my system has 
 been doing for the last 4 days (no traffic at all) is compacting groups of 32 
 sstables into groups of 32 sstables without any changes. Seems like a bug to 
 me regardless of what I did to get the system into this state...



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6496) Endless L0 LCS compactions

2013-12-18 Thread Robert Coli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Coli updated CASSANDRA-6496:
---

Reproduced In: 2.0 beta 1

 Endless L0 LCS compactions
 --

 Key: CASSANDRA-6496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6496
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node
Reporter: Nikolai Grigoriev
Assignee: Jonathan Ellis
  Labels: compaction
 Fix For: 2.0.4

 Attachments: 6496.txt, system.log.1.gz, system.log.gz


 I first described the problem here: 
 http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic
 I think I have really abused my system with the traffic (mix of reads, heavy 
 updates and some deletes). Now, after stopping the traffic, I see compactions 
 that have been going on endlessly for over 4 days.
 For a specific CF I have about 4700 sstable data files right now.  The 
 compaction estimates are logged as [3312, 4, 0, 0, 0, 0, 0, 0, 0], with 
 sstable_size_in_mb=256.  3214 files are about 256MB (+/- a few megs); the 
 other files are smaller or much smaller than that. No sstables are larger 
 than 256MB. What I observe is that LCS picks 32 sstables from L0 and compacts 
 them into 32 sstables of approximately the same size. So, what my system has 
 been doing for the last 4 days (no traffic at all) is compacting groups of 32 
 sstables into groups of 32 sstables without any changes. Seems like a bug to 
 me regardless of what I did to get the system into this state...



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2013-12-18 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-6503:
--

Fix Version/s: 1.2.14
 Assignee: Yuki Morishita

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
 Fix For: 1.2.14


 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852089#comment-13852089
 ] 

Mikhail Stepura commented on CASSANDRA-6378:


LGTM 

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Resolved] (CASSANDRA-6482) Add junitreport to ant test target

2013-12-18 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-6482.
---

Resolution: Invalid

Using build/test/output/TEST-*.xml is sufficient.

 Add junitreport to ant test target
 --

 Key: CASSANDRA-6482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6482
 Project: Cassandra
  Issue Type: Improvement
  Components: Tests
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor

 Adding junitreport XML output for the unit tests will allow detailed 
 reporting and historical tracking in Jenkins.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (CASSANDRA-4914) Aggregate functions in CQL

2013-12-18 Thread Brian ONeill (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852132#comment-13852132
 ] 

Brian ONeill edited comment on CASSANDRA-4914 at 12/18/13 8:34 PM:
---

Per:
http://www.mail-archive.com/user@cassandra.apache.org/msg33816.html

Is there any interest in extending this functionality to pre-compute 
aggregations?  Basically, enhance the metadata so users can declare which 
aggregations they want on a table, and along which dimensions.  Then, maintain 
those aggregations in separate CFs as part of the flush/compaction processes 
(and extend CQL to allow them to be queried).

My feeling is that this should go under a separate issue. Honestly, I'm just 
trying to gauge interest in a low-level feature (akin to secondary indexes). 
If there is no interest, I'll just go about my business and implement this at 
the app layer.  
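
For example (purely hypothetical schema, just to make the idea concrete):

{code}
-- Declared rollup maintained by the server in a separate CF during
-- flush/compaction, instead of computing the aggregate at read time:
CREATE TABLE emp_salary_sum (
    empid int PRIMARY KEY,
    salary_sum double
);
-- A read then becomes a plain key lookup:
-- SELECT salary_sum FROM emp_salary_sum WHERE empid = 130;
{code}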


was (Author: boneill42):
Per:
http://www.mail-archive.com/user@cassandra.apache.org/msg33816.html

Is there any interest in extending this functionality to pre-compute 
aggregations?  Basically, enhance the metadata so users can declare which 
aggregations they'll want on a table, and along which dimensions.  Then, 
maintain those aggregations in separate CFs as part of the flush/compaction 
processes (and extend CQL to allow them to be queried).

My feel is that this should go under a separate issue. Honestly, I'm just 
trying to gauge interest for a low-level feature (akin to secondary indexes).
If there is no interest, I'll just go about my business and implement this at 
the app layer.  

 Aggregate functions in CQL
 --

 Key: CASSANDRA-4914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4914
 Project: Cassandra
  Issue Type: New Feature
Reporter: Vijay
Assignee: Vijay
 Fix For: 2.1


 The requirement is to do aggregation of data in Cassandra (wide row of column 
 values of int, double, float etc).
 With some basic aggregate functions like AVG, SUM, Mean, Min, Max, etc (for 
 the columns within a row).
 Example:
 SELECT * FROM emp WHERE empID IN (130) ORDER BY deptID DESC;  
   
  empid | deptid | first_name | last_name | salary
 ---+++---+
130 |  3 | joe| doe   |   10.1
130 |  2 | joe| doe   |100
130 |  1 | joe| doe   |  1e+03
  
 SELECT sum(salary), empid FROM emp WHERE empID IN (130);  
   
  sum(salary) | empid
 -+
1110.1|  130



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-4914) Aggregate functions in CQL

2013-12-18 Thread Brian ONeill (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852132#comment-13852132
 ] 

Brian ONeill commented on CASSANDRA-4914:
-

Per:
http://www.mail-archive.com/user@cassandra.apache.org/msg33816.html

Is there any interest in extending this functionality to pre-compute 
aggregations?  Basically, enhance the metadata so users can declare which 
aggregations they'll want on a table, and along which dimensions.  Then, 
maintain those aggregations in separate CFs as part of the flush/compaction 
processes (and extend CQL to allow them to be queried).

My feel is that this should go under a separate issue. Honestly, I'm just 
trying to gauge interest for a low-level feature (akin to secondary indexes).
If there is no interest, I'll just go about my business and implement this at 
the app layer.  

 Aggregate functions in CQL
 --

 Key: CASSANDRA-4914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4914
 Project: Cassandra
  Issue Type: New Feature
Reporter: Vijay
Assignee: Vijay
 Fix For: 2.1


 The requirement is to do aggregation of data in Cassandra (wide row of column 
 values of int, double, float etc).
 With some basic aggregate functions like AVG, SUM, Mean, Min, Max, etc (for 
 the columns within a row).
 Example:
 SELECT * FROM emp WHERE empID IN (130) ORDER BY deptID DESC;  
   
  empid | deptid | first_name | last_name | salary
 ---+++---+
130 |  3 | joe| doe   |   10.1
130 |  2 | joe| doe   |100
130 |  1 | joe| doe   |  1e+03
  
 SELECT sum(salary), empid FROM emp WHERE empID IN (130);  
   
  sum(salary) | empid
 -+
1110.1|  130



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-12-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5745:
--

 Priority: Minor  (was: Major)
Fix Version/s: (was: 2.0.4)
   3.0
   Issue Type: Improvement  (was: Bug)

How badly is this affecting your STCS, [~nickmbailey]?  Because the minhash 
stuff almost certainly isn't going to drop before 3.0.
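
(For reference, the minhash idea is to keep a small signature of each 
sstable's keys and estimate overlap from signature agreement; a toy sketch, 
not anything in the codebase:)

{code}
import java.util.Arrays;

// Toy MinHash: estimate the key overlap (Jaccard similarity) of two sstables.
final class MinHash
{
    // 64-bit mixer (murmur3 finalizer); any good mixer works here.
    static long mix(long z)
    {
        z = (z ^ (z >>> 33)) * 0xff51afd7ed558ccdL;
        return z ^ (z >>> 33);
    }

    // One minimum per seed over all key hashes in the sstable.
    static long[] signature(long[] keyHashes, long[] seeds)
    {
        long[] sig = new long[seeds.length];
        Arrays.fill(sig, Long.MAX_VALUE);
        for (long k : keyHashes)
            for (int i = 0; i < seeds.length; i++)
                sig[i] = Math.min(sig[i], mix(k ^ seeds[i]));
        return sig;
    }

    // The fraction of agreeing minima approximates the Jaccard similarity.
    static double estimatedOverlap(long[] a, long[] b)
    {
        int matches = 0;
        for (int i = 0; i < a.length; i++)
            if (a[i] == b[i])
                matches++;
        return (double) matches / a.length;
    }
}
{code}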

 Minor compaction tombstone-removal deadlock
 ---

 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 3.0


 From a discussion with Axel Liljencrantz,
 If you have two SSTables that have temporally overlapping data, you can get 
 lodged into a state where a compaction of SSTable A can't drop tombstones 
 because SSTable B contains older data *and vice versa*. Once that's happened, 
 Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
 with tombstone removal. The only way to break the wedge would be to perform a 
 compaction containing both SSTable A and SSTable B. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Assigned] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6053:
-

Assignee: Tyler Hobbs  (was: Brandon Williams)

Thanks, Ryan.

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Tyler Hobbs
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I'd like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6053) system.peers table not updated after decommissioning nodes in C* 2.0

2013-12-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6053:
--

Reproduced In: 2.0.3

 system.peers table not updated after decommissioning nodes in C* 2.0
 

 Key: CASSANDRA-6053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Datastax AMI running EC2 m1.xlarge instances
Reporter: Guyon Moree
Assignee: Tyler Hobbs
 Attachments: peers


 After decommissioning my cluster from 20 to 9 nodes using opscenter, I found 
 all but one of the nodes had incorrect system.peers tables.
 This became a problem (afaik) when using the python-driver, since it queries 
 the peers table to set up its connection pool, resulting in very slow startup 
 times because of timeouts.
 The output of nodetool didn't seem to be affected. After removing the 
 incorrect entries from the peers tables, the connection issues seem to have 
 disappeared for us. 
 I'd like some feedback on whether this was the right way to handle the issue 
 or whether I'm still left with a broken cluster.
 Attached is the output of nodetool status, which shows the correct 9 nodes. 
 Below that is the output of the system.peers tables on the individual nodes.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-12-18 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852217#comment-13852217
 ] 

Nick Bailey commented on CASSANDRA-5745:


I've noticed it with OpsCenter's metrics column families, specifically in this 
case on a 2-node cluster, where one node has not seen the issue and the other 
has.

It seems like it could be fairly common in data models where data is only 
removed by TTL. I'm guessing that in this case a repair caused overlap between 
two sstables, and now we have an sstable that is entirely tombstones and large 
enough to be in its own bucket when doing minor compactions, but that won't 
get deleted by tombstone compaction due to this deadlock.

 Minor compaction tombstone-removal deadlock
 ---

 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 3.0


 From a discussion with Axel Liljencrantz,
 If you have two SSTables that have temporally overlapping data, you can get 
 lodged into a state where a compaction of SSTable A can't drop tombstones 
 because SSTable B contains older data *and vice versa*. Once that's happened, 
 Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
 with tombstone removal. The only way to break the wedge would be to perform a 
 compaction containing both SSTable A and SSTable B. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2013-12-18 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6503:
--


I don't think we should be messing with repair code in 1.2.14, not for 
something that it took 3 years for someone to run across.  Suggest targeting 
2.0.

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
 Fix For: 1.2.14


 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-5872) Bundle JNA

2013-12-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852231#comment-13852231
 ] 

Jonathan Ellis commented on CASSANDRA-5872:
---

Unless they refactored things for 4.0, we don't need -platform.

There's an existing dependency in build.xml on JNA 3.2.7 that should probably 
be updated.
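
For reference, the kind of change meant is roughly the following, assuming the 
maven-ant-tasks dependency style build.xml already uses (coordinates shown are 
the published JNA 4.0 ones; the exact location in the file may differ):
{code}
<dependency groupId="net.java.dev.jna" artifactId="jna" version="4.0.0"/>
{code}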

 Bundle JNA
 --

 Key: CASSANDRA-5872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5872
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.1


 JNA 4.0 is reported to be dual-licensed LGPL/APL.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[2/3] git commit: add client encryption support to sstableloader patch by Sam Tunnicliffe; reviewed by Mikhail Stepura for CASSANDRA-6378

2013-12-18 Thread jbellis
add client encryption support to sstableloader
patch by Sam Tunnicliffe; reviewed by Mikhail Stepura for CASSANDRA-6378


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b2a1903
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b2a1903
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b2a1903

Branch: refs/heads/trunk
Commit: 1b2a190379141094a986495bd1386e720786c9b7
Parents: 21bb531
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Wed Dec 18 16:17:13 2013 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Wed Dec 18 16:17:13 2013 -0600

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 130 ++-
 2 files changed, 124 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b2a1903/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b876204..d6223be 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.4
+ * add client encryption support to sstableloader (CASSANDRA-6378)
  * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468)
  * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496)
  * Fix assertion failure in filterColdSSTables (CASSANDRA-6483)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b2a1903/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index c89bb83..15c8df8 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -24,7 +24,9 @@ import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 
+import com.google.common.base.Joiner;
 import com.google.common.collect.Sets;
+import org.apache.cassandra.config.EncryptionOptions;
 import org.apache.commons.cli.*;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.protocol.TProtocol;
@@ -58,12 +60,21 @@ public class BulkLoader
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
+private static final String TRANSPORT_FACTORY = "transport-factory";
+private static final String SSL_TRUSTSTORE = "truststore";
+private static final String SSL_TRUSTSTORE_PW = "truststore-password";
+private static final String SSL_KEYSTORE = "keystore";
+private static final String SSL_KEYSTORE_PW = "keystore-password";
+private static final String SSL_PROTOCOL = "ssl-protocol";
+private static final String SSL_ALGORITHM = "ssl-alg";
+private static final String SSL_STORE_TYPE = "store-type";
+private static final String SSL_CIPHER_SUITES = "ssl-ciphers";
 
 public static void main(String args[])
 {
 LoaderOptions options = LoaderOptions.parseArgs(args);
 OutputHandler handler = new 
OutputHandler.SystemOutput(options.verbose, options.debug);
-SSTableLoader loader = new SSTableLoader(options.directory, new 
ExternalClient(options.hosts, options.rpcPort, options.user, options.passwd), 
handler);
+SSTableLoader loader = new SSTableLoader(options.directory, new 
ExternalClient(options.hosts, options.rpcPort, options.user, options.passwd, 
options.transportFactory), handler);
 
DatabaseDescriptor.setStreamThroughputOutboundMegabitsPerSec(options.throttle);
 StreamResultFuture future = loader.stream(options.ignores);
 future.addEventListener(new ProgressIndicator());
@@ -175,14 +186,16 @@ public class BulkLoader
 private final int rpcPort;
 private final String user;
 private final String passwd;
+private final ITransportFactory transportFactory;
 
-public ExternalClient(Set<InetAddress> hosts, int port, String user, 
String passwd)
+public ExternalClient(Set<InetAddress> hosts, int port, String user, 
String passwd, ITransportFactory transportFactory)
 {
 super();
 this.hosts = hosts;
 this.rpcPort = port;
 this.user = user;
 this.passwd = passwd;
+this.transportFactory = transportFactory;
 }
 
 public void init(String keyspace)
@@ -194,7 +207,7 @@ public class BulkLoader
 {
 // Query endpoint to ranges map and schemas from thrift
 InetAddress host = hostiter.next();
-Cassandra.Client client = 
createThriftClient(host.getHostAddress(), rpcPort, this.user, this.passwd);
+   

[1/3] git commit: add client encryption support to sstableloader patch by Sam Tunnicliffe; reviewed by Mikhail Stepura for CASSANDRA-6378

2013-12-18 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 21bb53146 -> 1b2a19037
  refs/heads/trunk 1152e4b39 -> 2e4d709d1


add client encryption support to sstableloader
patch by Sam Tunnicliffe; reviewed by Mikhail Stepura for CASSANDRA-6378


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b2a1903
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b2a1903
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b2a1903

Branch: refs/heads/cassandra-2.0
Commit: 1b2a190379141094a986495bd1386e720786c9b7
Parents: 21bb531
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Wed Dec 18 16:17:13 2013 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Wed Dec 18 16:17:13 2013 -0600

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 130 ++-
 2 files changed, 124 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b2a1903/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b876204..d6223be 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.4
+ * add client encryption support to sstableloader (CASSANDRA-6378)
  * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468)
  * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496)
  * Fix assertion failure in filterColdSSTables (CASSANDRA-6483)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b2a1903/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index c89bb83..15c8df8 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -24,7 +24,9 @@ import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 
+import com.google.common.base.Joiner;
 import com.google.common.collect.Sets;
+import org.apache.cassandra.config.EncryptionOptions;
 import org.apache.commons.cli.*;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.protocol.TProtocol;
@@ -58,12 +60,21 @@ public class BulkLoader
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
+private static final String TRANSPORT_FACTORY = "transport-factory";
+private static final String SSL_TRUSTSTORE = "truststore";
+private static final String SSL_TRUSTSTORE_PW = "truststore-password";
+private static final String SSL_KEYSTORE = "keystore";
+private static final String SSL_KEYSTORE_PW = "keystore-password";
+private static final String SSL_PROTOCOL = "ssl-protocol";
+private static final String SSL_ALGORITHM = "ssl-alg";
+private static final String SSL_STORE_TYPE = "store-type";
+private static final String SSL_CIPHER_SUITES = "ssl-ciphers";
 
 public static void main(String args[])
 {
 LoaderOptions options = LoaderOptions.parseArgs(args);
 OutputHandler handler = new 
OutputHandler.SystemOutput(options.verbose, options.debug);
-SSTableLoader loader = new SSTableLoader(options.directory, new 
ExternalClient(options.hosts, options.rpcPort, options.user, options.passwd), 
handler);
+SSTableLoader loader = new SSTableLoader(options.directory, new 
ExternalClient(options.hosts, options.rpcPort, options.user, options.passwd, 
options.transportFactory), handler);
 
DatabaseDescriptor.setStreamThroughputOutboundMegabitsPerSec(options.throttle);
 StreamResultFuture future = loader.stream(options.ignores);
 future.addEventListener(new ProgressIndicator());
@@ -175,14 +186,16 @@ public class BulkLoader
 private final int rpcPort;
 private final String user;
 private final String passwd;
+private final ITransportFactory transportFactory;
 
-public ExternalClient(Set<InetAddress> hosts, int port, String user, 
String passwd)
+public ExternalClient(Set<InetAddress> hosts, int port, String user, 
String passwd, ITransportFactory transportFactory)
 {
 super();
 this.hosts = hosts;
 this.rpcPort = port;
 this.user = user;
 this.passwd = passwd;
+this.transportFactory = transportFactory;
 }
 
 public void init(String keyspace)
@@ -194,7 +207,7 @@ public class BulkLoader
 {
 // Query endpoint to ranges map and schemas from thrift
 InetAddress host = hostiter.next();
- 

[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-18 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e4d709d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e4d709d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e4d709d

Branch: refs/heads/trunk
Commit: 2e4d709d191a42a61d460796d71408516054b77c
Parents: 1152e4b 1b2a190
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Wed Dec 18 16:17:20 2013 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Wed Dec 18 16:17:20 2013 -0600

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 130 ++-
 2 files changed, 124 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e4d709d/CHANGES.txt
--
diff --cc CHANGES.txt
index 7074d2a,d6223be..0184c40
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,26 -1,5 +1,27 @@@
 +2.1
 + * Multithreaded commitlog (CASSANDRA-3578)
 + * allocate fixed index summary memory pool and resample cold index summaries 
 +   to use less memory (CASSANDRA-5519)
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 + * Secondary index support for collections (CASSANDRA-4511)
 + * SSTable metadata(Stats.db) format change (CASSANDRA-6356)
 + * Push composites support in the storage engine (CASSANDRA-5417)
 + * Add snapshot space used to cfstats (CASSANDRA-6231)
 +
 +
  2.0.4
+  * add client encryption support to sstableloader (CASSANDRA-6378)
   * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468)
   * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496)
   * Fix assertion failure in filterColdSSTables (CASSANDRA-6483)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e4d709d/src/java/org/apache/cassandra/tools/BulkLoader.java
--



[jira] [Reopened] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reopened CASSANDRA-6378:
---


cassandra-2.0 and trunk both fail to build in the same manner:
{code}
build-project:
 [echo] apache-cassandra: /home/mshuler/git/cassandra/build.xml
[javac] Compiling 43 source files to 
/home/mshuler/git/cassandra/build/classes/thrift
[javac] Note: 
/home/mshuler/git/cassandra/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
 uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] Compiling 847 source files to 
/home/mshuler/git/cassandra/build/classes/main
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/NativeAllocator.java:22:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] import sun.misc.Unsafe;
[javac]^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/utils/FastByteComparisons.java:25:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] import sun.misc.Unsafe;
[javac]^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/sstable/IndexSummary.java:20:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] import java.io.Closeable;
[javac] ^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/Memory.java:29:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] private static final Unsafe unsafe = NativeAllocator.unsafe;
[javac]  ^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/NativeAllocator.java:26:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] static final Unsafe unsafe;
[javac]  ^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/NativeAllocator.java:31:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] Field field = 
sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
[javac]   ^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/io/util/NativeAllocator.java:33:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
[javac] unsafe = (sun.misc.Unsafe) field.get(null);
[javac]   ^
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/tools/BulkLoader.java:461:
 error: cannot find symbol
[javac] if 
(transportFactory.supportedOptions().contains(SSLTransportFactory.TRUSTSTORE))
[javac]  ^
[javac]   symbol:   variable SSLTransportFactory
[javac]   location: class LoaderOptions
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/tools/BulkLoader.java:462:
 error: cannot find symbol
[javac] options.put(SSLTransportFactory.TRUSTSTORE, 
opts.encOptions.truststore);
[javac] ^
[javac]   symbol:   variable SSLTransportFactory
[javac]   location: class LoaderOptions
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/tools/BulkLoader.java:463:
 error: cannot find symbol
[javac] if 
(transportFactory.supportedOptions().contains(SSLTransportFactory.TRUSTSTORE_PASSWORD))
[javac]  ^
[javac]   symbol:   variable SSLTransportFactory
[javac]   location: class LoaderOptions
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/tools/BulkLoader.java:464:
 error: cannot find symbol
[javac] 
options.put(SSLTransportFactory.TRUSTSTORE_PASSWORD, 
opts.encOptions.truststore_password);
[javac] ^
[javac]   symbol:   variable SSLTransportFactory
[javac]   location: class LoaderOptions
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/tools/BulkLoader.java:465:
 error: cannot find symbol
[javac] if 
(transportFactory.supportedOptions().contains(SSLTransportFactory.PROTOCOL))
[javac]  ^
[javac]   symbol:   variable SSLTransportFactory
[javac]   location: class LoaderOptions
[javac] 
/home/mshuler/git/cassandra/src/java/org/apache/cassandra/tools/BulkLoader.java:466:
 error: cannot find symbol
[javac] 

[jira] [Commented] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852268#comment-13852268
 ] 

Mikhail Stepura commented on CASSANDRA-6378:


The entire {{SSLTransportFactory.java}} is missing from the commit

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread main java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6210) Repair hangs when a new datacenter is added to a cluster

2013-12-18 Thread Russell Alexander Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852313#comment-13852313
 ] 

Russell Alexander Spitzer commented on CASSANDRA-6210:
--

Repair running on this node
{code}
 INFO [AntiEntropyStage:1] 2013-12-18 22:39:28,209 StreamResultFuture.java 
(line 82) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] Executing streaming 
plan for Repair
 INFO [AntiEntropyStage:1] 2013-12-18 22:39:28,209 StreamResultFuture.java 
(line 86) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] Beginning stream 
session with /10.171.81.22
DEBUG [StreamConnectionEstablisher:2] 2013-12-18 22:39:28,210 
ConnectionHandler.java (line 78) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] 
Sending stream init for incoming stream
DEBUG [StreamConnectionEstablisher:2] 2013-12-18 22:39:28,211 
ConnectionHandler.java (line 84) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] 
Sending stream init for outgoing stream
DEBUG [STREAM-OUT-/10.171.81.22] 2013-12-18 22:39:28,212 ConnectionHandler.java 
(line 356) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] Sending Prepare (1 
requests,  2 files}
{code}

On requested node
{code}
DEBUG [STREAM-IN-/10.172.27.174] 2013-12-18 22:39:28,296 ConnectionHandler.java 
(line 292) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] Received Prepare (1 
requests,  2 files}
ERROR [STREAM-IN-/10.172.27.174] 2013-12-18 22:39:28,314 StreamSession.java 
(line 410) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] Streaming error 
occurred
java.lang.NullPointerException
at 
org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:174)
at 
org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436)
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:358)
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:293)
at java.lang.Thread.run(Thread.java:724)
DEBUG [STREAM-IN-/10.172.27.174] 2013-12-18 22:39:28,316 ConnectionHandler.java 
(line 153) [Stream #40d875d0-6835-11e3-a172-3729e500a0e7] Closing stream 
connection handler on /10.172.27.174
 INFO [STREAM-IN-/10.172.27.174] 2013-12-18 22:39:28,317 
StreamResultFuture.java (line 181) [Stream 
#40d875d0-6835-11e3-a172-3729e500a0e7] Session with /10.172.27.174 is complete
 WARN [STREAM-IN-/10.172.27.174] 2013-12-18 22:39:28,317 
StreamResultFuture.java (line 210) [Stream 
#40d875d0-6835-11e3-a172-3729e500a0e7] Stream failed
{code}

 Repair hangs when a new datacenter is added to a cluster
 

 Key: CASSANDRA-6210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6210
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Amazon Ec2
 2 M1.large nodes
Reporter: Russell Alexander Spitzer
Assignee: Yuki Morishita

 Attempting to add a new datacenter to a cluster seems to cause repair 
 operations to break. I've been reproducing this with 20~ node clusters but 
 can get it to reliably occur on 2 node setups.
 {code}
 ##Basic Steps to reproduce
 #Node 1 is started using GossipingPropertyFileSnitch as dc1
 #Cassandra-stress is used to insert a minimal amount of data
 $CASSANDRA_STRESS -t 100 -R 
 org.apache.cassandra.locator.NetworkTopologyStrategy  --num-keys=1000 
 --columns=10 --consistency-level=LOCAL_QUORUM --average-size-values 
 --compaction-strategy='LeveledCompactionStrategy' -O dc1:1 
 --operation=COUNTER_ADD
 #Alter Keyspace1
 ALTER KEYSPACE Keyspace1 WITH replication = {'class': 
 'NetworkTopologyStrategy', 'dc1': 1 , 'dc2': 1 };
 #Add node 2 using GossipingPropertyFileSnitch as dc2
 run repair on node 1
 run repair on node 2
 {code}
 The repair task on node 1 never completes and while there are no exceptions 
 in the logs of node1, netstat reports the following repair tasks
 {code}
 Mode: NORMAL
 Repair 4e71a250-36b4-11e3-bedc-1d1bb5c9abab
 Repair 6c64ded0-36b4-11e3-bedc-1d1bb5c9abab
 Read Repair Statistics:
 Attempted: 0
 Mismatch (Blocking): 0
 Mismatch (Background): 0
 Pool NameActive   Pending  Completed
 Commandsn/a 0  10239
 Responses   n/a 0   3839
 {code}
 Checking on node 2 we see the following exceptions
 {code}
 ERROR [STREAM-IN-/10.171.122.130] 2013-10-16 22:42:58,961 StreamSession.java 
 (line 410) [Stream #4e71a250-36b4-11e3-bedc-1d1bb5c9abab] Streaming error 
 occurred
 java.lang.NullPointerException
 at 
 org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:174)
 at 
 org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436)
 at 
 org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:358)
  

[jira] [Commented] (CASSANDRA-6373) describe_ring hangs with hsha thrift server

2013-12-18 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852322#comment-13852322
 ] 

Nick Bailey commented on CASSANDRA-6373:


[~xedin] any insight yet?

 describe_ring hangs with hsha thrift server
 ---

 Key: CASSANDRA-6373
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6373
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
Assignee: Pavel Yaskevich
 Fix For: 2.0.4

 Attachments: describe_ring_failure.patch


 There is a strange bug with the thrift hsha server in 2.0 (we switched to 
 lmax disruptor server).
 The bug is that the first call to describe_ring from one connection will hang 
 indefinitely when the client is not connecting from localhost (or it at least 
 looks like the client is not on the same host). Additionally the cluster must 
 be using vnodes. When connecting from localhost the first call will work as 
 expected. And in either case subsequent calls from the same connection will 
 work as expected. According to git bisect the bad commit is the switch to the 
 lmax disruptor server:
 https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6
 I've attached the patch I used to reproduce the error in the unit tests. The 
 command to reproduce is: 
 {noformat}
 PYTHONPATH=test nosetests 
 --tests=system.test_thrift_server:TestMutations.test_describe_ring
 {noformat}
 I reproduced on ec2 and a single machine by having the server bind to the 
 private ip on ec2 and the client connect to the public ip (so it appears as 
 if the client is non local). I've also reproduced with two different vms 
 though.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-18 Thread Marcos Trama (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852345#comment-13852345
 ] 

Marcos Trama commented on CASSANDRA-6470:
-

I removed the blocked column (the indexed column) from the query and now it 
works. Does this help?
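
For reference, the query from the description with the indexed {{read}} 
predicate removed, as inferred from the comment above:
{code}
SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
user_id = ? AND message_id < ? LIMIT ?
{code}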

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan McGuire

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id < ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1940)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1669)
 at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1423)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 ... 3 more



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-18 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/435f1b72
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/435f1b72
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/435f1b72

Branch: refs/heads/trunk
Commit: 435f1b72c6248625933efade3d9f8b6a301f31d9
Parents: 2e4d709 4a6f8a6
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Wed Dec 18 18:01:34 2013 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Wed Dec 18 18:01:34 2013 -0600

--
 .../cassandra/thrift/SSLTransportFactory.java   | 86 
 1 file changed, 86 insertions(+)
--
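
For context, a minimal sketch of how a Thrift client might drive the factory 
added here (full listing in the messages below; the option keys are the class 
constants, while the path, password, host, and port values are hypothetical):
{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.cassandra.thrift.ITransportFactory;
import org.apache.cassandra.thrift.SSLTransportFactory;
import org.apache.thrift.transport.TTransport;

public class SslTransportExample
{
    public static void main(String[] args) throws Exception
    {
        ITransportFactory factory = new SSLTransportFactory();

        Map<String, String> options = new HashMap<String, String>();
        options.put(SSLTransportFactory.TRUSTSTORE, "/path/to/truststore.jks"); // hypothetical path
        options.put(SSLTransportFactory.TRUSTSTORE_PASSWORD, "secret");         // hypothetical password
        factory.setOptions(options);

        TTransport transport = factory.openTransport("x.x.x.248", 9160);        // usual Thrift rpc_port
        try
        {
            // drive a Cassandra.Client over the encrypted transport here
        }
        finally
        {
            transport.close();
        }
    }
}
{code}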




[2/3] git commit: add SSLTransportFactory.java

2013-12-18 Thread jbellis
add SSLTransportFactory.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4a6f8a66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4a6f8a66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4a6f8a66

Branch: refs/heads/trunk
Commit: 4a6f8a6610aacbe2c518bb6f8533ee5bdb943f41
Parents: 1b2a190
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Wed Dec 18 18:01:28 2013 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Wed Dec 18 18:01:28 2013 -0600

--
 .../cassandra/thrift/SSLTransportFactory.java   | 86 
 1 file changed, 86 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a6f8a66/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java
--
diff --git a/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java 
b/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java
new file mode 100644
index 000..f828600
--- /dev/null
+++ b/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.thrift;
+
+import com.google.common.collect.Sets;
+import org.apache.cassandra.cli.transport.FramedTransportFactory;
+import org.apache.thrift.transport.TSSLTransportFactory;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+
+import java.util.Map;
+import java.util.Set;
+
+public class SSLTransportFactory implements ITransportFactory
+{
+public static final String TRUSTSTORE = "enc.truststore";
+public static final String TRUSTSTORE_PASSWORD = "enc.truststore.password";
+public static final String KEYSTORE = "enc.keystore";
+public static final String KEYSTORE_PASSWORD = "enc.keystore.password";
+public static final String PROTOCOL = "enc.protocol";
+public static final String CIPHER_SUITES = "enc.cipher.suites";
+public static final int SOCKET_TIMEOUT = 0;
+
+private static final Set<String> SUPPORTED_OPTIONS = Sets.newHashSet(TRUSTSTORE,
+                                                                     TRUSTSTORE_PASSWORD,
+                                                                     KEYSTORE,
+                                                                     KEYSTORE_PASSWORD,
+                                                                     PROTOCOL,
+                                                                     CIPHER_SUITES);
+
+private String truststore;
+private String truststorePassword;
+private String keystore;
+private String keystorePassword;
+private String protocol;
+private String[] cipherSuites;
+
+@Override
+public TTransport openTransport(String host, int port) throws Exception
+{
+TSSLTransportFactory.TSSLTransportParameters params = new 
TSSLTransportFactory.TSSLTransportParameters(protocol, cipherSuites);
+params.setTrustStore(truststore, truststorePassword);
+if (null != keystore)
+params.setKeyStore(keystore, keystorePassword);
+TTransport trans = TSSLTransportFactory.getClientSocket(host, port, 
SOCKET_TIMEOUT, params);
+return new FramedTransportFactory().getTransport(trans);
+}
+
+@Override
+public void setOptions(Map<String, String> options)
+{
+if (options.containsKey(TRUSTSTORE))
+truststore = options.get(TRUSTSTORE);
+if (options.containsKey(TRUSTSTORE_PASSWORD))
+truststorePassword = options.get(TRUSTSTORE_PASSWORD);
+if (options.containsKey(KEYSTORE))
+keystore = options.get(KEYSTORE);
+if (options.containsKey(KEYSTORE_PASSWORD))
+keystorePassword = options.get(KEYSTORE_PASSWORD);
+if (options.containsKey(PROTOCOL))
+protocol = options.get(PROTOCOL);
+if (options.containsKey(CIPHER_SUITES))
+cipherSuites = 

[1/3] git commit: add SSLTransportFactory.java

2013-12-18 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 1b2a19037 - 4a6f8a661
  refs/heads/trunk 2e4d709d1 - 435f1b72c


add SSLTransportFactory.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4a6f8a66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4a6f8a66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4a6f8a66

Branch: refs/heads/cassandra-2.0
Commit: 4a6f8a6610aacbe2c518bb6f8533ee5bdb943f41
Parents: 1b2a190
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Wed Dec 18 18:01:28 2013 -0600
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Wed Dec 18 18:01:28 2013 -0600

--
 .../cassandra/thrift/SSLTransportFactory.java   | 86 
 1 file changed, 86 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a6f8a66/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java
--
diff --git a/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java 
b/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java
new file mode 100644
index 000..f828600
--- /dev/null
+++ b/src/java/org/apache/cassandra/thrift/SSLTransportFactory.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.thrift;
+
+import com.google.common.collect.Sets;
+import org.apache.cassandra.cli.transport.FramedTransportFactory;
+import org.apache.thrift.transport.TSSLTransportFactory;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+
+import java.util.Map;
+import java.util.Set;
+
+public class SSLTransportFactory implements ITransportFactory
+{
+public static final String TRUSTSTORE = "enc.truststore";
+public static final String TRUSTSTORE_PASSWORD = "enc.truststore.password";
+public static final String KEYSTORE = "enc.keystore";
+public static final String KEYSTORE_PASSWORD = "enc.keystore.password";
+public static final String PROTOCOL = "enc.protocol";
+public static final String CIPHER_SUITES = "enc.cipher.suites";
+public static final int SOCKET_TIMEOUT = 0;
+
+private static final Set<String> SUPPORTED_OPTIONS = Sets.newHashSet(TRUSTSTORE,
+                                                                     TRUSTSTORE_PASSWORD,
+                                                                     KEYSTORE,
+                                                                     KEYSTORE_PASSWORD,
+                                                                     PROTOCOL,
+                                                                     CIPHER_SUITES);
+
+private String truststore;
+private String truststorePassword;
+private String keystore;
+private String keystorePassword;
+private String protocol;
+private String[] cipherSuites;
+
+@Override
+public TTransport openTransport(String host, int port) throws Exception
+{
+TSSLTransportFactory.TSSLTransportParameters params = new 
TSSLTransportFactory.TSSLTransportParameters(protocol, cipherSuites);
+params.setTrustStore(truststore, truststorePassword);
+if (null != keystore)
+params.setKeyStore(keystore, keystorePassword);
+TTransport trans = TSSLTransportFactory.getClientSocket(host, port, 
SOCKET_TIMEOUT, params);
+return new FramedTransportFactory().getTransport(trans);
+}
+
+@Override
+public void setOptions(Map<String, String> options)
+{
+if (options.containsKey(TRUSTSTORE))
+truststore = options.get(TRUSTSTORE);
+if (options.containsKey(TRUSTSTORE_PASSWORD))
+truststorePassword = options.get(TRUSTSTORE_PASSWORD);
+if (options.containsKey(KEYSTORE))
+keystore = options.get(KEYSTORE);
+if (options.containsKey(KEYSTORE_PASSWORD))
+keystorePassword = options.get(KEYSTORE_PASSWORD);
+if (options.containsKey(PROTOCOL))
+protocol = 

[jira] [Commented] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-18 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852351#comment-13852351
 ] 

Jonathan Ellis commented on CASSANDRA-6378:
---

fixed
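
With the missing file now committed, a bulk load over SSL should look roughly 
like this (long option names inferred from the constants in the patch; 
truststore path and password hypothetical):
{noformat}
sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 \
    --transport-factory org.apache.cassandra.thrift.SSLTransportFactory \
    --truststore /path/to/truststore.jks --truststore-password secret \
    /tmp/import/keyspace_name/columnfamily_name
{noformat}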

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
Assignee: Sam Tunnicliffe
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4

 Attachments: 0001-CASSANDRA-6387-Add-SSL-support-to-BulkLoader.patch


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread main java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6373) describe_ring hangs with hsha thrift server

2013-12-18 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852463#comment-13852463
 ] 

Pavel Yaskevich commented on CASSANDRA-6373:


I had a look at the code to figure out why only that particular command is not 
working, but no success so far. It would be very helpful if you could attach 
the output of jstack taken on the server side while describe_ring hangs...
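
For example, something along these lines on the node where the call hangs (the 
pid lookup is illustrative):
{noformat}
jstack -l $(pgrep -f CassandraDaemon) > describe_ring_hang.jstack
{noformat}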

 describe_ring hangs with hsha thrift server
 ---

 Key: CASSANDRA-6373
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6373
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
Assignee: Pavel Yaskevich
 Fix For: 2.0.4

 Attachments: describe_ring_failure.patch


 There is a strange bug with the thrift hsha server in 2.0 (we switched to 
 lmax disruptor server).
 The bug is that the first call to describe_ring from one connection will hang 
 indefinitely when the client is not connecting from localhost (or it at least 
 looks like the client is not on the same host). Additionally the cluster must 
 be using vnodes. When connecting from localhost the first call will work as 
 expected. And in either case subsequent calls from the same connection will 
 work as expected. According to git bisect the bad commit is the switch to the 
 lmax disruptor server:
 https://github.com/apache/cassandra/commit/98eec0a223251ecd8fec7ecc9e46b05497d631c6
 I've attached the patch I used to reproduce the error in the unit tests. The 
 command to reproduce is: 
 {noformat}
 PYTHONPATH=test nosetests 
 --tests=system.test_thrift_server:TestMutations.test_describe_ring
 {noformat}
 I reproduced on ec2 and a single machine by having the server bind to the 
 private ip on ec2 and the client connect to the public ip (so it appears as 
 if the client is non local). I've also reproduced with two different vms 
 though.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6504) counters++

2013-12-18 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6504:


 Summary: counters++
 Key: CASSANDRA-6504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6504
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.1


Continuing CASSANDRA-4775 here.

We are changing counter write path to explicitly 
lock-read-modify-unlock-replicate, thus getting rid of the previously used 
'local' (deltas) and 'remote' shards distinction. Unfortunately, we can't 
simply start using 'remote' shards exclusively, since shard merge rules 
prioritise the 'local' shards. Which is why we are introducing the third shard 
type - 'global', the only shard type to be used in 2.1+.

The updated merge rules are going to look like this:

global + global = keep the shard with the highest logical clock ({count, clock} 
pair will actually be replaced with {increment count, decrement count} tuple - 
see CASSANDRA-)
global + local or remote = keep the global one
local + local = sum counts (and logical clock)
local + remote = keep the local one
remote + remote = keep the shard with highest logical clock
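
A compact way to read those rules (illustrative pseudocode in Java syntax; the 
Shard type and its accessors are hypothetical):
{code}
Shard merge(Shard a, Shard b)
{
    if (a.isGlobal() && b.isGlobal())
        return a.clock() >= b.clock() ? a : b;  // highest logical clock wins
    if (a.isGlobal() || b.isGlobal())
        return a.isGlobal() ? a : b;            // global beats local and remote
    if (a.isLocal() && b.isLocal())
        return new Shard(a.clock() + b.clock(), a.count() + b.count()); // sum both
    if (a.isLocal() || b.isLocal())
        return a.isLocal() ? a : b;             // local beats remote
    return a.clock() >= b.clock() ? a : b;      // remote + remote: highest clock
}
{code}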

This is required for backward compatibility with pre-2.1 counters. To make 
2.0-2.1 live upgrade possible, 'global' shard merge logic will have to be back 
ported to 2.0. 2.0 will not produce them, but will be able to understand the 
global shards coming from the 2.1 nodes during the live upgrade. See 
CASSANDRA-.

Other changes introduced in this issue:

1. replicate_on_write is gone. From now on we only avoid replication at RF 1.
2. REPLICATE_ON_WRITE stage is gone
3. counter mutations are running in their own COUNTER_MUTATION stage now
4. counter mutations have a separate counter_write_request_timeout setting
5. mergeAndRemoveOldShards() code is gone, for now, until/unless a better 
solution is found
6. we only replicate the fresh global shard now, not the complete (potentially 
quite large) counter context
7. to help with concurrency and reduce lock contention, we cache the node's global 
shards in a new counter cache ({cf id, partition key, cell name} -> {count, 
clock}). The cache is only used by counter writes, to help with 'hot' counters 
being simultaneously updated.

Improvements to be handled by separate JIRA issues:

1. Replace {count, clock} with {increment count, decrement count} tuple. When 
merging two global shards, the maximums of both will be picked. See 
CASSANDRA-. This goes into 2.1, and makes the new implementation match 
PN-Counters from the http://hal.inria.fr/docs/00/55/55/88/PDF/techreport.pdf 
white paper.
2. Split counter context into separate cells - one shard per cell. See 
CASSANDRA-. This goes into either 2.1 or 3.0.

Potential improvements still being debated:

1. Coalesce the mutations in COUNTER_MUTATION stage if they share the same 
partition key, and apply them together, to improve the locking situation when 
updating different counter cells in one partition. See CASSANDRA-. Will go 
into 2.1 or 3.0, if deemed beneficial.




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6505) counters++ global shards 2.0 back port

2013-12-18 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6505:


 Summary: counters++ global shards 2.0 back port
 Key: CASSANDRA-6505
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6505
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.4


CASSANDRA-6504 introduces a new type of shard - 'global' - to 2.1. To enable 
live upgrade from 2.0 to 2.1, it's necessary that 2.0 nodes are able to 
understand the new 'global' shards in the counter contexts.

2.0 nodes will not produce 'global' shards, but must contain the merge logic.

It isn't a trivial code change (non-trivial code in a non-trivial part of the 
code), hence this separate JIRA issue.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6506) counters++ split counter context shards into separate cells

2013-12-18 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6506:


 Summary: counters++ split counter context shards into separate 
cells
 Key: CASSANDRA-6506
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6506
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.1


This change is related to, but somewhat orthogonal to CASSANDRA-6504.

Currently all the shard tuples for a given counter cell are packed, in sorted 
order, in one binary blob. Thus reconciling N counter cells requires allocating 
a new byte buffer capable of holding the union of the two contexts' shards N-1 
times.

For writes, in post CASSANDRA-6504 world, it also means reading more data than 
we have to (the complete context, when all we need is the local node's global 
shard).

Splitting the context into separate cells, one cell per shard, will help to 
improve this. We did a similar thing with super columns for CASSANDRA-3237. 
Incidentally, doing this split is now possible thanks to CASSANDRA-3237.

Doing this would also simplify counter reconciliation logic. Getting rid of old 
contexts altogether can be done trivially with upgradesstables.




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6507) counters++ get rid of logical clock in global shards

2013-12-18 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6507:


 Summary: counters++ get rid of logical clock in global shards
 Key: CASSANDRA-6507
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6507
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1


In CASSANDRA-6504 the global shards still follow the {count, logical clock} 
pattern of the legacy shards. We could store the {increments, decrements} tuple 
in the shard instead, and for reconcile, instead of relying on the logical 
clock, pick the largest value of `increments` and `decrements` of the two 
shards, and use that.

E.g., shard1: {2000, 1001} (total 999), shard2: {2001, 1000} (total 1001). 
reconciled = {max(2000, 2001), max(1001, 1000)} = {2001, 1001} (total 1000).
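
The same example as straight-line code (field names hypothetical):
{code}
long increments = Math.max(a.increments, b.increments); // max(2000, 2001) = 2001
long decrements = Math.max(a.decrements, b.decrements); // max(1001, 1000) = 1001
long value = increments - decrements;                   // 2001 - 1001 = 1000
{code}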

While scenarios like this generally shouldn't happen post CASSANDRA-6504, this 
change costs us nothing, and makes issues like CASSANDRA-4417 theoretically 
impossible. This also makes our new implementation directly follow the 
http://hal.inria.fr/docs/00/55/55/88/PDF/techreport.pdf white paper.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6508) counters++ coalesce counter mutations with the same partition key

2013-12-18 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6508:


 Summary: counters++ coalesce counter mutations with the same 
partition key
 Key: CASSANDRA-6508
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6508
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko


CASSANDRA-6504 counter caching helps with hot counter cells, but doesn't really 
help when we have a hot counter partition and different cells within being 
updated simultaneously by different clients (the striped locks are 
partition-level, not cell-level).

To improve performance in this scenario, we could coalesce the mutations in 
COUNTER_MUTATION stage if they share the same partition key + target cf, and 
apply them together, sharing a single lock.
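
A rough sketch of that grouping step (all names hypothetical; the real stage 
would do this over its queue before acquiring locks):
{code}
// Group pending counter mutations by (cf id, partition key) so that each
// group can be applied under a single striped-lock acquisition.
Map<GroupKey, List<CounterMutation>> groups = new HashMap<GroupKey, List<CounterMutation>>();
for (CounterMutation mutation : pending)
{
    GroupKey key = new GroupKey(mutation.cfId(), mutation.partitionKey());
    List<CounterMutation> group = groups.get(key);
    if (group == null)
        groups.put(key, group = new ArrayList<CounterMutation>());
    group.add(mutation);
}
{code}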

If beneficial, this can go into 2.1.x or 3.0, doesn't have to be 2.1.0.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6504) counters++

2013-12-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6504:
-

Description: 
Continuing CASSANDRA-4775 here.

We are changing counter write path to explicitly 
lock-read-modify-unlock-replicate, thus getting rid of the previously used 
'local' (deltas) and 'remote' shards distinction. Unfortunately, we can't 
simply start using 'remote' shards exclusively, since shard merge rules 
prioritise the 'local' shards. Which is why we are introducing the third shard 
type - 'global', the only shard type to be used in 2.1+.

The updated merge rules are going to look like this:

global + global = keep the shard with the highest logical clock ({count, clock} 
pair will actually be replaced with {increment count, decrement count} tuple - 
see CASSANDRA-6507)
global + local or remote = keep the global one
local + local = sum counts (and logical clock)
local + remote = keep the local one
remote + remote = keep the shard with highest logical clock

This is required for backward compatibility with pre-2.1 counters. To make 
2.0-2.1 live upgrade possible, 'global' shard merge logic will have to be back 
ported to 2.0. 2.0 will not produce them, but will be able to understand the 
global shards coming from the 2.1 nodes during the live upgrade. See 
CASSANDRA-6505.

Other changes introduced in this issue:

1. replicate_on_write is gone. From now on we only avoid replication at RF 1.
2. REPLICATE_ON_WRITE stage is gone
3. counter mutations are running in their own COUNTER_MUTATION stage now
4. counter mutations have a separate counter_write_request_timeout setting
5. mergeAndRemoveOldShards() code is gone, for now, until/unless a better 
solution is found
6. we only replicate the fresh global shard now, not the complete (potentially 
quite large) counter context
7. to help with concurrency and reduce lock contention, we cache the node's global 
shards in a new counter cache ({cf id, partition key, cell name} -> {count, 
clock}). The cache is only used by counter writes, to help with 'hot' counters 
being simultaneously updated.

Improvements to be handled by separate JIRA issues:

1. Replace {count, clock} with {increment count, decrement count} tuple. When 
merging two global shards, the maximums of both will be picked. See 
CASSANDRA-6507. This goes into 2.1, and makes the new implementation match 
PN-Counters from the http://hal.inria.fr/docs/00/55/55/88/PDF/techreport.pdf 
white paper.
2. Split counter context into separate cells - one shard per cell. See 
CASSANDRA-6506. This goes into either 2.1 or 3.0.

Potential improvements still being debated:

1. Coalesce the mutations in COUNTER_MUTATION stage if they share the same 
partition key, and apply them together, to improve the locking situation when 
updating different counter cells in one partition. See CASSANDRA-6508. Will go 
into 2.1 or 3.0, if deemed beneficial.


  was:
Continuing CASSANDRA-4775 here.

We are changing counter write path to explicitly 
lock-read-modify-unlock-replicate, thus getting rid of the previously used 
'local' (deltas) and 'remote' shards distinction. Unfortunately, we can't 
simply start using 'remote' shards exclusively, since shard merge rules 
prioritise the 'local' shards. Which is why we are introducing the third shard 
type - 'global', the only shard type to be used in 2.1+.

The updated merge rules are going to look like this:

global + global = keep the shard with the highest logical clock ({count, clock} 
pair will actually be replaced with {increment count, decrement count} tuple - 
see CASSANDRA-)
global + local or remote = keep the global one
local + local = sum counts (and logical clock)
local + remote = keep the local one
remote + remote = keep the shard with highest logical clock

This is required for backward compatibility with pre-2.1 counters. To make 
2.0-2.1 live upgrade possible, 'global' shard merge logic will have to be back 
ported to 2.0. 2.0 will not produce them, but will be able to understand the 
global shards coming from the 2.1 nodes during the live upgrade. See 
CASSANDRA-.

Other changes introduced in this issue:

1. replicate_on_write is gone. From now on we only avoid replication at RF 1.
2. The REPLICATE_ON_WRITE stage is gone.
3. Counter mutations now run in their own COUNTER_MUTATION stage.
4. Counter mutations have a separate counter_write_request_timeout setting.
5. The mergeAndRemoveOldShards() code is gone, for now, until/unless a better 
solution is found.
6. We only replicate the fresh global shard now, not the complete (potentially 
quite large) counter context.
7. To help with concurrency and reduce lock contention, we cache the node's 
global shards in a new counter cache ({cf id, partition key, cell name} -> 
{count, clock}). The cache is only used by counter writes, to help with 'hot' 
counters being simultaneously updated.

Improvements to 

[jira] [Resolved] (CASSANDRA-4775) Counters 2.0

2013-12-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-4775.
--

Resolution: Duplicate

 Counters 2.0
 

 Key: CASSANDRA-4775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4775
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Arya Goudarzi
Assignee: Aleksey Yeschenko
  Labels: counters
 Fix For: 2.1


 The existing partitioned counters remain a source of frustration for most 
 users almost two years after being introduced.  The remaining problems are 
 inherent in the design, not something that can be fixed given enough 
 time/eyeballs.
 Ideally a solution would give us
 - similar performance
 - fewer special cases in the code
 - potential for a retry mechanism



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-4775) Counters 2.0

2013-12-18 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852493#comment-13852493
 ] 

Aleksey Yeschenko commented on CASSANDRA-4775:
--

So, this thread has become quite overloaded. I'll summarize it briefly in this 
comment, and then move the actual work/discussion to CASSANDRA-6504.

The initial idea for the new design (a new cell for each increment/decrement, 
then summing up on reads) and its variations didn't work out, for one reason or 
another. The largest problems were the coordination required to collapse the 
increment history and the difficulty of keeping it backward compatible with the 
current implementation.

We decided to go for incremental improvements instead - namely, to stop using 
'local' shards altogether and do an explicit read-modify-write with just one 
shard type ('global'). See 
https://issues.apache.org/jira/browse/CASSANDRA-4775?focusedCommentId=13702042&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13702042
 and the comments following it (plus 
https://issues.apache.org/jira/browse/CASSANDRA-4071?focusedCommentId=13483381&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13483381).

This will fix, *at minimum*, the overcounting issue with commit log replay, 
CASSANDRA-4417, and CASSANDRA-4071, and, together with some related 
improvements, drastically simplify the counters code in general.


 Counters 2.0
 

 Key: CASSANDRA-4775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4775
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Arya Goudarzi
Assignee: Aleksey Yeschenko
  Labels: counters
 Fix For: 2.1


 The existing partitioned counters remain a source of frustration for most 
 users almost two years after being introduced.  The remaining problems are 
 inherent in the design, not something that can be fixed given enough 
 time/eyeballs.
 Ideally a solution would give us
 - similar performance
 - fewer special cases in the code
 - potential for a retry mechanism



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (CASSANDRA-6509) CQL collection list throws error on delete (hiding the error will help)

2013-12-18 Thread Pardeep Singh (JIRA)
Pardeep Singh created CASSANDRA-6509:


 Summary: CQL collection list throws error on delete (hiding the 
error will help)
 Key: CASSANDRA-6509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6509
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pardeep Singh
Priority: Minor


Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = 'aaa';
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.
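
For reference, a minimal client-side sketch of this capped-list pattern 
(assuming the DataStax Java driver; the keyspace name 'ks' is a placeholder, 
and the table follows the plays/scores example above):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Sketch only: trim the tail and prepend a post ID in one unlogged batch.
// Today the DELETE raises an error when the list is shorter than expected,
// which fails the whole batch - the behaviour this issue asks to relax.
public class CapScoresExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect("ks");
            session.execute("BEGIN UNLOGGED BATCH " +
                            "DELETE scores[499] FROM plays WHERE id = 'aaa'; " +
                            "UPDATE plays SET scores = [2] + scores WHERE id = 'aaa'; " +
                            "APPLY BATCH;");
        }
        finally
        {
            cluster.close();
        }
    }
}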



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6509) CQL collection list throws error on delete (hiding the error will help)

2013-12-18 Thread Pardeep Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pardeep Singh updated CASSANDRA-6509:
-

Description: 
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = 'aaa';
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.

  was:
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = 'aaa';
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.


 CQL collection list throws error on delete (hiding the error will help)
 ---

 Key: CASSANDRA-6509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6509
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pardeep Singh
Priority: Minor
  Labels: cql3

 Currently, as of CQL 3.1, a collection list delete throws an error:
 DELETE scores[1] FROM plays WHERE id = '123-afde';   // 
 deletes the 2nd element of scores (raises an error if scores has fewer than 2 
 elements)
 The above query is not an issue as a single query, since I can ignore the 
 error. But if I put it in a batch, the query will fail and the whole batch 
 will fail with it.
 I was trying to accomplish this:
 BEGIN UNLOGGED BATCH
 DELETE scores[499] FROM plays WHERE id = '123-afde';
 DELETE scores[499] FROM plays WHERE id = '144-afde';
 APPLY BATCH;
 My main goal is to keep a list of the 500 most recent posts and delete the 
 rest. So I'm inserting into the list by prepending an ID, then deleting from 
 the end of the list. I can deal with the list not being exactly 500 posts; 
 the point is to keep it close to that number.
 I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
 using UNLOGGED BATCH since deleting is not a critical part of the process.
 By not throwing an error on the above query, other use cases can be 
 implemented:
 BEGIN UNLOGGED BATCH
 DELETE scores[499] FROM plays WHERE id = 'aaa';
 UPDATE plays SET scores=[2]+scores WHERE id='aaa';
 APPLY BATCH;
 By using an atomic BATCH, I can cap the list at 500 elements.
 It would help even if you could provide a way to bypass the delete error 
 using some special directive, so the BATCH could still be processed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6509) CQL collection list throws error on delete (hiding the error will help)

2013-12-18 Thread Pardeep Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pardeep Singh updated CASSANDRA-6509:
-

Description: 
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = 'aaa';
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.

  was:
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = 'aaa';
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.


 CQL collection list throws error on delete (hiding the error will help)
 ---

 Key: CASSANDRA-6509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6509
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pardeep Singh
Priority: Minor
  Labels: cql3

 Currently, as of CQL 3.1, a collection list delete throws an error:
 DELETE scores[1] FROM plays WHERE id = '123-afde';   // 
 deletes the 2nd element of scores (raises an error if scores has fewer than 2 
 elements)
 The above query is not an issue as a single query, since I can ignore the 
 error. But if I put it in a batch, the query will fail and the whole batch 
 will fail with it.
 I was trying to accomplish this:
 BEGIN UNLOGGED BATCH
 DELETE scores[499] FROM plays WHERE id = '123-afde';
 DELETE scores[499] FROM plays WHERE id = '144-afde';
 APPLY BATCH;
 My main goal is to keep a list of the 500 most recent posts and delete the 
 rest. So I'm inserting into the list by prepending an ID, then deleting from 
 the end of the list. I can deal with the list not being exactly 500 posts; 
 the point is to keep it close to that number.
 I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
 using UNLOGGED BATCH since deleting is not a critical part of the process.
 By not throwing an error on the above query, other use cases can be 
 implemented:
 BEGIN UNLOGGED BATCH
 DELETE scores[499] FROM plays WHERE id = 'aaa';
 UPDATE plays SET scores=[2]+scores WHERE id='aaa';
 APPLY BATCH;
 By using an atomic BATCH, I can cap the list at 500 elements.
 It would help even if you could provide a way to bypass the delete error 
 using some special directive, so the BATCH could still be processed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6509) CQL collection list throws error on delete (hiding the error will help)

2013-12-18 Thread Pardeep Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pardeep Singh updated CASSANDRA-6509:
-

Description: 
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN BATCH
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
DELETE scores[500] FROM plays WHERE id = 'aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.

  was:
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = 'aaa';
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.


 CQL collection list throws error on delete (hiding the error will help)
 ---

 Key: CASSANDRA-6509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6509
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pardeep Singh
Priority: Minor
  Labels: cql3

 Currently, as of CQL 3.1, a collection list delete throws an error:
 DELETE scores[1] FROM plays WHERE id = '123-afde';   // 
 deletes the 2nd element of scores (raises an error if scores has fewer than 2 
 elements)
 The above query is not an issue as a single query, since I can ignore the 
 error. But if I put it in a batch, the query will fail and the whole batch 
 will fail with it.
 I was trying to accomplish this:
 BEGIN UNLOGGED BATCH
 DELETE scores[499] FROM plays WHERE id = '123-afde';
 DELETE scores[499] FROM plays WHERE id = '144-afde';
 APPLY BATCH;
 My main goal is to keep a list of the 500 most recent posts and delete the 
 rest. So I'm inserting into the list by prepending an ID, then deleting from 
 the end of the list. I can deal with the list not being exactly 500 posts; 
 the point is to keep it close to that number.
 I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
 using UNLOGGED BATCH since deleting is not a critical part of the process.
 By not throwing an error on the above query, other use cases can be 
 implemented:
 BEGIN BATCH
 UPDATE plays SET scores=[2]+scores WHERE id='aaa';
 DELETE scores[500] FROM plays WHERE id = 'aaa';
 APPLY BATCH;
 By using an atomic BATCH, I can cap the list at 500 elements.
 It would help even if you could provide a way to bypass the delete error 
 using some special directive, so the BATCH could still be processed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6509) CQL collection list throws error on delete (hiding the error will help)

2013-12-18 Thread Pardeep Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pardeep Singh updated CASSANDRA-6509:
-

Labels: collections cql3 list  (was: cql3)

 CQL collection list throws error on delete (hiding the error will help)
 ---

 Key: CASSANDRA-6509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6509
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pardeep Singh
Priority: Minor
  Labels: collections, cql3, list

 Currently, as of CQL 3.1, a collection list delete throws an error:
 DELETE scores[1] FROM plays WHERE id = '123-afde';   // 
 deletes the 2nd element of scores (raises an error if scores has fewer than 2 
 elements)
 The above query is not an issue as a single query, since I can ignore the 
 error. But if I put it in a batch, the query will fail and the whole batch 
 will fail with it.
 I was trying to accomplish this:
 BEGIN UNLOGGED BATCH
 DELETE scores[499] FROM plays WHERE id = '123-afde';
 DELETE scores[499] FROM plays WHERE id = '144-afde';
 APPLY BATCH;
 My main goal is to keep a list of the 500 most recent posts and delete the 
 rest. So I'm inserting into the list by prepending an ID, then deleting from 
 the end of the list. I can deal with the list not being exactly 500 posts; 
 the point is to keep it close to that number.
 I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
 using UNLOGGED BATCH since deleting is not a critical part of the process.
 By not throwing an error on the above query, other use cases can be 
 implemented:
 BEGIN BATCH
 UPDATE plays SET scores=[2]+scores WHERE id='aaa';
 DELETE scores[500] FROM plays WHERE id = 'aaa';
 APPLY BATCH;
 By using an atomic BATCH, I can cap the list at 500 elements.
 It would help even if you could provide a way to bypass the delete error 
 using some special directive, so the BATCH could still be processed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (CASSANDRA-6509) CQL collection list throws error on delete (hiding the error will help)

2013-12-18 Thread Pardeep Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pardeep Singh updated CASSANDRA-6509:
-

Description: 
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
UPDATE plays SET scores=[2]+scores WHERE id='bbb';
DELETE scores[500] FROM plays WHERE id = 'aaa';
DELETE scores[500] FROM plays WHERE id = 'bbb';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN BATCH
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
DELETE scores[500] FROM plays WHERE id = 'aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.

  was:
Currently, as of CQL 3.1, a collection list delete throws an error:

DELETE scores[1] FROM plays WHERE id = '123-afde';   // deletes 
the 2nd element of scores (raises an error if scores has fewer than 2 elements)

The above query is not an issue as a single query, since I can ignore the 
error. But if I put it in a batch, the query will fail and the whole batch 
will fail with it.

I was trying to accomplish this:
BEGIN UNLOGGED BATCH
DELETE scores[499] FROM plays WHERE id = '123-afde';
DELETE scores[499] FROM plays WHERE id = '144-afde';
APPLY BATCH;

My main goal is to keep a list of the 500 most recent posts and delete the 
rest. So I'm inserting into the list by prepending an ID, then deleting from 
the end of the list. I can deal with the list not being exactly 500 posts; 
the point is to keep it close to that number.
I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
using UNLOGGED BATCH since deleting is not a critical part of the process.

By not throwing an error on the above query, other use cases can be 
implemented:
BEGIN BATCH
UPDATE plays SET scores=[2]+scores WHERE id='aaa';
DELETE scores[500] FROM plays WHERE id = 'aaa';
APPLY BATCH;
By using an atomic BATCH, I can cap the list at 500 elements.

It would help even if you could provide a way to bypass the delete error 
using some special directive, so the BATCH could still be processed.


 CQL collection list throws error on delete (hiding the error will help)
 ---

 Key: CASSANDRA-6509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6509
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pardeep Singh
Priority: Minor
  Labels: collections, cql3, list

 Currently, as of CQL 3.1, a collection list delete throws an error:
 DELETE scores[1] FROM plays WHERE id = '123-afde';   // 
 deletes the 2nd element of scores (raises an error if scores has fewer than 2 
 elements)
 The above query is not an issue as a single query, since I can ignore the 
 error. But if I put it in a batch, the query will fail and the whole batch 
 will fail with it.
 I was trying to accomplish this:
 BEGIN UNLOGGED BATCH
 UPDATE plays SET scores=[2]+scores WHERE id='aaa';
 UPDATE plays SET scores=[2]+scores WHERE id='bbb';
 DELETE scores[500] FROM plays WHERE id = 'aaa';
 DELETE scores[500] FROM plays WHERE id = 'bbb';
 APPLY BATCH;
 My main goal is to keep a list of the 500 most recent posts and delete the 
 rest. So I'm inserting into the list by prepending an ID, then deleting from 
 the end of the list. I can deal with the list not being exactly 500 posts; 
 the point is to keep it close to that number.
 I'm doing this in bulk, so using BATCH helps improve performance, and I'm 
 using UNLOGGED BATCH since deleting is not a critical part of the process.
 By not throwing an error on the above query, other use cases can be 
 implemented:
 BEGIN BATCH
 UPDATE plays SET scores=[2]+scores WHERE id='aaa';
 DELETE scores[500] FROM plays WHERE id = 'aaa';
 APPLY BATCH;
 By using an atomic BATCH, I can cap the list at 500 elements.
 It would help even if you could provide a way to bypass the delete error 
 using some special directive, so the BATCH could still be processed.

git commit: remove dead code

2013-12-18 Thread dbrosius
Updated Branches:
  refs/heads/trunk 435f1b72c -> d276d0a06


remove dead code


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d276d0a0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d276d0a0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d276d0a0

Branch: refs/heads/trunk
Commit: d276d0a0638ff6d80ce8749d2afc1eaa5cfbb14a
Parents: 435f1b7
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Dec 18 23:27:47 2013 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Dec 18 23:27:47 2013 -0500

--
 test/unit/org/apache/cassandra/SchemaLoader.java | 11 ---
 .../unit/org/apache/cassandra/db/SerializationsTest.java |  2 +-
 2 files changed, 5 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d276d0a0/test/unit/org/apache/cassandra/SchemaLoader.java
--
diff --git a/test/unit/org/apache/cassandra/SchemaLoader.java 
b/test/unit/org/apache/cassandra/SchemaLoader.java
index 90dc629..df74108 100644
--- a/test/unit/org/apache/cassandra/SchemaLoader.java
+++ b/test/unit/org/apache/cassandra/SchemaLoader.java
@@ -34,7 +34,6 @@ import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.commitlog.CommitLog;
 import org.apache.cassandra.db.compaction.LeveledCompactionStrategy;
-import org.apache.cassandra.db.filter.QueryFilter;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.gms.Gossiper;
@@ -51,12 +50,12 @@ public class SchemaLoader
 private static Logger logger = LoggerFactory.getLogger(SchemaLoader.class);
 
 @BeforeClass
-public static void loadSchema() throws IOException, ConfigurationException
+public static void loadSchema() throws ConfigurationException
 {
 loadSchema(false);
 }
 
-public static void loadSchema(boolean withOldCfIds) throws IOException, 
ConfigurationException
+public static void loadSchema(boolean withOldCfIds) throws 
ConfigurationException
 {
 // Cleanup first
 cleanupAndLeaveDirs();
@@ -115,8 +114,6 @@ public class SchemaLoader
Map<String, String> opts_rf3 = KSMetaData.optsWithRF(3);
Map<String, String> opts_rf5 = KSMetaData.optsWithRF(5);
 
-ColumnFamilyType st = ColumnFamilyType.Standard;
-ColumnFamilyType su = ColumnFamilyType.Super;
 AbstractType bytes = BytesType.instance;
 
AbstractType<?> composite = 
CompositeType.getInstance(Arrays.asList(new 
AbstractType<?>[]{BytesType.instance, TimeUUIDType.instance, 
IntegerType.instance}));
@@ -397,7 +394,7 @@ public class SchemaLoader
 DatabaseDescriptor.createAllDirectories();
 }
 
-protected void insertData(String keyspace, String columnFamily, int 
offset, int numberOfRows) throws IOException
+protected void insertData(String keyspace, String columnFamily, int 
offset, int numberOfRows)
 {
for (int i = offset; i < offset + numberOfRows; i++)
 {
@@ -409,7 +406,7 @@ public class SchemaLoader
 }
 
 /* usually used to populate the cache */
-protected void readData(String keyspace, String columnFamily, int offset, 
int numberOfRows) throws IOException
+protected void readData(String keyspace, String columnFamily, int offset, 
int numberOfRows)
 {
 ColumnFamilyStore store = 
Keyspace.open(keyspace).getColumnFamilyStore(columnFamily);
for (int i = offset; i < offset + numberOfRows; i++)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d276d0a0/test/unit/org/apache/cassandra/db/SerializationsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/SerializationsTest.java 
b/test/unit/org/apache/cassandra/db/SerializationsTest.java
index e3a6077..983a8f7 100644
--- a/test/unit/org/apache/cassandra/db/SerializationsTest.java
+++ b/test/unit/org/apache/cassandra/db/SerializationsTest.java
@@ -50,7 +50,7 @@ public class SerializationsTest extends 
AbstractSerializationsTester
 Statics statics = new Statics();
 
 @BeforeClass
-public static void loadSchema() throws IOException, ConfigurationException
+public static void loadSchema() throws ConfigurationException
 {
 loadSchema(true);
 }



[jira] [Commented] (CASSANDRA-6495) LOCAL_SERIAL uses QUORUM consistency level to validate expected columns

2013-12-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852668#comment-13852668
 ] 

sankalp kohli commented on CASSANDRA-6495:
--

[~jbellis]
For this, should we use LOCAL_QUORUM for LOCAL_SERIAL, or use the consistency 
level of the commit? I think adding a third CL for this would be too 
confusing, so I think we can use the CL of the commit for validating columns.
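
A sketch of the LOCAL_QUORUM option mentioned above (a hypothetical helper, 
not actual Cassandra code; the comment itself leans towards using the CL of 
the commit instead):

import org.apache.cassandra.db.ConsistencyLevel;

// Hypothetical helper: derive the CL used to validate the expected columns
// from the serial CL of the CAS operation, instead of always reading at QUORUM.
final class CasValidationCL
{
    static ConsistencyLevel forSerialCL(ConsistencyLevel serial)
    {
        // LOCAL_SERIAL should stay within the local DC, so validate at
        // LOCAL_QUORUM; plain SERIAL keeps the existing QUORUM read.
        return serial == ConsistencyLevel.LOCAL_SERIAL
             ? ConsistencyLevel.LOCAL_QUORUM
             : ConsistencyLevel.QUORUM;
    }
}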

 LOCAL_SERIAL uses QUORUM consistency level to validate expected columns
 ---

 Key: CASSANDRA-6495
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6495
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor

 If CAS is done at the LOCAL_SERIAL consistency level, only the nodes from the 
 local data center should be involved. 
 Here we are using QUORUM to validate the expected columns. This will require 
 nodes from more than one DC. 
 We should use LOCAL_QUORUM here when CAS is done at LOCAL_SERIAL. 
 Also, if we have 2 DCs with DC1:3,DC2:3, a single DC being down will cause 
 CAS to not work even for LOCAL_SERIAL. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6507) counters++ get rid of logical clock in global shards

2013-12-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852713#comment-13852713
 ] 

Sylvain Lebresne commented on CASSANDRA-6507:
-

I'm not sure about this one. Or at least:
* if/when we do CASSANDRA-6506, the clock becomes the cell timestamp and we 
don't have to have any special reconciliation rule for counter cells (which 
will thus be entirely normal cells). If we do this ticket, that's no longer 
true: reconciliation will remain based on the cell value and we'll still need 
special reconciliation rules.
* it feels like the code we'll need during the switch will be pretty ugly 
(since basically the new global shard would have 2 counts while the old one 
will still have a count and a clock, and both will sometimes be in the same 
context). Plus we'd need to be extra careful about the merge rules between old 
and new shards. Very probably doable, but likely a tad ugly.
* saying that this would make CASSANDRA-4417 impossible is a bit disingenuous. 
If we were using increment+decrement today, this wouldn't make the underlying 
problems of CASSANDRA-4417 go away, and we'd still end up with potentially 
corrupted counters. The only difference might be that we wouldn't have a good 
way to detect it and log an error, but that's hardly an advantage.


 counters++ get rid of logical clock in global shards
 

 Key: CASSANDRA-6507
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6507
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1


 In CASSANDRA-6504 the global shards still follow the {count, logical clock} 
 pattern of the legacy shards. We could store an {increments, decrements} 
 tuple in the shard instead, and for reconcile, instead of relying on the 
 logical clock, pick the larger value of `increments` and `decrements` from 
 the two shards, and use that.
 E.g., shard1: {2000, 1001} (total 999), shard2: {2001, 1000} (total 1001); 
 reconciled = {max(2000, 2001), max(1001, 1000)} = {2001, 1001} (total 1000).
 While scenarios like this generally shouldn't happen post CASSANDRA-6504, 
 this change costs us nothing, and makes issues like CASSANDRA-4417 
 theoretically impossible. This also makes our new implementation directly 
 follow the http://hal.inria.fr/docs/00/55/55/88/PDF/techreport.pdf white 
 paper.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)