[jira] [Updated] (CASSANDRA-5631) NPE when creating column family shortly after multinode startup

2014-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5631:
-

Attachment: 5631.txt

 NPE when creating column family shortly after multinode startup
 ---

 Key: CASSANDRA-5631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5631
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Martin Serrano
Assignee: Aleksey Yeschenko
 Attachments: 5631.txt


 I'm testing a 2-node cluster and creating a column family right after the 
 nodes start up. I am using the Astyanax client. Sometimes column family 
 creation fails and I see NPEs on the Cassandra server:
 {noformat}
 2013-06-12 14:55:31,773 ERROR CassandraDaemon [MigrationStage:1] - Exception 
 in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
   at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:510)
   at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:444)
   at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:354)
   at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}
 {noformat}
 2013-06-12 14:55:31,880 ERROR CassandraDaemon [MigrationStage:1] - Exception 
 in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
   at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:475)
   at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:354)
   at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5631) NPE when creating column family shortly after multinode startup

2014-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5631:
-

Reviewer: Sylvain Lebresne

 NPE when creating column family shortly after multinode startup
 ---

 Key: CASSANDRA-5631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5631
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Martin Serrano
Assignee: Aleksey Yeschenko
 Attachments: 5631.txt


 I'm testing a 2-node cluster and creating a column family right after the 
 nodes start up. I am using the Astyanax client. Sometimes column family 
 creation fails and I see NPEs on the Cassandra server:
 {noformat}
 2013-06-12 14:55:31,773 ERROR CassandraDaemon [MigrationStage:1] - Exception 
 in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
   at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:510)
   at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:444)
   at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:354)
   at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}
 {noformat}
 2013-06-12 14:55:31,880 ERROR CassandraDaemon [MigrationStage:1] - Exception 
 in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
   at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:475)
   at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:354)
   at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


git commit: Support negative timestamps in DateType.fromString

2014-02-19 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 6dfca3d32 -> e787b7a4c


Support negative timestamps in DateType.fromString

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6718


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e787b7a4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e787b7a4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e787b7a4

Branch: refs/heads/cassandra-1.2
Commit: e787b7a4c7f69cf486c7d5b6c53bfb88086b5261
Parents: 6dfca3d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:41:21 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:41:21 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f146166..47fc3a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+ * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 875169d..ad165ee 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -92,7 +92,7 @@ public class DateType extends AbstractType<Date>
   millis = System.currentTimeMillis();
   }
   // Milliseconds since epoch?
-  else if (source.matches("^\\d+$"))
+  else if (source.matches("^-?\\d+$"))
   {
   try
   {
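
For reference, a minimal standalone illustration (not Cassandra code; the class name and sample value are made up) of what the one-character regex change above enables: with the optional leading minus sign, pre-epoch millisecond strings can be parsed into dates.

{code}
import java.util.Date;

public class NegativeTimestampDemo
{
    public static void main(String[] args)
    {
        String source = "-86400000"; // one day before the Unix epoch, in milliseconds

        // The old pattern "^\\d+$" rejects the leading '-'; the new "^-?\\d+$" accepts it.
        if (source.matches("^-?\\d+$"))
        {
            long millis = Long.parseLong(source);
            System.out.println(new Date(millis)); // prints a date on 1969-12-31 (UTC)
        }
    }
}
{code}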



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0520aeb7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0520aeb7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0520aeb7

Branch: refs/heads/cassandra-2.0
Commit: 0520aeb7e751626b62268c1495c941d33b01cdfb
Parents: e67a0a9 e787b7a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:42:25 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:42:25 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0520aeb7/CHANGES.txt
--
diff --cc CHANGES.txt
index 95faf18,47fc3a3..2cacbaa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,34 -10,24 +30,35 @@@ Merged from 1.2
   * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
   * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
   * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
  
  
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0520aeb7/src/java/org/apache/cassandra/db/marshal/DateType.java
--



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-02-19 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89b8b1a7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89b8b1a7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89b8b1a7

Branch: refs/heads/trunk
Commit: 89b8b1a757f17ec8e9de476468b9cb6b9fb42832
Parents: d5e9644 0520aeb
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:43:34 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:43:34 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/89b8b1a7/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/89b8b1a7/src/java/org/apache/cassandra/db/marshal/DateType.java
--



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0520aeb7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0520aeb7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0520aeb7

Branch: refs/heads/trunk
Commit: 0520aeb7e751626b62268c1495c941d33b01cdfb
Parents: e67a0a9 e787b7a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:42:25 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:42:25 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0520aeb7/CHANGES.txt
--
diff --cc CHANGES.txt
index 95faf18,47fc3a3..2cacbaa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,34 -10,24 +30,35 @@@ Merged from 1.2
   * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
   * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
   * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
  
  
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0520aeb7/src/java/org/apache/cassandra/db/marshal/DateType.java
--



[1/3] git commit: Support negative timestamps in DateType.fromString

2014-02-19 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk d5e9644f2 -> 89b8b1a75


Support negative timestamps in DateType.fromString

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6718


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e787b7a4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e787b7a4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e787b7a4

Branch: refs/heads/trunk
Commit: e787b7a4c7f69cf486c7d5b6c53bfb88086b5261
Parents: 6dfca3d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:41:21 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:41:21 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f146166..47fc3a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+ * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 875169d..ad165ee 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -92,7 +92,7 @@ public class DateType extends AbstractType<Date>
   millis = System.currentTimeMillis();
   }
   // Milliseconds since epoch?
-  else if (source.matches("^\\d+$"))
+  else if (source.matches("^-?\\d+$"))
   {
   try
   {



[1/2] git commit: Support negative timestamps in DateType.fromString

2014-02-19 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 e67a0a981 -> 0520aeb7e


Support negative timestamps in DateType.fromString

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6718


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e787b7a4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e787b7a4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e787b7a4

Branch: refs/heads/cassandra-2.0
Commit: e787b7a4c7f69cf486c7d5b6c53bfb88086b5261
Parents: 6dfca3d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:41:21 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:41:21 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f146166..47fc3a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+ * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 875169d..ad165ee 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -92,7 +92,7 @@ public class DateType extends AbstractType<Date>
   millis = System.currentTimeMillis();
   }
   // Milliseconds since epoch?
-  else if (source.matches("^\\d+$"))
+  else if (source.matches("^-?\\d+$"))
   {
   try
   {



[jira] [Updated] (CASSANDRA-6726) Recycle CRAR/RAR buffers independently of their owners, and move them off-heap when possible

2014-02-19 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6726:


Description: Whilst CRAR and RAR are pooled, we could and probably should 
pool the buffers independently, so that they are not tied to a specific 
sstable. It may be possible to move the RAR buffer off-heap, and the CRAR 
sometimes (e.g. Snappy may possibly support off-heap buffers)  (was: It seems 
like this should be a reasonably easy and quick win.)
 Issue Type: Improvement  (was: Bug)
Summary: Recycle CRAR/RAR buffers independently of their owners, and 
move them off-heap when possible  (was: Recycle CompressedRandomAccessReader 
and RandomAccessReader buffers, and move them off-heap)

 Recycle CRAR/RAR buffers independently of their owners, and move them 
 off-heap when possible
 

 Key: CASSANDRA-6726
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6726
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1


 Whilst CRAR and RAR are pooled, we could and probably should pool the buffers 
 independently, so that they are not tied to a specific sstable. It may be 
 possible to move the RAR buffer off-heap, and the CRAR sometimes (e.g. Snappy 
 may possibly support off-heap buffers)
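
As a rough illustration of the proposal (a sketch only, not Cassandra's eventual implementation; the class name, buffer size, and use of direct buffers are assumptions), decoupling buffers from their readers could look like a small free list that any reader borrows from and returns to:

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: buffers are recycled across readers (and sstables) instead of
// living and dying with a single CRAR/RAR instance.
public final class BufferPool
{
    private static final int BUFFER_SIZE = 64 * 1024; // illustrative size
    private static final ConcurrentLinkedQueue<ByteBuffer> FREE = new ConcurrentLinkedQueue<>();

    // Reuse a pooled buffer if one is available, otherwise allocate off-heap.
    public static ByteBuffer acquire()
    {
        ByteBuffer buffer = FREE.poll();
        return buffer != null ? buffer : ByteBuffer.allocateDirect(BUFFER_SIZE);
    }

    // Return the buffer so any other reader can pick it up later.
    public static void release(ByteBuffer buffer)
    {
        buffer.clear();
        FREE.offer(buffer);
    }
}
{code}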



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (CASSANDRA-6718) DateType should accept pre-epoch (negative) timestamp in fromString

2014-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6718.
-

Resolution: Fixed
  Reviewer: Aleksey Yeschenko

Committed, thanks

 DateType should accept pre-epoch (negative) timestamp in fromString
 ---

 Key: CASSANDRA-6718
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6718
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.2.16

 Attachments: 6718.txt


 The DateType.fromString() method doesn't accept negative timestamps for pre-epoch 
 dates. It really just seems to be an oversight: the regexp that recognizes 
 digits does not recognize the minus sign. DateType has no problem with 
 negative timestamps otherwise, and in fact accepts them if you provide them as 
 date strings.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2014-02-19 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905320#comment-13905320
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

I think it's not stopping Java in general that allows deleting, but stopping the 
Cassandra process that keeps the file handles open. When I first studied the C* 
code, I found it difficult to see how file handles are spread around in 
different classes. It was not easy for me to keep an overview, but maybe that's 
because I have less Java experience than C++, where I had to manage handles 
(files, memory) myself. Later in C++ there were auto pointers (boost/TR1) with a 
finalizer-like approach, where memory was freed automatically (analogous to 
closing files) when the handle went out of scope or the programmer forgot to 
free/close it. I don't know where file handles show up in the Linux process 
explorer. But if I close every file handle after use (even for reads only), does 
it matter what's in the process explorer, as long as C* is able to delete? BTW, 
if only a reading process fails to close its file handle, it already makes 
sense to forbid the delete, as long as the OS knows the file is still in use. 
For me it's a failure of the OS, as the manager, to allow deleting a file while 
there are still file handles on it. This strict behavior also lets me detect 
failures in my program logic.

 Windows 7 data files keept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
  Labels: compaction
 Fix For: 2.0.6

 Attachments: leakdetect.patch, screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deleting problem) 
 doesn't help on Win-7 on Cassandra 2.0.2. Even a 2.1 snapshot build is not 
 running. The cause is: opened file handles seem to be lost and not closed 
 properly. Win 7 complains that another process is still using the file (but 
 it's obviously Cassandra). Only a restart of the server gets the files deleted. 
 But after heavy use (changes) of tables, there are about 24K files in the data 
 folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found out that a finalizer fixes the problem. So after GC the 
 files will be deleted (not optimal, but working fine). It has now run for 2 
 days continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code:title=RandomAccessReader.java|borderStyle=solid}
 @Override
 protected void finalize() throws Throwable {
   deallocate();
   super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.
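
For context, a minimal standalone demonstration (not Cassandra code; file and class names are made up) of the Windows behaviour described above: a file with an open handle cannot be deleted, which is why readers that are never deallocated keep compacted sstables on disk until the process exits.

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class OpenHandleDeleteDemo
{
    public static void main(String[] args) throws IOException
    {
        File f = File.createTempFile("demo", ".db");
        RandomAccessFile reader = new RandomAccessFile(f, "r");

        // On Windows this prints false: the open handle blocks the delete.
        System.out.println("delete while open:  " + f.delete());

        reader.close();

        // Once the handle is closed, the delete succeeds.
        System.out.println("delete after close: " + f.delete());
    }
}
{code}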



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2014-02-19 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905320#comment-13905320
 ] 

Andreas Schnitzerling edited comment on CASSANDRA-6283 at 2/19/14 11:54 AM:


I think it's not stopping Java in general that allows deleting, but stopping the 
Cassandra process that keeps the file handles open. When I first studied the C* 
code, I found it difficult to see how file handles are spread around in 
different classes. It was not easy for me to keep an overview, but maybe that's 
because I have less Java experience than C++, where I had to manage handles 
(files, memory) myself. Later in C++ there were auto pointers (boost/TR1) with a 
finalizer-like approach, where memory was freed automatically (analogous to 
closing files) when the handle went out of scope or the programmer forgot to 
free/close it. I don't know where file handles show up in the Linux process 
explorer. But if I close every file handle after use (even for reads only), does 
it matter what's in the process explorer, as long as C* is able to delete? BTW, 
if only a reading process fails to close its file handle, it already makes 
sense to me to forbid the delete, as long as the OS knows the file is still in 
use, except that the user may have to kill or restart the process (usually 
because of problems), at which point the handles automatically disappear. For 
me it's a failure of the OS, as the manager, to allow deleting a file while 
there are still file handles on it. This strict behavior also lets me detect 
failures in my program logic.


was (Author: andie78):
I think it's not stopping Java in general that allows deleting, but stopping the 
Cassandra process that keeps the file handles open. When I first studied the C* 
code, I found it difficult to see how file handles are spread around in 
different classes. It was not easy for me to keep an overview, but maybe that's 
because I have less Java experience than C++, where I had to manage handles 
(files, memory) myself. Later in C++ there were auto pointers (boost/TR1) with a 
finalizer-like approach, where memory was freed automatically (analogous to 
closing files) when the handle went out of scope or the programmer forgot to 
free/close it. I don't know where file handles show up in the Linux process 
explorer. But if I close every file handle after use (even for reads only), does 
it matter what's in the process explorer, as long as C* is able to delete? BTW, 
if only a reading process fails to close its file handle, it already makes 
sense to forbid the delete, as long as the OS knows the file is still in use. 
For me it's a failure of the OS, as the manager, to allow deleting a file while 
there are still file handles on it. This strict behavior also lets me detect 
failures in my program logic.

 Windows 7 data files keept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
  Labels: compaction
 Fix For: 2.0.6

 Attachments: leakdetect.patch, screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deleting problem) 
 doesn't help on Win-7 on Cassandra 2.0.2. Even a 2.1 snapshot build is not 
 running. The cause is: opened file handles seem to be lost and not closed 
 properly. Win 7 complains that another process is still using the file (but 
 it's obviously Cassandra). Only a restart of the server gets the files deleted. 
 But after heavy use (changes) of tables, there are about 24K files in the data 
 folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found out that a finalizer fixes the problem. So after GC the 
 files will be deleted (not optimal, but working fine). It has now run for 2 
 days continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code:title=RandomAccessReader.java|borderStyle=solid}
 @Override
 protected void finalize() throws Throwable {
   deallocate();
   super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6561) Static columns in CQL3

2014-02-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905373#comment-13905373
 ] 

Sylvain Lebresne commented on CASSANDRA-6561:
-

bq. his example query shows a bug with 2i lookup (with a 2i on a non-static 
column) + static columns.

Oh right, we do need to handle static columns there, my bad. Pushed an 
additional commit to the branch 
(https://github.com/pcmanus/cassandra/commits/6561-4) for that.

 Static columns in CQL3
 --

 Key: CASSANDRA-6561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6561
 Project: Cassandra
  Issue Type: New Feature
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6


 I'd like to suggest the following idea for adding static columns to CQL3.  
 I'll note that the basic idea has been suggested by jhalliday on irc but the 
 rest of the details are mine and I should be blamed for anything stupid in 
 what follows.
 Let me start with a rationale: there are two main families of CF that have been 
 historically used in Thrift: static ones and dynamic ones. CQL3 handles both 
 families through the presence or absence of clustering columns. There are however 
 some cases where mixing both behaviors has its use. I like to think of those 
 use cases as 3 broad categories:
 # to denormalize small amounts of not-entirely-static data in otherwise 
 static entities. Think, say, of tags for a product or custom properties in a 
 user profile. This is why we've added CQL3 collections. Importantly, this is 
 the *only* use case for which collections are meant (which doesn't diminish 
 their usefulness imo, and I wouldn't disagree that we've maybe not 
 communicated this too well).
 # to optimize fetching both a static entity and related dynamic ones. Say you 
 have blog posts, and each post has associated comments (chronologically 
 ordered). *And* say that a very common query is "fetch a post and its 50 last 
 comments". In that case, it *might* be beneficial to store a blog post 
 (static entity) in the same underlying CF as its comments for performance 
 reasons, so that "fetch a post and its 50 last comments" is just one slice 
 internally.
 # you want to CAS rows of a dynamic partition based on some partition 
 condition. This is the same use case that CASSANDRA-5633 exists for.
 As said above, 1) is already covered by collections, but 2) and 3) are not (and 
 I strongly believe collections are not the right fit, API wise, for those).
 Also, note that I don't want to underestimate the usefulness of 2). In most 
 cases, using a separate table for the blog posts and the comments is The 
 Right Solution, and trying to do 2) is premature optimisation. Yet, when used 
 properly, that kind of optimisation can make a difference, so I think having 
 a relatively native solution for it in CQL3 could make sense.
 Regarding 3), though CASSANDRA-5633 would provide one solution for it, I have 
 the feeling that static columns actually are a more natural approach (in terms 
 of API). That's arguably more of a personal opinion/feeling though.
 So long story short, CQL3 lacks a way to mix some static and dynamic 
 rows in the same partition of the same CQL3 table, and I think such a tool 
 could have its use.
 The proposal is thus to allow "static" columns. Static columns would only 
 make sense in tables with clustering columns (the "dynamic" ones). A static 
 column value would be static to the partition (all rows of the partition 
 would share the value for such a column). The syntax would just be:
 {noformat}
 CREATE TABLE t (
   k text,
   s text static,
   i int,
   v text,
   PRIMARY KEY (k, i)
 )
 {noformat}
 then you'd get:
 {noformat}
 INSERT INTO t(k, s, i, v) VALUES ("k0", "I'm shared",       0, "foo");
 INSERT INTO t(k, s, i, v) VALUES ("k0", "I'm still shared", 1, "bar");
 SELECT * FROM t;
  k  |        s         | i |  v
 ---------------------------------
  k0 | I'm still shared | 0 | bar
  k0 | I'm still shared | 1 | foo
 {noformat}
 There would be a few semantic details to decide on regarding deletions, ttl, 
 etc. but let's see if we agree it's a good idea first before ironing those 
 out.
 One last point is the implementation. Though I do think this idea has merits, 
 it's definitely not useful enough to justify rewriting the storage engine 
 for it. But I think we can support this relatively easily (emphasis on 
 "relatively" :)), which is probably the main reason why I like the approach.
 Namely, internally, we can store static columns as cells whose clustering 
 column values are empty. So in terms of cells, the partition of my example 
 would look like:
 {noformat}
 k0 : [
   (:s -> "I'm still shared"), // the static column
   (0: -> "")  // row marker
   (0:v -> "bar")
   (1: -> "")  // row marker
   (1:v -> "foo")
 ]
 

[jira] [Commented] (CASSANDRA-5631) NPE when creating column family shortly after multinode startup

2014-02-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905402#comment-13905402
 ] 

Sylvain Lebresne commented on CASSANDRA-5631:
-

Lgtm (nit: I'd rename serializeKeyspace to say addSerializedKeyspace).

bq. Can you not just wait for schema agreement in your client before going on 
to the next create?

For the record, Jeremiah is right that clients are supposed to wait for schema 
agreement if they want to guarantee the table creation won't fail just after 
the keyspace one (or alternatively make sure both creations go through the 
same coordinator node). Of course, we shouldn't NPE internally if a user 
doesn't respect that, and that's just what this ticket is about.
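
For illustration, a minimal client-side sketch of that wait-for-schema-agreement step (the class name and the fetchSchemaVersions() hook are hypothetical; the hook would be backed by whatever the client library exposes, e.g. Thrift's describe_schema_versions, which maps each schema version to the hosts reporting it):

{code}
import java.util.List;
import java.util.Map;

public abstract class SchemaAgreementWaiter
{
    // Hypothetical hook: return a map of schema version -> hosts reporting it.
    protected abstract Map<String, List<String>> fetchSchemaVersions();

    // Poll until all reachable nodes report a single schema version; only then
    // is it safe to send the next schema change (possibly to another coordinator).
    public void await() throws InterruptedException
    {
        while (fetchSchemaVersions().size() > 1)
            Thread.sleep(200);
    }
}
{code}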

 NPE when creating column family shortly after multinode startup
 ---

 Key: CASSANDRA-5631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5631
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Martin Serrano
Assignee: Aleksey Yeschenko
 Fix For: 1.2.16, 2.0.6, 2.1

 Attachments: 5631.txt


 I'm testing a 2-node cluster and creating a column family right after the 
 nodes start up. I am using the Astyanax client. Sometimes column family 
 creation fails and I see NPEs on the Cassandra server:
 {noformat}
 2013-06-12 14:55:31,773 ERROR CassandraDaemon [MigrationStage:1] - Exception 
 in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
   at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:510)
   at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:444)
   at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:354)
   at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}
 {noformat}
 2013-06-12 14:55:31,880 ERROR CassandraDaemon [MigrationStage:1] - Exception 
 in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
   at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:475)
   at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:354)
   at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6732) Cross DC writes not compatible 1.2->1.1 during rolling upgrade

2014-02-19 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-6732:
--

 Summary: Cross DC writes not compatible 1.2->1.1 during rolling 
upgrade
 Key: CASSANDRA-6732
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6732
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremiah Jordan


During a rolling upgrade from 1.1.12 to 1.2.15, one DC at a time, only 1/3 of the 
writes to the first DC to be upgraded actually make it to the other DC, and 
LOTS of hints attempt to get made.

Looks like the header for forwarded writes changed from 1.1->1.2, so the 1.1 
nodes can't read it.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6732) Cross DC writes not compatible 1.2->1.1 during rolling upgrade

2014-02-19 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-6732:
---

Attachment: 0001-Don-t-try-to-do-Cross-DC-forwarding-to-old-nodes.patch

 Cross DC writes not compatible 1.2->1.1 during rolling upgrade
 --

 Key: CASSANDRA-6732
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6732
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremiah Jordan
 Fix For: 1.2.16

 Attachments: 
 0001-Don-t-try-to-do-Cross-DC-forwarding-to-old-nodes.patch


 During a rolling upgrade from 1.1.12 to 1.2.15, one DC at a time, only 1/3 of 
 the writes to the first DC to be upgraded actually make it to the other DC, 
 and LOTS of hints attempt to get made.
 Looks like the header for forwarded writes changed from 1.1->1.2, so the 1.1 
 nodes can't read it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6732) Cross DC writes not compatible 1.2->1.1 during rolling upgrade

2014-02-19 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905405#comment-13905405
 ] 

Jeremiah Jordan commented on CASSANDRA-6732:


A simple fix that avoids cross-DC forwarding to old nodes is attached. If 
someone wants, they can do a more complicated fix that uses the old 1.1 headers 
for 1.1 nodes.

 Cross DC writes not compatible 1.2->1.1 during rolling upgrade
 --

 Key: CASSANDRA-6732
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6732
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremiah Jordan
 Fix For: 1.2.16

 Attachments: 
 0001-Don-t-try-to-do-Cross-DC-forwarding-to-old-nodes.patch


 During a rolling upgrade from 1.1.12 to 1.2.15, one DC at a time, only 1/3 of 
 the writes to the first DC to be upgraded actually make it to the other DC, 
 and LOTS of hints attempt to get made.
 Looks like the header for forwarded writes changed from 1.1->1.2, so the 1.1 
 nodes can't read it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6660) Make node tool command take a password file

2014-02-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clément Lardeur updated CASSANDRA-6660:
---

Attachment: trunk-6660-v3.patch

I removed it in the code but I had already made the patch. My bad.

 Make node tool command take a password file
 ---

 Key: CASSANDRA-6660
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6660
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Clément Lardeur
Priority: Trivial
  Labels: nodetool
 Fix For: 2.1

 Attachments: trunk-6660-v1.patch, trunk-6660-v2.patch, 
 trunk-6660-v3.patch


 We are sending the JMX password in the clear to the nodetool command in 
 production. This is a security risk. Anyone doing a 'ps' can see the clear 
 password. Can we change the nodetool command to also take a password file 
 argument? This file will list the JMX users and passwords. Example below:
 cat /cassandra/run/10003004.jmxpasswd
 monitorRole abc
 controlRole def
 Based on the user name provided, nodetool can pick up the right password. 
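
As a rough sketch of the idea (not the attached patch; the class and method names are made up), nodetool would read a file of "username password" lines like the example above and pick the entry matching the supplied JMX user:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class JmxPasswordFile
{
    // Return the password for the given user from a file containing one
    // "username password" pair per line, e.g. "monitorRole abc".
    public static String passwordFor(String passwordFile, String username) throws IOException
    {
        for (String line : Files.readAllLines(Paths.get(passwordFile)))
        {
            String[] parts = line.trim().split("\\s+", 2);
            if (parts.length == 2 && parts[0].equals(username))
                return parts[1];
        }
        throw new IllegalArgumentException("No password entry for user " + username);
    }
}
{code}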



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6134) More efficient BatchlogManager

2014-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905471#comment-13905471
 ] 

Jonathan Ellis commented on CASSANDRA-6134:
---

Are you planning to pick this back up, [~m0nstermind]?

 More efficient BatchlogManager
 --

 Key: CASSANDRA-6134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6134
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oleg Anastasyev
Assignee: Oleg Anastasyev
Priority: Minor
 Attachments: BatchlogManager.txt


 As we discussed earlier in CASSANDRA-6079, this is the new BatchlogManager.
 It stores batch records in 
 {code}
 CREATE TABLE batchlog (
   id_partition int,
   id timeuuid,
   data blob,
   PRIMARY KEY (id_partition, id)
 ) WITH COMPACT STORAGE AND
   CLUSTERING ORDER BY (id DESC)
 {code}
 where id_partition is the minute-since-epoch of the id uuid.
 So when it scans for batches to replay, it scans within a single partition for 
 a slice of ids from the last processed date until now minus the write timeout.
 So neither a full batchlog CF scan nor a lot of random reads are made in the 
 normal cycle.
 Other improvements:
 1. It runs every 1/2 of the write timeout and replays all batches written within 
 0.9 * write timeout from now. This way we ensure that batched updates will 
 be replayed by the moment the client times out from the coordinator.
 2. It submits all mutations from a single batch in parallel (like StorageProxy 
 does). The old implementation replayed them one-by-one, so a client could see 
 half-applied batches in the CF for a long time (depending on the size of the batch).
 3. It fixes a subtle race bug with the incorrect hint TTL calculation.
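
To make the partitioning scheme concrete, here is a small sketch (class and constant names are illustrative, not the attached patch) of deriving id_partition as minutes-since-epoch from the batch id, a time-based (type 1) uuid:

{code}
import java.util.UUID;
import java.util.concurrent.TimeUnit;

public class BatchlogPartitioner
{
    // Offset between the UUID epoch (1582-10-15) and the Unix epoch, in milliseconds.
    private static final long START_EPOCH_MILLIS = -12219292800000L;

    // id_partition = minutes since the Unix epoch, derived from the timeuuid's
    // timestamp. Replay then only has to slice the few most recent partitions
    // instead of scanning the whole batchlog CF.
    public static int partitionFor(UUID batchId)
    {
        long unixMillis = batchId.timestamp() / 10000 + START_EPOCH_MILLIS; // 100ns units -> ms
        return (int) TimeUnit.MILLISECONDS.toMinutes(unixMillis);
    }
}
{code}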



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6733) Upgrade of 1.2.11 to 2.0.5 make IllegalArgumentException in Buffer.limit on read of a super column family

2014-02-19 Thread JIRA
Nicolas Lalevée created CASSANDRA-6733:
--

 Summary: Upgrade of 1.2.11 to 2.0.5 make IllegalArgumentException 
in Buffer.limit on read of a super column family
 Key: CASSANDRA-6733
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6733
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée


We have a super column family which was first created with a 1.0.x. Then 
upgraded to 1.1.x, then to 1.2.11, and now to 2.0.5.
{noformat}
cqlsh:QaUser> desc table user_view;
CREATE TABLE user_view (
  key bigint,
  column1 varint,
  column2 text,
  value counter,
  PRIMARY KEY (key, column1, column2)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=1.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};
{noformat}

With cqlsh, the following query was timing out:
{noformat}
select * from user_view where key = 3 and column1 = 1 and column2 = '20130218';
{noformat}

In the log of cassandra, we could read:
{noformat}
ERROR [ReadStage:1385] 2014-02-19 14:45:19,549 CassandraDaemon.java (line 192) 
Exception in thread Thread[ReadStage:1385,5,main]
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:55)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:64)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:82)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
at 
org.apache.cassandra.db.marshal.AbstractType$1.compare(AbstractType.java:63)
at 
org.apache.cassandra.db.marshal.AbstractType$1.compare(AbstractType.java:60)
at java.util.Collections.indexedBinarySearch(Collections.java:377)
at java.util.Collections.binarySearch(Collections.java:365)
at 
org.apache.cassandra.io.sstable.IndexHelper.indexFor(IndexHelper.java:144)
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.setNextSlice(IndexedSliceReader.java:262)
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.init(IndexedSliceReader.java:255)
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.init(IndexedSliceReader.java:91)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
{noformat}

I tried launching a repair on our 2 nodes; nothing improved.
I tried launching a major compaction on this column family; the query doesn't 
fail anymore and returns the expected results.

This happens on our cluster which is used for integration and test purposes; there 
is not much activity on it. There are only 2 nodes and the replication factor is 1. 
Since it is our test cluster, I have a quite small (2 x ~500K) snapshot taken 
before the upgrade of the cluster that I could share, if needed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6721) READ-STAGE: IllegalArgumentException when re-reading wide row immediately upon creation

2014-02-19 Thread Bill Mitchell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905481#comment-13905481
 ] 

Bill Mitchell commented on CASSANDRA-6721:
--

Thank you for the suggestion on the saved_caches. That was helpful, as 
there were entries there for testdb, even after the database was dropped. They 
were timestamped at 1419 yesterday with a repeat of the failure seen in the 
attached 2014-02-18-13-45 log. Removing these and restarting the server gave 
it a chance to forget about their data, so that it was not applied to a new 
instance of the keyspace with the same name.

As these two logs also displayed the LEAK finalizer message from the 
leakdetect.patch, and the earlier failures did not, it is still possible that 
they represent a different, earlier failure.  I will need to make runs over 
several days to see if this problem reappears, checking meanwhile to see if 
there are entries in saved_caches before the test begins. 

The first time I tried this on a larger test, it ran into a similar 
cross-keyspace instance contamination that I figured out a few days ago, where 
transactions left in the commit log from an earlier test run are replayed 
after the new keyspace is created.

 READ-STAGE: IllegalArgumentException when re-reading wide row immediately 
 upon creation  
 -

 Key: CASSANDRA-6721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6721
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 x64 dual core, 8GB memory, single Cassandra 
 node, Java 1.7.0_45
Reporter: Bill Mitchell
 Attachments: 2014-02-15.txt, 2014-02-17-21-05.txt, 
 2014-02-17-22-05.txt, 2014-02-18-13-45.txt


 In my test case, I am writing a wide row to one table, ordering the columns 
 in reverse chronological order, newest to oldest, by insertion time.  A 
 simplified version of the schema:
 CREATE TABLE IF NOT EXISTS sr (s BIGINT, p INT, l BIGINT, ec TEXT, createDate 
 TIMESTAMP, k BIGINT, properties TEXT, PRIMARY KEY ((s, p, l), createDate, ec) 
 ) WITH CLUSTERING ORDER BY (createDate DESC) AND compression = 
 {'sstable_compression' : 'LZ4Compressor'} 
 Intermittently, after inserting 1,000,000 or 10,000,000 or more rows, when my 
 test immediately turns around and tries to read this partition in its 
 entirety, the client times out on the read and the Cassandra log looks like 
 the following:
 java.lang.RuntimeException: java.lang.IllegalArgumentException
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.lang.IllegalArgumentException
   at java.nio.Buffer.limit(Unknown Source)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:55)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:64)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:82)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
   at 
 org.apache.cassandra.db.marshal.AbstractType$3.compare(AbstractType.java:77)
   at 
 org.apache.cassandra.db.marshal.AbstractType$3.compare(AbstractType.java:74)
   at 
 org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:152)
   at 
 org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:129)
   at java.util.PriorityQueue.siftUpComparable(Unknown Source)
   at java.util.PriorityQueue.siftUp(Unknown Source)
   at java.util.PriorityQueue.offer(Unknown Source)
   at java.util.PriorityQueue.add(Unknown Source)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:90)
   at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
   at 
 

[jira] [Commented] (CASSANDRA-6704) Create wide row scanners

2014-02-19 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905485#comment-13905485
 ] 

Edward Capriolo commented on CASSANDRA-6704:


{quote}So in the context of my suggestion above, you get back a list of 
iterators. One iterator per partition? I would heartily endorse that because I 
almost suggested adding that additional complexity when I wrote it up in the 
first place.{quote}

I was actually suggesting that two scanners could implement something like 
http://en.wikipedia.org/wiki/Sort-merge_join across two column families. 
However there are other applications as well like you suggest inside a single 
row.



 Create wide row scanners
 

 Key: CASSANDRA-6704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6704
 Project: Cassandra
  Issue Type: New Feature
Reporter: Edward Capriolo
Assignee: Edward Capriolo

 The BigTable white paper demonstrates the use of scanners to iterate over 
 rows and columns. 
 http://static.googleusercontent.com/media/research.google.com/en/us/archive/bigtable-osdi06.pdf
 Because Cassandra does not have a primary sorting on row keys, scanning over 
 ranges of row keys is less useful. 
 However we can use the scanner concept to operate on wide rows. For example 
 many times a user wishes to do some custom processing inside a row and does 
 not wish to carry the data across the network to do this processing. 
 I have already implemented thrift methods to compile dynamic groovy code into 
 Filters as well as some code that uses a Filter to page through and process 
 data on the server side.
 https://github.com/edwardcapriolo/cassandra/compare/apache:trunk...trunk
 The following is a working code snippet.
 {code}
 @Test
 public void test_scanner() throws Exception
 {
   ColumnParent cp = new ColumnParent();
   cp.setColumn_family("Standard1");
   ByteBuffer key = ByteBuffer.wrap("rscannerkey".getBytes());
   for (char a = 'a'; a < 'g'; a++) {
     Column c1 = new Column();
     c1.setName((a + "").getBytes());
     c1.setValue(new byte[0]);
     c1.setTimestamp(System.nanoTime());
     server.insert(key, cp, c1, ConsistencyLevel.ONE);
   }
   
   FilterDesc d = new FilterDesc();
   d.setSpec("GROOVY_CLASS_LOADER");
   d.setName("limit3");
   d.setCode("import org.apache.cassandra.dht.* \n" +
     "import org.apache.cassandra.thrift.* \n" +
     "public class Limit3 implements SFilter { \n" +
     "public FilterReturn filter(ColumnOrSuperColumn col, List<ColumnOrSuperColumn> filtered) {\n" +
     "  filtered.add(col);\n" +
     "  return filtered.size() < 3 ? FilterReturn.FILTER_MORE : FilterReturn.FILTER_DONE;\n" +
     "} \n" +
     "}\n");
   server.create_filter(d);
   
   
   ScannerResult res = server.create_scanner("Standard1", "limit3", key, 
     ByteBuffer.wrap("a".getBytes()));
   Assert.assertEquals(3, res.results.size());
 }
 {code}
 I am going to be working on this code over the next few weeks, but I wanted to 
 get the concept out early so the design can see some criticism.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6716) nodetool scrub constantly fails with RuntimeException (Tried to hard link to file that does not exist)

2014-02-19 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905486#comment-13905486
 ] 

Nikolai Grigoriev commented on CASSANDRA-6716:
--

OK, I am observing *massive* problems with the sstables as of moving from 2.0.4 
to 2.0.5. I am rolling back now and scrubbing (I wish I had Mr. Net ;) ). Just 
scrubbing the OpsCenter keyspaces I see tons of messages like this:

{quote}
WARN [CompactionExecutor:110] 2014-02-19 14:25:13,811 OutputHandler.java (line 
52) 1 out of order rows found while scrubbing 
SSTableReader(path='/hadoop/disk2/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-jb-1901-Data.db');
 Those have been written (in order) to a new sstable 
(SSTableReader(path='/hadoop/disk5/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-jb-15423-Data.db'))
{quote}

I am not exaggerating - tens of thousands. To be fair, I am not 100% sure the 
problem was already there with 2.0.4, but as of 2.0.5 I have noticed frequent 
exceptions about key ordering, which is what caught my attention.

 nodetool scrub constantly fails with RuntimeException (Tried to hard link to 
 file that does not exist)
 --

 Key: CASSANDRA-6716
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6716
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.5 (built from source), Linux, 6 nodes, JDK 
 1.7
Reporter: Nikolai Grigoriev
 Attachments: system.log.gz


 It seems that recently I have started getting a number of exceptions 
 like "File not found" on all Cassandra nodes. Currently I am getting an 
 exception like this every couple of seconds on each node, for different 
 keyspaces and CFs.
 I have tried to restart the nodes, tried to scrub them. No luck so far. It 
 seems that scrub cannot complete on any of these nodes, at some point it 
 fails because of the file that it can't find.
 On one of the nodes, the nodetool scrub command currently fails instantly 
 and consistently with this exception:
 {code}
 # /opt/cassandra/bin/nodetool scrub 
 Exception in thread "main" java.lang.RuntimeException: Tried to hard link to 
 file that does not exist 
 /mnt/disk5/cassandra/data/mykeyspace_jmeter/test_contacts/mykeyspace_jmeter-test_contacts-jb-28049-Data.db
   at 
 org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1215)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1826)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.scrub(ColumnFamilyStore.java:1122)
   at 
 org.apache.cassandra.service.StorageService.scrub(StorageService.java:2159)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   

[1/3] git commit: Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1 patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732

2014-02-19 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 e787b7a4c - c92b20b30
  refs/heads/cassandra-2.0 0520aeb7e - 55d8da44b


Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1
patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c92b20b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c92b20b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c92b20b3

Branch: refs/heads/cassandra-1.2
Commit: c92b20b3073f1c5cca3666225db33ea102ba77b5
Parents: e787b7a
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:32:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:32:40 2014 -0600

--
 CHANGES.txt | 8 ++--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47fc3a3..ffda82c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.2.16
+ * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
+   (CASSANDRA-6732)
  * Fix broken streams when replacing with same IP (CASSANDRA-6622)
  * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
  * Fix partition and range deletes not triggering flush (CASSANDRA-6655)
@@ -6,8 +8,10 @@
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
  * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
- * Use real node messaging versions for schema exchange decisions 
(CASSANDRA-6700)
- * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
+ * Use real node messaging versions for schema exchange decisions 
+   (CASSANDRA-6700)
+ * IN on the last clustering columns + ORDER BY DESC yield no results 
+   (CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index dbe029b..ca82a1f 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -614,7 +614,7 @@ public class StorageProxy implements StorageProxyMBean
 InetAddress target = iter.next();
 
 // direct writes to local DC or old Cassandra versions
-if (localDC || MessagingService.instance().getVersion(target) < 
MessagingService.VERSION_11)
+if (localDC || MessagingService.instance().getVersion(target) < 
MessagingService.VERSION_12)
 {
 // yes, the loop and non-loop code here are the same; this is 
clunky but we want to avoid
 // creating a second iterator since we already have a perfectly 
good one



[3/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55d8da44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55d8da44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55d8da44

Branch: refs/heads/cassandra-2.0
Commit: 55d8da44bc270c1dcaef013dec287e9cb9b6865f
Parents: 0520aeb c92b20b
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:33:25 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:33:25 2014 -0600

--

--




[2/3] git commit: Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1 patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732

2014-02-19 Thread jbellis
Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1
patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c92b20b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c92b20b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c92b20b3

Branch: refs/heads/cassandra-2.0
Commit: c92b20b3073f1c5cca3666225db33ea102ba77b5
Parents: e787b7a
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:32:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:32:40 2014 -0600

--
 CHANGES.txt | 8 ++--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47fc3a3..ffda82c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.2.16
+ * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
+   (CASSANDRA-6732)
  * Fix broken streams when replacing with same IP (CASSANDRA-6622)
  * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
  * Fix partition and range deletes not triggering flush (CASSANDRA-6655)
@@ -6,8 +8,10 @@
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
  * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
- * Use real node messaging versions for schema exchange decisions 
(CASSANDRA-6700)
- * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
+ * Use real node messaging versions for schema exchange decisions 
+   (CASSANDRA-6700)
+ * IN on the last clustering columns + ORDER BY DESC yield no results 
+   (CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index dbe029b..ca82a1f 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -614,7 +614,7 @@ public class StorageProxy implements StorageProxyMBean
 InetAddress target = iter.next();
 
 // direct writes to local DC or old Cassandra versions
-if (localDC || MessagingService.instance().getVersion(target) < 
MessagingService.VERSION_11)
+if (localDC || MessagingService.instance().getVersion(target) < 
MessagingService.VERSION_12)
 {
 // yes, the loop and non-loop code here are the same; this is 
clunky but we want to avoid
 // creating a second iterator since we already have a perfectly 
good one



[jira] [Commented] (CASSANDRA-6704) Create wide row scanners

2014-02-19 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905500#comment-13905500
 ] 

Edward Capriolo commented on CASSANDRA-6704:


Recent changes:

{code}
struct ScannerCreateDesc {
1:required string cfname,
2:required string filter_name,
3:required binary key,
4:optional binary start_column,
5:optional binary end_column,
6:required i32 slice_size,
7:optional ConsistencyLevel consistency_level=ConsistencyLevel.ONE,
8:optional map<string,binary> params
}
{code}

{code}
public interface ScanFilter {
  public FilterReturn filter(ColumnOrSuperColumn col, ScannerState state);
}
{code}

Adding the params allows us to create more generic scanners. Before, the scanner 
Limit3 was hard-coded to return at most 3 columns; now we can do things like 
LIMIT X:
{code}
  @Override
  public FilterReturn filter(ColumnOrSuperColumn col, ScannerState state) {
    state.getFiltered().add(col);
    return state.getFiltered().size() < ByteBufferUtil.toInt(state.getParams().get("limit"))
        ? FilterReturn.FILTER_MORE : FilterReturn.FILTER_DONE;
  }
{code}
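
For reference, this is roughly how a caller would build that parameter map (the 
"limit" key and its int encoding mirror the filter above; the wrapper class is 
a hypothetical convenience, not part of the patch):

{code}
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

import org.apache.cassandra.utils.ByteBufferUtil;

public class LimitParams
{
    // Builds the params map the generic limit filter reads via state.getParams().get("limit").
    public static Map<String, ByteBuffer> limit(int n)
    {
        Map<String, ByteBuffer> params = new HashMap<String, ByteBuffer>();
        params.put("limit", ByteBufferUtil.bytes(n)); // 4-byte int, decoded by ByteBufferUtil.toInt
        return params;
    }
}
{code}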

Also, thinking more broadly, scanners could work from CQL. Hive has a feature 
called UDTF (https://issues.apache.org/jira/browse/HIVE-1614) that takes in 
zero to many columns and produces zero to many rows with one to many columns; 
this roughly equates to the Scanner interface I am working on. Session-level 
tracking will need to record the position in the row so that a second, 
disconnected query can pick up where the first left off. I will draft this up 
later.

 Create wide row scanners
 

 Key: CASSANDRA-6704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6704
 Project: Cassandra
  Issue Type: New Feature
Reporter: Edward Capriolo
Assignee: Edward Capriolo

 The BigTable white paper demonstrates the use of scanners to iterate over 
 rows and columns. 
 http://static.googleusercontent.com/media/research.google.com/en/us/archive/bigtable-osdi06.pdf
 Because Cassandra does not have a primary sorting on row keys, scanning over 
 ranges of row keys is less useful. 
 However we can use the scanner concept to operate on wide rows. For example 
 many times a user wishes to do some custom processing inside a row and does 
 not wish to carry the data across the network to do this processing. 
 I have already implemented thrift methods to compile dynamic groovy code into 
 Filters as well as some code that uses a Filter to page through and process 
 data on the server side.
 https://github.com/edwardcapriolo/cassandra/compare/apache:trunk...trunk
 The following is a working code snippet.
 {code}
 @Test
 public void test_scanner() throws Exception
 {
   ColumnParent cp = new ColumnParent();
   cp.setColumn_family("Standard1");
   ByteBuffer key = ByteBuffer.wrap("rscannerkey".getBytes());
   for (char a = 'a'; a < 'g'; a++) {
     Column c1 = new Column();
     c1.setName((a + "").getBytes());
     c1.setValue(new byte[0]);
     c1.setTimestamp(System.nanoTime());
     server.insert(key, cp, c1, ConsistencyLevel.ONE);
   }
   
   FilterDesc d = new FilterDesc();
   d.setSpec("GROOVY_CLASS_LOADER");
   d.setName("limit3");
   d.setCode("import org.apache.cassandra.dht.* \n" +
     "import org.apache.cassandra.thrift.* \n" +
     "public class Limit3 implements SFilter { \n" +
     "public FilterReturn filter(ColumnOrSuperColumn col, List<ColumnOrSuperColumn> filtered) {\n" +
     "  filtered.add(col);\n" +
     "  return filtered.size() < 3 ? FilterReturn.FILTER_MORE : FilterReturn.FILTER_DONE;\n" +
     "} \n" +
     "}\n");
   server.create_filter(d);
   
   
   ScannerResult res = server.create_scanner("Standard1", "limit3", key, 
     ByteBuffer.wrap("a".getBytes()));
   Assert.assertEquals(3, res.results.size());
 }
 {code}
 I am going to be working on this code over the next few weeks, but I wanted to 
 get the concept out early so the design can see some criticism.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6716) nodetool scrub constantly fails with RuntimeException (Tried to hard link to file that does not exist)

2014-02-19 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905506#comment-13905506
 ] 

Nikolai Grigoriev commented on CASSANDRA-6716:
--

And more scary one:

{quote}
 WARN [CompactionExecutor:84] 2014-02-19 14:35:25,418 OutputHandler.java (line 
52) Unable to recover 8 rows that were skipped.  You can attempt manual recovery 
from the pre-scrub snapshot.  You can also run nodetool repair to transfer the 
data from a healthy replica, if any
{quote}

 nodetool scrub constantly fails with RuntimeException (Tried to hard link to 
 file that does not exist)
 --

 Key: CASSANDRA-6716
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6716
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.5 (built from source), Linux, 6 nodes, JDK 
 1.7
Reporter: Nikolai Grigoriev
 Attachments: system.log.gz


 It seems that recently I have started getting a number of exceptions 
 like "File not found" on all Cassandra nodes. Currently I am getting an 
 exception like this every couple of seconds on each node, for different 
 keyspaces and CFs.
 I have tried to restart the nodes, tried to scrub them. No luck so far. It 
 seems that scrub cannot complete on any of these nodes, at some point it 
 fails because of the file that it can't find.
 On one of the nodes, the nodetool scrub command currently fails instantly 
 and consistently with this exception:
 {code}
 # /opt/cassandra/bin/nodetool scrub 
 Exception in thread "main" java.lang.RuntimeException: Tried to hard link to 
 file that does not exist 
 /mnt/disk5/cassandra/data/mykeyspace_jmeter/test_contacts/mykeyspace_jmeter-test_contacts-jb-28049-Data.db
   at 
 org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1215)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1826)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.scrub(ColumnFamilyStore.java:1122)
   at 
 org.apache.cassandra.service.StorageService.scrub(StorageService.java:2159)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 

[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2014-02-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905520#comment-13905520
 ] 

Joshua McKenzie commented on CASSANDRA-6283:


There are pros and cons to either side of the debate - there are some benefits to 
being able to modify files while handles to them are open, such as being able to 
run updates, file recovery, etc.  Apparently this is a throwback in OS design to 
MS-DOS 3.3 as far as file-level locking is concerned.

Coming from a C++ background myself, I know what you mean w/ smart pointers and 
RAII-type resource management, but I believe the problem we're running into 
here is language-independent.  We have atomic reference-counting 
implementations in the SSTableReaders to allow multiple concurrent read-only 
accesses to the data structures, which would be a complicated implementation 
regardless of language choice.
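
To make the shape of that concrete, here is a stripped-down, illustrative version 
of such reference counting (the names and the deferred-delete hook are mine, not 
the actual SSTableReader code):

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: defer file deletion until the last reference is released.
class RefCountedReader
{
    private final AtomicInteger references = new AtomicInteger(1); // the tracker's own reference
    private volatile boolean markedCompacted = false;

    boolean acquire()
    {
        while (true)
        {
            int n = references.get();
            if (n <= 0)
                return false;                  // already released; caller must retry with a fresh view
            if (references.compareAndSet(n, n + 1))
                return true;
        }
    }

    void release()
    {
        if (references.decrementAndGet() == 0 && markedCompacted)
            deleteFiles();                     // on Windows, a handle still open here blocks the delete
    }

    void markCompacted()
    {
        markedCompacted = true;
        release();                             // drop the tracker's reference
    }

    private void deleteFiles() { /* close remaining handles, then remove the on-disk components */ }
}
{code}

The acquire/compare-and-set loop is the part that stays essentially the same 
whether it's written in Java or C++; the language doesn't remove the need to 
know exactly when the last reader is gone.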

I mentioned process explorer not seeing the file handle lock because I found it 
an oddity - my expectation is that anything from Sysinternals and Russinovich 
is bullet-proof, so I'm wondering how the OS got into a state where a file is 
locked yet I can't query the process that has said lock, even though stopping 
the JVM clearly released it.

 Windows 7 data files keept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
  Labels: compaction
 Fix For: 2.0.6

 Attachments: leakdetect.patch, screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deleting problem) 
 doesn't help on Win 7 with Cassandra 2.0.2. Even the 2.1 snapshot is not running. 
 The cause is: opened file handles seem to be lost and not closed properly. Win 7 
 complains that another process is still using the file (but it's obviously 
 Cassandra). Only a restart of the server lets the files be deleted. But after 
 heavy use (changes) of tables, there are about 24K files in the data folder 
 (instead of 35 after every restart) and Cassandra crashes. I experimented and 
 found out that a finalizer fixes the problem. So after GC the files will be 
 deleted (not optimal, but working fine). It has now run for 2 days continuously 
 without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code:title=RandomAccessReader.java|borderStyle=solid}
 @Override
 protected void finalize() throws Throwable {
   deallocate();
   super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6734) Use correct partitioner in AbstractViewSSTableFinder

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6734:
--

Attachment: 6734.txt

Thanks to Berenguer Blasi for spotting this.

 Use correct partitioner in AbstractViewSSTableFinder
 

 Key: CASSANDRA-6734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6734
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.6

 Attachments: 6734.txt


 I don't think this breaks anything yet since we don't do range queries 
 against index tables, but fixing it is a prereq for doing so (CASSANDRA-4476).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6734) Use correct partitioner in AbstractViewSSTableFinder

2014-02-19 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-6734:
-

 Summary: Use correct partitioner in AbstractViewSSTableFinder
 Key: CASSANDRA-6734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6734
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.6
 Attachments: 6734.txt

I don't think this breaks anything yet since we don't do range queries against 
index tables, but fixing it is a prereq for doing so (CASSANDRA-4476).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6723) support for multiget_count in cql3

2014-02-19 Thread Michael Nelson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905549#comment-13905549
 ] 

Michael Nelson commented on CASSANDRA-6723:
---

Yes, as per the use case, I need counts for each feed, not the sum of all. I have 
parallelized these calls in my application, but I would prefer to have the 
parallel processing done on the server side, as it seems more intuitive, 
efficient, and convenient for the end user.  I am not sure what you mean by being 
able to do these calls natively, unless you mean that most clients support 
asynchronous queries (mine does not).

 support for multiget_count in cql3
 --

 Key: CASSANDRA-6723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6723
 Project: Cassandra
  Issue Type: New Feature
Reporter: Michael Nelson
Priority: Minor

 Use Case:
 I would like to get the number of articles published in a timeframe for 
 multiple RSS feeds that we are ingesting into Cassandra wide rows.  Currently 
 individual count queries for each feed must be made.  
 I believe this could be accomplished using multiget_count in the thrift 
 interface but there seems to be no viable alternative in the CQL interface.  
 Thanks:
 Cassandra is amazing, thanks for all the hard work.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6722) cross-partition ordering should have warning or be disallowed when paging

2014-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6722:


Attachment: 6722.txt

Attaching a patch that does 2 things:
# removes the ordering of results for compact tables when an IN is on the last 
clustering column (see discussion above), so the only case where we do 
post-query reordering is with IN on the partition key + ORDER BY.
# throws an IRE if the query needs post-query reordering and paging is on.


 cross-partition ordering should have warning or be disallowed when paging
 -

 Key: CASSANDRA-6722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6722
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.6

 Attachments: 6722.txt


 consider this schema/data/query:
 {noformat}
 CREATE TABLE paging_test (
 id int,
 value text,
 PRIMARY KEY (id, value)
 ) WITH CLUSTERING ORDER BY (value ASC)
 |id|value|
 |1 |a|
 |2 |b|
 |1 |c|
 |2 |d| 
 |1 |e| 
 |2 |f| 
 |1 |g| 
 |2 |h|
 |1 |i|
 |2 |j|
 select * from paging_test where id in (1,2) order by value asc;
 {noformat}
 When paging the above query I get the sorted results from id=1 first, then 
 the sorted results from id=2 after that. I was testing this because I was 
 curious if the paging system could somehow globally sort the results but it 
 makes sense that we can't do that, since that would require all results to be 
 collated up front.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6722) cross-partition ordering should have warning or be disallowed when paging

2014-02-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905561#comment-13905561
 ] 

Sylvain Lebresne commented on CASSANDRA-6722:
-

Good catch. We indeed can't properly do post-query reordering in general if we 
page. Note that currently, there are 2 cases where we do post-query reordering:
# with an IN on the partition key and an ORDER BY (the example above).
# if there is an IN on the last clustering column of a compact table.

The 2nd case is actually a bit weird. The reason we order post-query is so the 
result set follows the order of the IN in the query, i.e. if you have the table 
above but COMPACT and do
{noformat}
SELECT value FROM paging_test WHERE id=1 AND value IN ('b', 'a', 'c')
{noformat}
you will get [ 'b', 'a', 'c' ] in that order as a result set. So far, why not, 
but there are 2 problems in practice:
* we only do that for compact tables. For non-compact ones, we just return 
results in clustering order (we don't do the post-query ordering). That 
inconsistency is a historical accident.
* this actually takes precedence over ORDER BY, which is completely broken. I.e. 
even if you add 'ORDER BY value' to the query above, the results will not be 
properly ordered.
* this is even more broken than that: CASSANDRA-6701.
This case is a mess. So because it's a problem in the context of this ticket, 
and because there is no reason to guarantee any special ordering of the result 
set when there is no ORDER BY, I suggest we just remove the behavior for compact 
storage (it was an implementation detail so far) to bring it in line with the 
non-compact case, thus avoiding having to deal with it here.

Back to the more interesting case of the example in the description: an IN on 
the partition key with ORDER BY. In that case, we could almost support paging 
properly: the reason it's broken is that the pager queries the partitions of the 
IN one after the other.  But the pager could, in theory, page over each partition 
simultaneously, querying them all little by little and doing a merge sort 
(roughly as sketched below).
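
A rough sketch of what that merge would look like, assuming one already-ordered 
iterator of rows per partition of the IN (the Row type parameter and comparator 
are placeholders; the real pager would also have to bound how much each iterator 
pulls at a time, which is exactly the difficulty described next):

{code}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative k-way merge over per-partition result iterators, each sorted by the ORDER BY column.
class MergingPager<Row>
{
    private final PriorityQueue<Source<Row>> heap;

    MergingPager(List<Iterator<Row>> perPartition, final Comparator<Row> orderBy)
    {
        heap = new PriorityQueue<Source<Row>>(Math.max(1, perPartition.size()), new Comparator<Source<Row>>()
        {
            public int compare(Source<Row> a, Source<Row> b)
            {
                return orderBy.compare(a.head, b.head);
            }
        });
        for (Iterator<Row> it : perPartition)
            if (it.hasNext())
                heap.add(new Source<Row>(it));
    }

    boolean hasNext() { return !heap.isEmpty(); }

    Row next()
    {
        Source<Row> smallest = heap.poll();
        Row result = smallest.head;
        if (smallest.advance())
            heap.add(smallest);                // re-insert the source with its new head row
        return result;
    }

    private static final class Source<Row>
    {
        final Iterator<Row> delegate;
        Row head;
        Source(Iterator<Row> delegate) { this.delegate = delegate; this.head = delegate.next(); }
        boolean advance() { if (!delegate.hasNext()) return false; head = delegate.next(); return true; }
    }
}
{code}

The merge itself is the easy part; bounding what each per-partition iterator is 
allowed to fetch at once is where it gets hard.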

It is however not all that easy in practice. If you query a full page of each 
partition and there are many partitions in the IN, you'll load tons of data in 
memory, defeating in large part the goal of paging. If you instead query less 
than the page size for each partition, you may need to re-query some of the 
partitions depending on what the merge sort yields on those first pages. Not 
only would that require a serious refactor of the current code to handle 
properly, but it's rather unclear how efficient this would really be in general. 
I'm not sure it's really worth it in the end. 

But in any case, such a solution is way out of scope for 2.0 (and probably even 
for 2.1 at this point). So we need a quick solution for now, and that probably 
means we should throw an IRE if the query requires post-query reordering and 
paging is on.

Taking a step back, I wonder if allowing post-query reordering by default was a 
good idea, and whether, the same way we have ALLOW FILTERING, we shouldn't have 
required an ALLOW IN-MEMORY REORDERING for such queries. Of course, it's harder 
to change that now, but maybe we could still do it by adding such a flag, 
deprecating queries that don't use it by logging a warning (like we've done in 
CASSANDRA-6649) for 1 or 2 versions, and forbidding it completely afterwards?
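
Purely to illustrate the idea, the hypothetical flag would read something like 
this on the example query from the description (this syntax does not exist 
today):
{noformat}
SELECT * FROM paging_test WHERE id IN (1, 2) ORDER BY value ASC ALLOW IN-MEMORY REORDERING;
{noformat}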


 cross-partition ordering should have warning or be disallowed when paging
 -

 Key: CASSANDRA-6722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6722
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.6

 Attachments: 6722.txt


 consider this schema/data/query:
 {noformat}
 CREATE TABLE paging_test (
 id int,
 value text,
 PRIMARY KEY (id, value)
 ) WITH CLUSTERING ORDER BY (value ASC)
 |id|value|
 |1 |a|
 |2 |b|
 |1 |c|
 |2 |d| 
 |1 |e| 
 |2 |f| 
 |1 |g| 
 |2 |h|
 |1 |i|
 |2 |j|
 select * from paging_test where id in (1,2) order by value asc;
 {noformat}
 When paging the above query I get the sorted results from id=1 first, then 
 the sorted results from id=2 after that. I was testing this because I was 
 curious if the paging system could somehow globally sort the results but it 
 makes sense that we can't do that, since that would require all results to be 
 collated up front.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6731:


Attachment: 6731.traces.txt

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905566#comment-13905566
 ] 

Jonathan Ellis commented on CASSANDRA-6731:
---

So that's a yes.  How hard are you hitting the cluster here?

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905570#comment-13905570
 ] 

Ryan McGuire commented on CASSANDRA-6731:
-

I've attached a script (ticket_6731.py) that runs a few tests:

* local_read_speed_test_without_shutdown
* local_read_speed_test_with_shutdown
* test_local_read_trace

The first two perform a small speed run comparing the case of one node vs. three 
nodes. The single-node test was a few seconds faster (average 37.8s for 1 node 
vs. 42.5s for 3 nodes).

test_local_read_trace does things more methodically, and runs a trace on each 
query to determine what percentage of queries actually contact nodes other than 
the coordinator. Of a sample set of 1000 queries, 10.7% of them contacted other 
nodes. You can see those traces in 6731.traces.txt.
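
For anyone who wants to repeat the check by hand, the same information is 
available from the trace tables; the session id below is a placeholder for one 
of the trace sessions:

{noformat}
SELECT source, activity FROM system_traces.events WHERE session_id = <session-id>;
{noformat}

If every source matches the coordinator's address for a given session, that read 
never left the coordinator.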

So, I do concur that a non-negligible fraction of the queries contact other 
nodes, but it doesn't appear to have much of a performance impact. This test 
was done on a local ccm cluster, so it may warrant testing on a physical 
cluster with different data loads.

[~arisp] can you post your schema and your approximate data size?

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905574#comment-13905574
 ] 

Jonathan Ellis commented on CASSANDRA-6731:
---

bq. Of a sample set of 1000 queries, 10.7% of them contacted other nodes

That's suspiciously close to default 10% read repair chance.  Can you try after 
disabling that?
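
(Both knobs can be zeroed per table; the keyspace and table names below are 
placeholders:)

{noformat}
ALTER TABLE ks.tbl WITH read_repair_chance = 0 AND dclocal_read_repair_chance = 0;
{noformat}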

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905579#comment-13905579
 ] 

Ryan McGuire commented on CASSANDRA-6731:
-

bq. So that's a yes. How hard are you hitting the cluster here?

Single thread, only contacting one coordinator.

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6734) Use correct partitioner in AbstractViewSSTableFinder

2014-02-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905584#comment-13905584
 ] 

Yuki Morishita commented on CASSANDRA-6734:
---

+1

 Use correct partitioner in AbstractViewSSTableFinder
 

 Key: CASSANDRA-6734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6734
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.6

 Attachments: 6734.txt


 I don't think this breaks anything yet since we don't do range queries 
 against index tables, but fixing it is a prereq for doing so (CASSANDRA-4476).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[01/12] git commit: Support negative timestamps in DateType.fromString

2014-02-19 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 55d8da44b - 84103bbe2
  refs/heads/cassandra-2.1 5afd2bd48 - 8e101bef0
  refs/heads/trunk 89b8b1a75 - 97a529f06


Support negative timestamps in DateType.fromString

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6718


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e787b7a4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e787b7a4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e787b7a4

Branch: refs/heads/cassandra-2.1
Commit: e787b7a4c7f69cf486c7d5b6c53bfb88086b5261
Parents: 6dfca3d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:41:21 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:41:21 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f146166..47fc3a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+ * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e787b7a4/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 875169d..ad165ee 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -92,7 +92,7 @@ public class DateType extends AbstractTypeDate
   millis = System.currentTimeMillis();
   }
   // Milliseconds since epoch?
-  else if (source.matches("^\\d+$"))
+  else if (source.matches("^-?\\d+$"))
   {
   try
   {



[05/12] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55d8da44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55d8da44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55d8da44

Branch: refs/heads/cassandra-2.1
Commit: 55d8da44bc270c1dcaef013dec287e9cb9b6865f
Parents: 0520aeb c92b20b
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:33:25 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:33:25 2014 -0600

--

--




[11/12] git commit: merge from 2.0

2014-02-19 Thread jbellis
merge from 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e101bef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e101bef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e101bef

Branch: refs/heads/cassandra-2.1
Commit: 8e101bef056eb00173f51ec7fb6e3b6b251d105d
Parents: 5afd2bd 84103bb
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 10:01:10 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 10:01:10 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 30 +++-
 .../org/apache/cassandra/db/DataTracker.java| 21 --
 .../apache/cassandra/db/marshal/DateType.java   |  2 +-
 4 files changed, 25 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e101bef/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e101bef/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ca4ff0a,f25f934..76160ea
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -1708,43 -1448,7 +1708,33 @@@ public class ColumnFamilyStore implemen
  return markCurrentViewReferenced().sstables;
  }
  
 +public Set<SSTableReader> getUnrepairedSSTables()
 +{
 +    Set<SSTableReader> unRepairedSSTables = new HashSet<>(getSSTables());
 +    Iterator<SSTableReader> sstableIterator = unRepairedSSTables.iterator();
 +    while (sstableIterator.hasNext())
 +    {
 +        SSTableReader sstable = sstableIterator.next();
 +        if (sstable.isRepaired())
 +            sstableIterator.remove();
 +    }
 +    return unRepairedSSTables;
 +}
 +
 +public Set<SSTableReader> getRepairedSSTables()
 +{
 +    Set<SSTableReader> repairedSSTables = new HashSet<>(getSSTables());
 +    Iterator<SSTableReader> sstableIterator = repairedSSTables.iterator();
 +    while (sstableIterator.hasNext())
 +    {
 +        SSTableReader sstable = sstableIterator.next();
 +        if (!sstable.isRepaired())
 +            sstableIterator.remove();
 +    }
 +    return repairedSSTables;
 +}
 +
- abstract class AbstractViewSSTableFinder
- {
-     abstract List<SSTableReader> findSSTables(DataTracker.View view);
-     protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
-     {
-         RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
-         return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
-     }
- }
- 
- private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
+ private ViewFragment markReferenced(Function<DataTracker.View, List<SSTableReader>> filter)
  {
      List<SSTableReader> sstables;
      DataTracker.View view;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e101bef/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --cc src/java/org/apache/cassandra/db/DataTracker.java
index e51f380,c1ae00f..30bd360
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@@ -32,12 -29,14 +29,14 @@@ import org.slf4j.LoggerFactory
  
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.db.compaction.OperationType;
+ import org.apache.cassandra.dht.AbstractBounds;
 -import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.sstable.SSTableReader;
  import org.apache.cassandra.io.util.FileUtils;
  import org.apache.cassandra.metrics.StorageMetrics;
  import org.apache.cassandra.notifications.*;
  import org.apache.cassandra.utils.Interval;
  import org.apache.cassandra.utils.IntervalTree;
++import org.apache.cassandra.utils.concurrent.OpOrder;
  
  public class DataTracker
  {
@@@ -317,47 -322,11 +316,47 @@@
  /** (Re)initializes the tracker, purging all references. */
  void init()
  {
 -view.set(new View(new Memtable(cfstore),
 -  Collections.<Memtable>emptySet(),
 -  Collections.<SSTableReader>emptySet(),
 -  Collections.<SSTableReader>emptySet(),
 -  SSTableIntervalTree.empty()));
 +view.set(new View(
- ImmutableList.of(new 

[12/12] git commit: Merge branch 'cassandra-2.1' into trunk

2014-02-19 Thread jbellis
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97a529f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97a529f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97a529f0

Branch: refs/heads/trunk
Commit: 97a529f0662f92b6539dbff426ca252ad1713954
Parents: 89b8b1a 8e101be
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 10:01:19 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 10:01:19 2014 -0600

--
 .../apache/cassandra/db/ColumnFamilyStore.java  | 30 +++-
 .../org/apache/cassandra/db/DataTracker.java| 21 --
 2 files changed, 23 insertions(+), 28 deletions(-)
--




[04/12] git commit: Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1 patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732

2014-02-19 Thread jbellis
Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1
patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c92b20b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c92b20b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c92b20b3

Branch: refs/heads/trunk
Commit: c92b20b3073f1c5cca3666225db33ea102ba77b5
Parents: e787b7a
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:32:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:32:40 2014 -0600

--
 CHANGES.txt | 8 ++--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47fc3a3..ffda82c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.2.16
+ * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
+   (CASSANDRA-6732)
  * Fix broken streams when replacing with same IP (CASSANDRA-6622)
  * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
  * Fix partition and range deletes not triggering flush (CASSANDRA-6655)
@@ -6,8 +8,10 @@
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
  * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
- * Use real node messaging versions for schema exchange decisions 
(CASSANDRA-6700)
- * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
+ * Use real node messaging versions for schema exchange decisions 
+   (CASSANDRA-6700)
+ * IN on the last clustering columns + ORDER BY DESC yield no results 
+   (CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index dbe029b..ca82a1f 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -614,7 +614,7 @@ public class StorageProxy implements StorageProxyMBean
 InetAddress target = iter.next();
 
 // direct writes to local DC or old Cassandra versions
-if (localDC || MessagingService.instance().getVersion(target) < 
MessagingService.VERSION_11)
+if (localDC || MessagingService.instance().getVersion(target) < 
MessagingService.VERSION_12)
 {
 // yes, the loop and non-loop code here are the same; this is 
clunky but we want to avoid
 // creating a second iterator since we already have a perfectly 
good one



[09/12] git commit: Use correct partitioner in AbstractViewSSTableFinder patch by jbellis; reviewed by yukim for CASSANDRA-6734

2014-02-19 Thread jbellis
Use correct partitioner in AbstractViewSSTableFinder
patch by jbellis; reviewed by yukim for CASSANDRA-6734


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84103bbe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84103bbe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84103bbe

Branch: refs/heads/trunk
Commit: 84103bbe2894706d224dc5975ca6bdaaa6f7f6c4
Parents: 55d8da4
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 09:58:28 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 09:58:28 2014 -0600

--
 .../apache/cassandra/db/ColumnFamilyStore.java  | 30 +++-
 .../org/apache/cassandra/db/DataTracker.java|  7 +
 2 files changed, 17 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84103bbe/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 38d87db..f25f934 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1448,17 +1448,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 return markCurrentViewReferenced().sstables;
 }
 
-abstract class AbstractViewSSTableFinder
-{
-    abstract List<SSTableReader> findSSTables(DataTracker.View view);
-    protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
-    {
-        RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
-        return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
-    }
-}
-
-private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
+private ViewFragment markReferenced(Function<DataTracker.View, List<SSTableReader>> filter)
 {
     List<SSTableReader> sstables;
     DataTracker.View view;
@@ -1473,7 +1463,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 break;
 }
 
-sstables = finder.findSSTables(view);
+sstables = filter.apply(view);
 if (SSTableReader.acquireReferences(sstables))
 break;
 // retry w/ new view
@@ -1489,9 +1479,9 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 public ViewFragment markReferenced(final DecoratedKey key)
 {
 assert !key.isMinimum();
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
 return compactionStrategy.filterSSTablesForReads(view.intervalTree.search(key));
 }
@@ -1504,11 +1494,11 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  */
 public ViewFragment markReferenced(final AbstractBounds<RowPosition> rowBounds)
 {
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
-return compactionStrategy.filterSSTablesForReads(sstablesForRowBounds(rowBounds, view));
+return compactionStrategy.filterSSTablesForReads(view.sstablesInBounds(rowBounds));
 }
 });
 }
@@ -1519,13 +1509,13 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  */
 public ViewFragment markReferenced(final Collection<AbstractBounds<RowPosition>> rowBoundsCollection)
 {
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
 Set<SSTableReader> sstables = Sets.newHashSet();
 for (AbstractBounds<RowPosition> rowBounds : rowBoundsCollection)
-sstables.addAll(sstablesForRowBounds(rowBounds, view));
+sstables.addAll(view.sstablesInBounds(rowBounds));
 
 return ImmutableList.copyOf(sstables);
 }


[06/12] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55d8da44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55d8da44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55d8da44

Branch: refs/heads/trunk
Commit: 55d8da44bc270c1dcaef013dec287e9cb9b6865f
Parents: 0520aeb c92b20b
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:33:25 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:33:25 2014 -0600

--

--




[08/12] git commit: Use correct partitioner in AbstractViewSSTableFinder patch by jbellis; reviewed by yukim for CASSANDRA-6734

2014-02-19 Thread jbellis
Use correct partitioner in AbstractViewSSTableFinder
patch by jbellis; reviewed by yukim for CASSANDRA-6734


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84103bbe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84103bbe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84103bbe

Branch: refs/heads/cassandra-2.0
Commit: 84103bbe2894706d224dc5975ca6bdaaa6f7f6c4
Parents: 55d8da4
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 09:58:28 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 09:58:28 2014 -0600

--
 .../apache/cassandra/db/ColumnFamilyStore.java  | 30 +++-
 .../org/apache/cassandra/db/DataTracker.java|  7 +
 2 files changed, 17 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84103bbe/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 38d87db..f25f934 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1448,17 +1448,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 return markCurrentViewReferenced().sstables;
 }
 
-abstract class AbstractViewSSTableFinder
-{
-abstract List<SSTableReader> findSSTables(DataTracker.View view);
-protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
-{
-RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
-return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
-}
-}
-
-private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
+private ViewFragment markReferenced(Function<DataTracker.View, List<SSTableReader>> filter)
 {
 List<SSTableReader> sstables;
 DataTracker.View view;
@@ -1473,7 +1463,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 break;
 }
 
-sstables = finder.findSSTables(view);
+sstables = filter.apply(view);
 if (SSTableReader.acquireReferences(sstables))
 break;
 // retry w/ new view
@@ -1489,9 +1479,9 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 public ViewFragment markReferenced(final DecoratedKey key)
 {
 assert !key.isMinimum();
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
 return compactionStrategy.filterSSTablesForReads(view.intervalTree.search(key));
 }
@@ -1504,11 +1494,11 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  */
 public ViewFragment markReferenced(final AbstractBounds<RowPosition> rowBounds)
 {
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
-return compactionStrategy.filterSSTablesForReads(sstablesForRowBounds(rowBounds, view));
+return compactionStrategy.filterSSTablesForReads(view.sstablesInBounds(rowBounds));
 }
 });
 }
@@ -1519,13 +1509,13 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  */
 public ViewFragment markReferenced(final Collection<AbstractBounds<RowPosition>> rowBoundsCollection)
 {
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
 Set<SSTableReader> sstables = Sets.newHashSet();
 for (AbstractBounds<RowPosition> rowBounds : rowBoundsCollection)
-sstables.addAll(sstablesForRowBounds(rowBounds, view));
+sstables.addAll(view.sstablesInBounds(rowBounds));
 
 return ImmutableList.copyOf(sstables);
 }


[03/12] git commit: Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1 patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732

2014-02-19 Thread jbellis
Don't attempt cross-dc forwarding interface/mixed-version cluster with 1.1
patch by Jeremiah Jordan; reviewed by jbellis for CASSANDRA-6732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c92b20b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c92b20b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c92b20b3

Branch: refs/heads/cassandra-2.1
Commit: c92b20b3073f1c5cca3666225db33ea102ba77b5
Parents: e787b7a
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 08:32:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 08:32:40 2014 -0600

--
 CHANGES.txt | 8 ++--
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47fc3a3..ffda82c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.2.16
+ * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
+   (CASSANDRA-6732)
  * Fix broken streams when replacing with same IP (CASSANDRA-6622)
  * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
  * Fix partition and range deletes not triggering flush (CASSANDRA-6655)
@@ -6,8 +8,10 @@
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
  * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
- * Use real node messaging versions for schema exchange decisions (CASSANDRA-6700)
- * IN on the last clustering columns + ORDER BY DESC yield no results (CASSANDRA-6701)
+ * Use real node messaging versions for schema exchange decisions 
+   (CASSANDRA-6700)
+ * IN on the last clustering columns + ORDER BY DESC yield no results 
+   (CASSANDRA-6701)
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c92b20b3/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index dbe029b..ca82a1f 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -614,7 +614,7 @@ public class StorageProxy implements StorageProxyMBean
 InetAddress target = iter.next();
 
 // direct writes to local DC or old Cassandra versions
-if (localDC || MessagingService.instance().getVersion(target) < MessagingService.VERSION_11)
+if (localDC || MessagingService.instance().getVersion(target) < MessagingService.VERSION_12)
 {
 // yes, the loop and non-loop code here are the same; this is clunky but we want to avoid
 // creating a second iterator since we already have a perfectly good one



[07/12] git commit: Use correct partitioner in AbstractViewSSTableFinder patch by jbellis; reviewed by yukim for CASSANDRA-6734

2014-02-19 Thread jbellis
Use correct partitioner in AbstractViewSSTableFinder
patch by jbellis; reviewed by yukim for CASSANDRA-6734


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84103bbe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84103bbe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84103bbe

Branch: refs/heads/cassandra-2.1
Commit: 84103bbe2894706d224dc5975ca6bdaaa6f7f6c4
Parents: 55d8da4
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 09:58:28 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 09:58:28 2014 -0600

--
 .../apache/cassandra/db/ColumnFamilyStore.java  | 30 +++-
 .../org/apache/cassandra/db/DataTracker.java|  7 +
 2 files changed, 17 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84103bbe/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 38d87db..f25f934 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1448,17 +1448,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 return markCurrentViewReferenced().sstables;
 }
 
-abstract class AbstractViewSSTableFinder
-{
-abstract List<SSTableReader> findSSTables(DataTracker.View view);
-protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
-{
-RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
-return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
-}
-}
-
-private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
+private ViewFragment markReferenced(Function<DataTracker.View, List<SSTableReader>> filter)
 {
 List<SSTableReader> sstables;
 DataTracker.View view;
@@ -1473,7 +1463,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 break;
 }
 
-sstables = finder.findSSTables(view);
+sstables = filter.apply(view);
 if (SSTableReader.acquireReferences(sstables))
 break;
 // retry w/ new view
@@ -1489,9 +1479,9 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 public ViewFragment markReferenced(final DecoratedKey key)
 {
 assert !key.isMinimum();
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
 return compactionStrategy.filterSSTablesForReads(view.intervalTree.search(key));
 }
@@ -1504,11 +1494,11 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  */
 public ViewFragment markReferenced(final AbstractBounds<RowPosition> rowBounds)
 {
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
-return compactionStrategy.filterSSTablesForReads(sstablesForRowBounds(rowBounds, view));
+return compactionStrategy.filterSSTablesForReads(view.sstablesInBounds(rowBounds));
 }
 });
 }
@@ -1519,13 +1509,13 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  */
 public ViewFragment markReferenced(final Collection<AbstractBounds<RowPosition>> rowBoundsCollection)
 {
-return markReferenced(new AbstractViewSSTableFinder()
+return markReferenced(new Function<DataTracker.View, List<SSTableReader>>()
 {
-List<SSTableReader> findSSTables(DataTracker.View view)
+public List<SSTableReader> apply(DataTracker.View view)
 {
 Set<SSTableReader> sstables = Sets.newHashSet();
 for (AbstractBounds<RowPosition> rowBounds : rowBoundsCollection)
-sstables.addAll(sstablesForRowBounds(rowBounds, view));
+sstables.addAll(view.sstablesInBounds(rowBounds));
 
 return ImmutableList.copyOf(sstables);
 }


[10/12] git commit: merge from 2.0

2014-02-19 Thread jbellis
merge from 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e101bef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e101bef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e101bef

Branch: refs/heads/trunk
Commit: 8e101bef056eb00173f51ec7fb6e3b6b251d105d
Parents: 5afd2bd 84103bb
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 19 10:01:10 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 19 10:01:10 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 30 +++-
 .../org/apache/cassandra/db/DataTracker.java| 21 --
 .../apache/cassandra/db/marshal/DateType.java   |  2 +-
 4 files changed, 25 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e101bef/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e101bef/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ca4ff0a,f25f934..76160ea
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -1708,43 -1448,7 +1708,33 @@@ public class ColumnFamilyStore implemen
  return markCurrentViewReferenced().sstables;
  }
  
 +public Set<SSTableReader> getUnrepairedSSTables()
 +{
 +Set<SSTableReader> unRepairedSSTables = new HashSet<>(getSSTables());
 +Iterator<SSTableReader> sstableIterator = unRepairedSSTables.iterator();
 +while(sstableIterator.hasNext())
 +{
 +SSTableReader sstable = sstableIterator.next();
 +if (sstable.isRepaired())
 +sstableIterator.remove();
 +}
 +return unRepairedSSTables;
 +}
 +
 +public Set<SSTableReader> getRepairedSSTables()
 +{
 +Set<SSTableReader> repairedSSTables = new HashSet<>(getSSTables());
 +Iterator<SSTableReader> sstableIterator = repairedSSTables.iterator();
 +while(sstableIterator.hasNext())
 +{
 +SSTableReader sstable = sstableIterator.next();
 +if (!sstable.isRepaired())
 +sstableIterator.remove();
 +}
 +return repairedSSTables;
 +}
 +
- abstract class AbstractViewSSTableFinder
- {
- abstract List<SSTableReader> findSSTables(DataTracker.View view);
- protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
- {
- RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
- return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
- }
- }
- 
- private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
+ private ViewFragment markReferenced(Function<DataTracker.View, List<SSTableReader>> filter)
  {
  List<SSTableReader> sstables;
  DataTracker.View view;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e101bef/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --cc src/java/org/apache/cassandra/db/DataTracker.java
index e51f380,c1ae00f..30bd360
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@@ -32,12 -29,14 +29,14 @@@ import org.slf4j.LoggerFactory
  
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.db.compaction.OperationType;
+ import org.apache.cassandra.dht.AbstractBounds;
 -import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.sstable.SSTableReader;
  import org.apache.cassandra.io.util.FileUtils;
  import org.apache.cassandra.metrics.StorageMetrics;
  import org.apache.cassandra.notifications.*;
  import org.apache.cassandra.utils.Interval;
  import org.apache.cassandra.utils.IntervalTree;
++import org.apache.cassandra.utils.concurrent.OpOrder;
  
  public class DataTracker
  {
@@@ -317,47 -322,11 +316,47 @@@
  /** (Re)initializes the tracker, purging all references. */
  void init()
  {
 -view.set(new View(new Memtable(cfstore),
 -  Collections.<Memtable>emptySet(),
 -  Collections.<SSTableReader>emptySet(),
 -  Collections.<SSTableReader>emptySet(),
 -  SSTableIntervalTree.empty()));
 +view.set(new View(
- ImmutableList.of(new 

[02/12] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0520aeb7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0520aeb7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0520aeb7

Branch: refs/heads/cassandra-2.1
Commit: 0520aeb7e751626b62268c1495c941d33b01cdfb
Parents: e67a0a9 e787b7a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 19 11:42:25 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 19 11:42:25 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/marshal/DateType.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0520aeb7/CHANGES.txt
--
diff --cc CHANGES.txt
index 95faf18,47fc3a3..2cacbaa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,34 -10,24 +30,35 @@@ Merged from 1.2
   * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
   * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
   * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
+  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
  
  
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0520aeb7/src/java/org/apache/cassandra/db/marshal/DateType.java
--



[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905589#comment-13905589
 ] 

Brandon Williams commented on CASSANDRA-6731:
-

Can someone comment on the impetus to set dynamic_snitch_badness_threshold to 
zero?  I suspect this is some kind of attempt to disable the dsnitch, when in 
fact it's telling it to go wild and pick whatever has the best score regardless 
of the difference. To disable the dsnitch you'd want to add 'dynamic_snitch: false'.
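
For reference, a minimal cassandra.yaml sketch of the two settings being contrasted here (only these two keys; everything else left at its default):
{noformat}
# Keeps the dynamic snitch on; 0 means "always chase the best-scored replica,
# however small the difference", which is consistent with the redirects traced here.
dynamic_snitch_badness_threshold: 0

# Actually disables the dynamic snitch wrapper around the configured endpoint_snitch.
dynamic_snitch: false
{noformat}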

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905638#comment-13905638
 ] 

Ryan McGuire commented on CASSANDRA-6731:
-

Reran the tests with new settings on a reduced sample set of queries (30):

 * With cassandra defaults: 3% of queries  (1 of 30) contacted other nodes
 * With cassandra defaults, but dsnitch disabled:  3% of queries  (1 of 30) 
contacted other nodes
 * With dynamic_snitch_badness_threshold:0  : 13% of queries (4 of 30) went to 
other nodes


 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905642#comment-13905642
 ] 

Jonathan Ellis commented on CASSANDRA-6731:
---

Do you mean, defaults but rrc=0?

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905645#comment-13905645
 ] 

Ryan McGuire commented on CASSANDRA-6731:
-

Yes, read repair is off for all of these, just meant the yaml defaults.

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Aris Prassinos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905657#comment-13905657
 ] 

Aris Prassinos commented on CASSANDRA-6731:
---

Is the dynamic_snitch a documented parameter?
Also does this apply to the PropertyFileSnitch that I'm using? 

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Ryan McGuire
 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6456) log cassandra.yaml at startup

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6456:
--

Summary: log cassandra.yaml at startup  (was: log listen address at startup)

 log cassandra.yaml at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Fix For: 2.1

 Attachments: 6456_v4_trunk.patch, CASSANDRA-6456-2.patch, 
 CASSANDRA-6456-3.patch, CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6456) log listen address and other cassandra.yaml config at startup

2014-02-19 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-6456:


Summary: log listen address and other cassandra.yaml config at startup  
(was: log cassandra.yaml at startup)

 log listen address and other cassandra.yaml config at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Fix For: 2.1

 Attachments: 6456_v4_trunk.patch, CASSANDRA-6456-2.patch, 
 CASSANDRA-6456-3.patch, CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6456) log cassandra.yaml at startup

2014-02-19 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-6456:


Summary: log cassandra.yaml at startup  (was: log listen address and other 
cassandra.yaml config at startup)

 log cassandra.yaml at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Fix For: 2.1

 Attachments: 6456_v4_trunk.patch, CASSANDRA-6456-2.patch, 
 CASSANDRA-6456-3.patch, CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6731) Requests unnecessarily redirected with CL=ONE

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6731:
--

 Reviewer: Brandon Williams
 Priority: Minor  (was: Major)
   Tester: Ryan McGuire
Fix Version/s: 2.0.6
 Assignee: Tyler Hobbs  (was: Ryan McGuire)

 Requests unnecessarily redirected with CL=ONE
 -

 Key: CASSANDRA-6731
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6731
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Aris Prassinos
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.6

 Attachments: 6731.traces.txt, ticket_6731.py


 Three-node cluster with RF=3. All data currently in sync.
 Network topology strategy. Each node defined to be on a different rack.
 endpoint_snitch: PropertyFileSnitch
 dynamic_snitch_update_interval_in_ms: 100
 dynamic_snitch_reset_interval_in_ms: 60
 dynamic_snitch_badness_threshold: 0
 All tables defined with read_repair_chance=0
 From cqlsh when querying with Consistency=ONE tracing shows that requests get 
 frequently redirected to other nodes even though the local node has the data.
 There is no other activity on the cluster besides my test queries.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (CASSANDRA-2434) range movements can violate consistency

2014-02-19 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-2434:
-

Assignee: T Jake Luciani

 range movements can violate consistency
 ---

 Key: CASSANDRA-2434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2434
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Schuller
Assignee: T Jake Luciani
 Fix For: 2.1

 Attachments: 2434-3.patch.txt, 2434-testery.patch.txt


 My reading (a while ago) of the code indicates that there is no logic 
 involved during bootstrapping that avoids consistency level violations. If I 
 recall correctly it just grabs neighbors that are currently up.
 There are at least two issues I have with this behavior:
 * If I have a cluster where I have applications relying on QUORUM with RF=3, 
 and bootstrapping complete based on only one node, I have just violated the 
 supposedly guaranteed consistency semantics of the cluster.
 * Nodes can flap up and down at any time, so even if a human takes care to 
 look at which nodes are up and thinks about it carefully before 
 bootstrapping, there's no guarantee.
 A complication is that not only does it depend on use-case where this is an 
 issue (if all you ever do you do at CL.ONE, it's fine); even in a cluster 
 which is otherwise used for QUORUM operations you may wish to accept 
 less-than-quorum nodes during bootstrap in various emergency situations.
 A potential easy fix is to have bootstrap take an argument which is the 
 number of hosts to bootstrap from, or to assume QUORUM if none is given.
 (A related concern is bootstrapping across data centers. You may *want* to 
 bootstrap to a local node and then do a repair to avoid sending loads of data 
 across DC:s while still achieving consistency. Or even if you don't care 
 about the consistency issues, I don't think there is currently a way to 
 bootstrap from local nodes only.)
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-2434) range movements can violate consistency

2014-02-19 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905719#comment-13905719
 ] 

T Jake Luciani commented on CASSANDRA-2434:
---

I've taken a crack at this, initially for 1.2 since it solves my pain. I'd appreciate a review.

As [~jbellis] mentions above, it requires only one node to be added at a time. The bootstrapping node must also add -Dconsistent.bootstrap=true.

#code
https://github.com/tjake/cassandra/tree/2434

#dtest showing it works (use ENABLE_VNODES=yes)
https://github.com/tjake/cassandra-dtest/tree/2434
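
A sketch of how the flag above would be passed on the joining node (assuming it is simply read as a JVM system property, e.g. appended in cassandra-env.sh), one node at a time:
{noformat}
JVM_OPTS="$JVM_OPTS -Dconsistent.bootstrap=true"
{noformat}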



 range movements can violate consistency
 ---

 Key: CASSANDRA-2434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2434
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Schuller
Assignee: T Jake Luciani
 Fix For: 2.1

 Attachments: 2434-3.patch.txt, 2434-testery.patch.txt


 My reading (a while ago) of the code indicates that there is no logic 
 involved during bootstrapping that avoids consistency level violations. If I 
 recall correctly it just grabs neighbors that are currently up.
 There are at least two issues I have with this behavior:
 * If I have a cluster where I have applications relying on QUORUM with RF=3, 
 and bootstrapping complete based on only one node, I have just violated the 
 supposedly guaranteed consistency semantics of the cluster.
 * Nodes can flap up and down at any time, so even if a human takes care to 
 look at which nodes are up and thinks about it carefully before 
 bootstrapping, there's no guarantee.
 A complication is that not only does it depend on use-case where this is an 
 issue (if all you ever do you do at CL.ONE, it's fine); even in a cluster 
 which is otherwise used for QUORUM operations you may wish to accept 
 less-than-quorum nodes during bootstrap in various emergency situations.
 A potential easy fix is to have bootstrap take an argument which is the 
 number of hosts to bootstrap from, or to assume QUORUM if none is given.
 (A related concern is bootstrapping across data centers. You may *want* to 
 bootstrap to a local node and then do a repair to avoid sending loads of data 
 across DC:s while still achieving consistency. Or even if you don't care 
 about the consistency issues, I don't think there is currently a way to 
 bootstrap from local nodes only.)
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6660) Make node tool command take a password file

2014-02-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905720#comment-13905720
 ] 

sankalp kohli commented on CASSANDRA-6660:
--

tested v3 and it works. We can check it in. 

 Make node tool command take a password file
 ---

 Key: CASSANDRA-6660
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6660
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Clément Lardeur
Priority: Trivial
  Labels: nodetool
 Fix For: 2.1

 Attachments: trunk-6660-v1.patch, trunk-6660-v2.patch, 
 trunk-6660-v3.patch


 We are sending the jmx password in the clear to the node tool command in 
 production. This is a security risk. Any one doing a 'ps' can see the clear 
 password. Can we change the node tool command to also take a password file 
 argument. This file will list the JMX user and passwords. Example below:
 cat /cassandra/run/10003004.jmxpasswd
 monitorRole abc
 controlRole def
 Based on the user name provided, node tool can pick up the right password. 
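
Assuming the -pwf/--password-file option from the attached patches, a hypothetical invocation against the example file above would be:
{noformat}
nodetool -u controlRole -pwf /cassandra/run/10003004.jmxpasswd status
{noformat}
nodetool would then pick the line starting with controlRole and use 'def' as the JMX password.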



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6475) Control nodetool history logging directory

2014-02-19 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-6475:
-

Attachment: trunk-6475.diff

The patch removes the hard-coded path to the user home directory. It gives an option to set that directory, like we have in nodetool.

 Control nodetool history logging directory
 --

 Key: CASSANDRA-6475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6475
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Brian Nixon
Assignee: sankalp kohli
Priority: Trivial
  Labels: lhf
 Attachments: trunk-6475.diff


 Nodetool history is logged to a directory based on the current user home. 
 This leads to splintering of history with more than one user and, in one 
 case, a failure to run any nodetool commands as the user did not have write 
 access to their home directory.
 Suggested fix is to make the base directory for the history logging (both 
 nodetool and cli) configurable. A way to disable the logging of these tools 
 would also help.
 Reference:
 https://issues.apache.org/jira/browse/CASSANDRA-5823
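
For context, the files in question live under the current user's home directory; assuming the usual defaults, these are paths like:
{noformat}
~/.cassandra/nodetool.history
~/.cassandra/cli.history
{noformat}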



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-02-19 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/420fd011
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/420fd011
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/420fd011

Branch: refs/heads/trunk
Commit: 420fd01101fc0cb1be2f43981a6931580bfcc833
Parents: 97a529f 657e160
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 19 11:37:34 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 19 11:37:34 2014 -0600

--
 CHANGES.txt |  3 ++
 .../org/apache/cassandra/tools/NodeTool.java| 56 ++--
 2 files changed, 54 insertions(+), 5 deletions(-)
--




[jira] [Comment Edited] (CASSANDRA-6475) Control nodetool history logging directory

2014-02-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905734#comment-13905734
 ] 

sankalp kohli edited comment on CASSANDRA-6475 at 2/19/14 5:38 PM:
---

The patch removes the hard-coded path to the user home directory in cqlsh. It gives an option to set that directory, like we have in nodetool.


was (Author: kohlisankalp):
The patch removes the hard coded path to user home directory. it gives an 
option to set that directory like we have in node tool. 

 Control nodetool history logging directory
 --

 Key: CASSANDRA-6475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6475
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Brian Nixon
Assignee: sankalp kohli
Priority: Trivial
  Labels: lhf
 Attachments: trunk-6475.diff


 Nodetool history is logged to a directory based on the current user home. 
 This leads to splintering of history with more than one user and, in one 
 case, a failure to run any nodetool commands as the user did not have write 
 access to their home directory.
 Suggested fix is to make the base directory for the history logging (both 
 nodetool and cli) configurable. A way to disable the logging of these tools 
 would also help.
 Reference:
 https://issues.apache.org/jira/browse/CASSANDRA-5823



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[1/3] git commit: Allow nodetool to use a file/prompt for password Patch by Clément Lardeur, reviewed by Sankalp Kohli for CASSANDRA-6660

2014-02-19 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 8e101bef0 -> 657e16006
  refs/heads/trunk 97a529f06 -> 420fd0110


Allow nodetool to use a file/prompt for password
Patch by Clément Lardeur, reviewed by Sankalp Kohli for CASSANDRA-6660


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/657e1600
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/657e1600
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/657e1600

Branch: refs/heads/cassandra-2.1
Commit: 657e16006e56b52cdb06a6014e4a2a8bfd87d77c
Parents: 8e101be
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 19 11:35:35 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 19 11:37:24 2014 -0600

--
 CHANGES.txt |  3 ++
 .../org/apache/cassandra/tools/NodeTool.java| 56 ++--
 2 files changed, 54 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/657e1600/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 23a5173..583245a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.0-beta2
+ * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
+
 2.1.0-beta1
  * Add flush directory distinct from compaction directories (CASSANDRA-6357)
  * Require JNA by default (CASSANDRA-6575)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/657e1600/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index a12efac..fedf2c1 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -17,9 +17,7 @@
  */
 package org.apache.cassandra.tools;
 
-import java.io.File;
-import java.io.FileWriter;
-import java.io.IOException;
+import java.io.*;
 import java.lang.management.MemoryUsage;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
@@ -60,8 +58,7 @@ import static com.google.common.collect.Lists.newArrayList;
 import static java.lang.Integer.parseInt;
 import static java.lang.String.format;
 import static org.apache.commons.lang3.ArrayUtils.EMPTY_STRING_ARRAY;
-import static org.apache.commons.lang3.StringUtils.EMPTY;
-import static org.apache.commons.lang3.StringUtils.join;
+import static org.apache.commons.lang3.StringUtils.*;
 
 public class NodeTool
 {
@@ -223,9 +220,20 @@ public class NodeTool
 @Option(type = OptionType.GLOBAL, name = {"-pw", "--password"}, description = "Remote jmx agent password")
 private String password = EMPTY;
 
+@Option(type = OptionType.GLOBAL, name = {"-pwf", "--password-file"}, description = "Path to the JMX password file")
+private String passwordFilePath = EMPTY;
+
 @Override
 public void run()
 {
+if (isNotEmpty(username)) {
+if (isNotEmpty(passwordFilePath))
+password = readUserPasswordFromFile(username, passwordFilePath);
+
+if (isEmpty(password))
+password = promptAndReadPassword();
+}
+
 try (NodeProbe probe = connect())
 {
 execute(probe);
@@ -236,6 +244,44 @@ public class NodeTool
 
 }
 
+private String readUserPasswordFromFile(String username, String passwordFilePath) {
+String password = EMPTY;
+
+File passwordFile = new File(passwordFilePath);
+try (Scanner scanner = new Scanner(passwordFile).useDelimiter("\\s+"))
+{
+while (scanner.hasNextLine())
+{
+if (scanner.hasNext())
+{
+String jmxRole = scanner.next();
+if (jmxRole.equals(username) && scanner.hasNext())
+{
+password = scanner.next();
+break;
+}
+}
+scanner.nextLine();
+}
+} catch (FileNotFoundException e)
+{
+throw new RuntimeException(e);
+}
+
+return password;
+}
+
+private String promptAndReadPassword()
+{
+String password = EMPTY;
+
+Console console = System.console();
+if (console != null)
+password = String.valueOf(console.readPassword("Password:"));
+
+return password;
+}
+
 protected abstract void execute(NodeProbe probe);
 
 private 

[2/3] git commit: Allow nodetool to use a file/prompt for password Patch by Clément Lardeur, reviewed by Sankalp Kohli for CASSANDRA-6660

2014-02-19 Thread brandonwilliams
Allow nodetool to use a file/prompt for password
Patch by Clément Lardeur, reviewed by Sankalp Kohli for CASSANDRA-6660


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/657e1600
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/657e1600
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/657e1600

Branch: refs/heads/trunk
Commit: 657e16006e56b52cdb06a6014e4a2a8bfd87d77c
Parents: 8e101be
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 19 11:35:35 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 19 11:37:24 2014 -0600

--
 CHANGES.txt |  3 ++
 .../org/apache/cassandra/tools/NodeTool.java| 56 ++--
 2 files changed, 54 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/657e1600/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 23a5173..583245a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.0-beta2
+ * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
+
 2.1.0-beta1
  * Add flush directory distinct from compaction directories (CASSANDRA-6357)
  * Require JNA by default (CASSANDRA-6575)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/657e1600/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index a12efac..fedf2c1 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -17,9 +17,7 @@
  */
 package org.apache.cassandra.tools;
 
-import java.io.File;
-import java.io.FileWriter;
-import java.io.IOException;
+import java.io.*;
 import java.lang.management.MemoryUsage;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
@@ -60,8 +58,7 @@ import static com.google.common.collect.Lists.newArrayList;
 import static java.lang.Integer.parseInt;
 import static java.lang.String.format;
 import static org.apache.commons.lang3.ArrayUtils.EMPTY_STRING_ARRAY;
-import static org.apache.commons.lang3.StringUtils.EMPTY;
-import static org.apache.commons.lang3.StringUtils.join;
+import static org.apache.commons.lang3.StringUtils.*;
 
 public class NodeTool
 {
@@ -223,9 +220,20 @@ public class NodeTool
 @Option(type = OptionType.GLOBAL, name = {"-pw", "--password"}, description = "Remote jmx agent password")
 private String password = EMPTY;
 
+@Option(type = OptionType.GLOBAL, name = {"-pwf", "--password-file"}, description = "Path to the JMX password file")
+private String passwordFilePath = EMPTY;
+
 @Override
 public void run()
 {
+if (isNotEmpty(username)) {
+if (isNotEmpty(passwordFilePath))
+password = readUserPasswordFromFile(username, passwordFilePath);
+
+if (isEmpty(password))
+password = promptAndReadPassword();
+}
+
 try (NodeProbe probe = connect())
 {
 execute(probe);
@@ -236,6 +244,44 @@ public class NodeTool
 
 }
 
+private String readUserPasswordFromFile(String username, String passwordFilePath) {
+String password = EMPTY;
+
+File passwordFile = new File(passwordFilePath);
+try (Scanner scanner = new Scanner(passwordFile).useDelimiter("\\s+"))
+{
+while (scanner.hasNextLine())
+{
+if (scanner.hasNext())
+{
+String jmxRole = scanner.next();
+if (jmxRole.equals(username) && scanner.hasNext())
+{
+password = scanner.next();
+break;
+}
+}
+scanner.nextLine();
+}
+} catch (FileNotFoundException e)
+{
+throw new RuntimeException(e);
+}
+
+return password;
+}
+
+private String promptAndReadPassword()
+{
+String password = EMPTY;
+
+Console console = System.console();
+if (console != null)
+password = String.valueOf(console.readPassword("Password:"));
+
+return password;
+}
+
 protected abstract void execute(NodeProbe probe);
 
 private NodeProbe connect()



[jira] [Created] (CASSANDRA-6735) Exceptions during memtable flushes on shutdown hook prevent process shutdown

2014-02-19 Thread Sergio Bossa (JIRA)
Sergio Bossa created CASSANDRA-6735:
---

 Summary: Exceptions during memtable flushes on shutdown hook 
prevent process shutdown
 Key: CASSANDRA-6735
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6735
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
Priority: Minor


If an exception occurs while flushing memtables during the shutdown hook, the 
process is left hanging due to non-daemon threads still running.
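
A minimal, self-contained sketch of the general pattern (not the attached patch; the flush and executor names are hypothetical stand-ins): catch the failure inside the hook so the remaining shutdown work still runs and the JVM can exit.
{noformat}
public class DrainOnShutdownSketch
{
    // Hypothetical stand-ins for the real memtable flush and thread-pool shutdown.
    static void flushAllMemtables() throws Exception { throw new Exception("flush failed"); }
    static void stopNonDaemonExecutors() { System.out.println("executors stopped"); }

    public static void main(String[] args)
    {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable()
        {
            public void run()
            {
                try
                {
                    flushAllMemtables();
                }
                catch (Throwable t)
                {
                    // Log and continue: an exception here must not abort the hook.
                    System.err.println("Flush failed during shutdown: " + t);
                }
                finally
                {
                    // Always stop the remaining non-daemon threads so the process can exit.
                    stopNonDaemonExecutors();
                }
            }
        }, "drain-on-shutdown"));
    }
}
{noformat}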



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6735) Exceptions during memtable flushes on shutdown hook prevent process shutdown

2014-02-19 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-6735:


Attachment: CASSANDRA-6735.patch

 Exceptions during memtable flushes on shutdown hook prevent process shutdown
 

 Key: CASSANDRA-6735
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6735
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
Priority: Minor
 Attachments: CASSANDRA-6735.patch


 If an exception occurs while flushing memtables during the shutdown hook, the 
 process is left hanging due to non-daemon threads still running.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6735) Exceptions during memtable flushes on shutdown hook prevent process shutdown

2014-02-19 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-6735:


Attachment: CASSANDRA-6735-1.2.patch

 Exceptions during memtable flushes on shutdown hook prevent process shutdown
 

 Key: CASSANDRA-6735
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6735
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
Priority: Minor
 Attachments: CASSANDRA-6735-1.2.patch, CASSANDRA-6735.patch


 If an exception occurs while flushing memtables during the shutdown hook, the 
 process is left hanging due to non-daemon threads still running.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6735) Exceptions during memtable flushes on shutdown hook prevent process shutdown

2014-02-19 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-6735:


Reproduced In: 2.0.5, 1.2.15  (was: 2.0.5)

 Exceptions during memtable flushes on shutdown hook prevent process shutdown
 

 Key: CASSANDRA-6735
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6735
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
Priority: Minor
 Attachments: CASSANDRA-6735-1.2.patch, CASSANDRA-6735.patch


 If an exception occurs while flushing memtables during the shutdown hook, the 
 process is left hanging due to non-daemon threads still running.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6736) Windows7 AccessDeniedException on commit log

2014-02-19 Thread Bill Mitchell (JIRA)
Bill Mitchell created CASSANDRA-6736:


 Summary: Windows7 AccessDeniedException on commit log 
 Key: CASSANDRA-6736
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6736
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, quad core, 8GB RAM, single Cassandra node, 
Cassandra 2.0.5 with leakdetect patch from CASSANDRA-6283
Reporter: Bill Mitchell
 Attachments: 2014-02-18-22-16.log

Similar to the data file deletion of CASSANDRA-6283, under heavy load with 
logged batches, I am seeing a problem where the Commit log cannot be deleted:
 ERROR [COMMIT-LOG-ALLOCATOR] 2014-02-18 22:15:58,252 CassandraDaemon.java 
(line 192) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
 FSWriteError in C:\Program Files\DataStax 
Community\data\commitlog\CommitLog-3-1392761510706.log
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:120)
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:150)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$4.run(CommitLogAllocator.java:217)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.file.AccessDeniedException: C:\Program Files\DataStax 
Community\data\commitlog\CommitLog-3-1392761510706.log
at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
at java.nio.file.Files.delete(Unknown Source)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:116)
... 5 more
(Attached in 2014-02-18-22-16.log is a larger excerpt from the cassandra.log.)

In this particular case, I was trying to do 100 million inserts into two tables 
in parallel, one with a single wide row and one with narrow rows, and the error 
appeared after inserting 43,151,232 rows.  So it does take a while to trip over 
this timing issue.  

It may be aggravated by the size of the batches. This test was writing 10,000 
rows to each table in a batch.  

When I switch the same test from using a logged batch to an unlogged batch, no 
such failure appears. So the issue could be related to the use of large, logged 
batches, or it could be that unlogged batches just change the probability of 
failure.  






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6737) A batch statements on a single partition should create a new CF object for each update

2014-02-19 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-6737:
---

 Summary: A batch statements on a single partition should create a 
new CF object for each update
 Key: CASSANDRA-6737
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6737
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6


BatchStatement creates a new ColumnFamily object (as well as a new RowMutation 
object) for every update in the batch, even if all those updates are actually on 
the same partition. This is particularly inefficient when bulkloading data into 
a single partition (which is not all that uncommon).
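
The gist of the proposed change, as a rough sketch (hypothetical types, not BatchStatement's actual code): key the batch's updates by partition and reuse one accumulating object per partition instead of allocating one per statement.

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class PartitionGroupedBatch
{
    // Stand-in for a per-partition mutation that accumulates column updates.
    static final class PartitionUpdate
    {
        final ByteBuffer partitionKey;
        final List<String> columnUpdates = new ArrayList<>();

        PartitionUpdate(ByteBuffer partitionKey)
        {
            this.partitionKey = partitionKey;
        }
    }

    static final class Update
    {
        final ByteBuffer partitionKey;
        final String columnUpdate;

        Update(ByteBuffer partitionKey, String columnUpdate)
        {
            this.partitionKey = partitionKey;
            this.columnUpdate = columnUpdate;
        }
    }

    // One object per distinct partition, no matter how many statements touch it.
    static Map<ByteBuffer, PartitionUpdate> group(List<Update> batch)
    {
        Map<ByteBuffer, PartitionUpdate> byPartition = new HashMap<>();
        for (Update u : batch)
        {
            PartitionUpdate pu = byPartition.get(u.partitionKey);
            if (pu == null)
            {
                pu = new PartitionUpdate(u.partitionKey);
                byPartition.put(u.partitionKey, pu);
            }
            pu.columnUpdates.add(u.columnUpdate);
        }
        return byPartition;
    }
}
{code}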



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (CASSANDRA-6736) Windows7 AccessDeniedException on commit log

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6736:
-

Assignee: Joshua McKenzie

 Windows7 AccessDeniedException on commit log 
 -

 Key: CASSANDRA-6736
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6736
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, quad core, 8GB RAM, single Cassandra node, 
 Cassandra 2.0.5 with leakdetect patch from CASSANDRA-6283
Reporter: Bill Mitchell
Assignee: Joshua McKenzie
 Attachments: 2014-02-18-22-16.log


 Similar to the data file deletion of CASSANDRA-6283, under heavy load with 
 logged batches, I am seeing a problem where the Commit log cannot be deleted:
  ERROR [COMMIT-LOG-ALLOCATOR] 2014-02-18 22:15:58,252 CassandraDaemon.java 
 (line 192) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
  FSWriteError in C:\Program Files\DataStax 
 Community\data\commitlog\CommitLog-3-1392761510706.log
   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:120)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:150)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$4.run(CommitLogAllocator.java:217)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.nio.file.AccessDeniedException: C:\Program Files\DataStax 
 Community\data\commitlog\CommitLog-3-1392761510706.log
   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
   at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
   at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
   at java.nio.file.Files.delete(Unknown Source)
   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:116)
   ... 5 more
 (Attached in 2014-02-18-22-16.log is a larger excerpt from the cassandra.log.)
 In this particular case, I was trying to do 100 million inserts into two 
 tables in parallel, one with a single wide row and one with narrow rows, and 
 the error appeared after inserting 43,151,232 rows.  So it does take a while 
 to trip over this timing issue.  
 It may be aggravated by the size of the batches. This test was writing 10,000 
 rows to each table in a batch.  
 When I switch the same test from using a logged batch to an unlogged batch, 
 no such failure appears. So the issue could be related to the use of large, 
 logged batches, or it could be that unlogged batches just change the 
 probability of failure.  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6737) A batch statements on a single partition should not create a new CF object for each update

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6737:
--

Summary: A batch statements on a single partition should not create a new 
CF object for each update  (was: A batch statements on a single partition 
should create a new CF object for each update)

 A batch statements on a single partition should not create a new CF object 
 for each update
 --

 Key: CASSANDRA-6737
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6737
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6


 BatchStatement creates a new ColumnFamily object (as well as a new 
 RowMutation object) for every update in the batch, even if all those updates 
 are actually on the same partition. This is particularly inefficient when 
 bulkloading data into a single partition (which is not all that uncommon).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-2434) range movements can violate consistency

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2434:
--

Reviewer: Tyler Hobbs  (was: Nick Bailey)

([~thobbs] to review)

 range movements can violate consistency
 ---

 Key: CASSANDRA-2434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2434
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Schuller
Assignee: T Jake Luciani
 Fix For: 2.1

 Attachments: 2434-3.patch.txt, 2434-testery.patch.txt


 My reading (a while ago) of the code indicates that there is no logic 
 involved during bootstrapping that avoids consistency level violations. If I 
 recall correctly it just grabs neighbors that are currently up.
 There are at least two issues I have with this behavior:
 * If I have a cluster where I have applications relying on QUORUM with RF=3, 
 and bootstrapping completes based on only one node, I have just violated the 
 supposedly guaranteed consistency semantics of the cluster.
 * Nodes can flap up and down at any time, so even if a human takes care to 
 look at which nodes are up and thinks about it carefully before 
 bootstrapping, there's no guarantee.
 A complication is that whether this is an issue depends on the use case (if 
 all you ever do is at CL.ONE, it's fine); and even in a cluster which is 
 otherwise used for QUORUM operations, you may wish to accept less-than-quorum 
 nodes during bootstrap in various emergency situations.
 A potential easy fix is to have bootstrap take an argument which is the 
 number of hosts to bootstrap from, or to assume QUORUM if none is given.
 (A related concern is bootstrapping across data centers. You may *want* to 
 bootstrap to a local node and then do a repair to avoid sending loads of data 
 across DCs while still achieving consistency. Or even if you don't care 
 about the consistency issues, I don't think there is currently a way to 
 bootstrap from local nodes only.)
 Thoughts?
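
The "potential easy fix" mentioned above amounts to a guard roughly like the following (purely illustrative helper, not Cassandra code): refuse to bootstrap a range unless enough live source replicas are available, defaulting the requirement to quorum.

{code}
final class BootstrapGuard
{
    static int quorum(int replicationFactor)
    {
        return replicationFactor / 2 + 1;
    }

    /**
     * @param requiredSources explicit operator override, or null to default to quorum
     */
    static boolean canBootstrapRange(int liveSourceReplicas, int replicationFactor, Integer requiredSources)
    {
        int required = (requiredSources != null) ? requiredSources : quorum(replicationFactor);
        return liveSourceReplicas >= required;
    }
}
{code}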



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6737) A batch statements on a single partition should not create a new CF object for each update

2014-02-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6737:


Attachment: 6737.txt

Attaching a patch to make batch statements create only one CF and RowMutation 
object per partition. On a relatively simple benchmark inserting a 10k-row 
batch into a single partition (using the DataStax Java driver, code here: 
https://gist.github.com/pcmanus/9098347, this isn't meant to be fancy), I get 
more than a 20x improvement with this patch (batch insertion time drops from 
1.2 seconds to ~50-100ms).

Note that there is more optimization that can be done for single-partition 
batches through some special casing, but this is a very simple start.
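
For reference, a stripped-down benchmark of that shape looks roughly like the following (a sketch against the DataStax Java driver 2.0 API, not the linked gist itself; it assumes a table ks.t (k int, c int, v text, PRIMARY KEY (k, c)) already exists).

{code}
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class SinglePartitionBatchBench
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        PreparedStatement insert =
            session.prepare("INSERT INTO t (k, c, v) VALUES (?, ?, ?)");

        // 10k rows, all under the same partition key.
        BatchStatement batch = new BatchStatement();
        for (int i = 0; i < 10000; i++)
            batch.add(insert.bind(0, i, "value" + i));

        long start = System.nanoTime();
        session.execute(batch);
        System.out.println("batch took " + (System.nanoTime() - start) / 1000000 + " ms");

        cluster.close();
    }
}
{code}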


 A batch statements on a single partition should not create a new CF object 
 for each update
 --

 Key: CASSANDRA-6737
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6737
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6

 Attachments: 6737.txt


 BatchStatement creates a new ColumnFamily object (as well as a new 
 RowMutation object) for every update in the batch, even if all those update 
 are actually on the same partition. This is particularly inefficient when 
 bulkloading data into a single partition (which is not all that uncommon).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6577) ConcurrentModificationException during nodetool netstats

2014-02-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905861#comment-13905861
 ] 

Joshua McKenzie commented on CASSANDRA-6577:


Regarding the netstats exception:  the SessionInfo objects being operated on 
inside the creation of CompositeData aren't thread-safe, so the following use 
presents an opportunity for the Exception you're seeing.

Declaration:
{code:title=SessionInfo.java|borderstyle=solid}
this.receivingFiles = new HashMap<>();
this.sendingFiles = new HashMap<>();
{code}

These are iterated over while creating the various CompositeData types 
requested; however, updateProgress within SessionInfo would invalidate any 
iterators that are open on the collection at the time.

Changing these to ConcurrentHashMaps should prevent the concurrent modification 
exception race, though it's going to have a performance impact I'll need to 
think about some more.  We could go the NonBlockingHashMap (Cliff Click) route, 
but we don't have strong guarantees of consistency there.



 ConcurrentModificationException during nodetool netstats
 

 Key: CASSANDRA-6577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6577
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: AWS EC2 machines, 
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
 Each node has 32 vnodes.
Reporter: Shao-Chuan Wang
Assignee: Joshua McKenzie
Priority: Minor
  Labels: decommission, nodetool
 Fix For: 2.0.6


 The node is leaving and I wanted to check its netstats, but it raises 
 ConcurrentModificationException.
 {code}
 [ubuntu@ip-10-4-202-48 :~]# /mnt/cassandra_latest/bin/nodetool netstats
 Mode: LEAVING
 Exception in thread "main" java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
   at java.util.HashMap$ValueIterator.next(HashMap.java:954)
   at 
 com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
   at com.google.common.collect.Iterators.addAll(Iterators.java:357)
   at com.google.common.collect.Lists.newArrayList(Lists.java:146)
   at com.google.common.collect.Lists.newArrayList(Lists.java:128)
   at 
 org.apache.cassandra.streaming.management.SessionInfoCompositeData.toArrayOfCompositeData(SessionInfoCompositeData.java:161)
   at 
 org.apache.cassandra.streaming.management.SessionInfoCompositeData.toCompositeData(SessionInfoCompositeData.java:98)
   at 
 org.apache.cassandra.streaming.management.StreamStateCompositeData$1.apply(StreamStateCompositeData.java:82)
   at 
 org.apache.cassandra.streaming.management.StreamStateCompositeData$1.apply(StreamStateCompositeData.java:79)
   at com.google.common.collect.Iterators$8.transform(Iterators.java:794)
   at 
 com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
   at com.google.common.collect.Iterators.addAll(Iterators.java:357)
   at com.google.common.collect.Lists.newArrayList(Lists.java:146)
   at com.google.common.collect.Lists.newArrayList(Lists.java:128)
   at 
 org.apache.cassandra.streaming.management.StreamStateCompositeData.toCompositeData(StreamStateCompositeData.java:78)
   at 
 org.apache.cassandra.streaming.StreamManager$1.apply(StreamManager.java:87)
   at 
 org.apache.cassandra.streaming.StreamManager$1.apply(StreamManager.java:84)
   at com.google.common.collect.Iterators$8.transform(Iterators.java:794)
   at 
 com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
   at com.google.common.collect.Iterators.addAll(Iterators.java:357)
   at com.google.common.collect.Sets.newHashSet(Sets.java:238)
   at com.google.common.collect.Sets.newHashSet(Sets.java:218)
   at 
 org.apache.cassandra.streaming.StreamManager.getCurrentStreams(StreamManager.java:83)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 

[jira] [Comment Edited] (CASSANDRA-6577) ConcurrentModificationException during nodetool netstats

2014-02-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905861#comment-13905861
 ] 

Joshua McKenzie edited comment on CASSANDRA-6577 at 2/19/14 6:56 PM:
-

Regarding the netstats exception:  the SessionInfo objects being operated on 
inside the creation of CompositeData aren't thread-safe, so the following use 
presents an opportunity for the Exception you're seeing.

Declaration:
{code:title=SessionInfo.java|borderstyle=solid}
this.receivingFiles = new HashMap<>();
this.sendingFiles = new HashMap<>();
{code}

These are iterated over while creating the various CompositeData types 
requested; however, updateProgress within SessionInfo would invalidate any 
iterators that are open on the collection at the time.

Changing these to ConcurrentHashMaps still won't prevent this race since it's 
an iterator invalidation problem - we may need to consider locking on this 
Collection externally on the SessionInfo level, or creating a copy of the 
internal data for representation on netstats.
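
A minimal sketch of the "copy the internal data" option (illustrative field and method names only, not SessionInfo's actual code): hand the JMX/netstats path a copied snapshot, so it never iterates the map that the streaming thread is mutating.

{code}
import java.util.HashMap;
import java.util.Map;

final class ProgressSnapshot
{
    private final Map<String, Long> receivingFiles = new HashMap<>();

    // Called from the streaming thread as bytes arrive.
    synchronized void updateProgress(String file, long bytes)
    {
        receivingFiles.put(file, bytes);
    }

    // Called from the JMX/netstats path: build CompositeData from a copy, never the live map.
    synchronized Map<String, Long> snapshotReceiving()
    {
        return new HashMap<>(receivingFiles);
    }
}
{code}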




was (Author: joshuamckenzie):
Regarding the netstats exception:  the SessionInfo objects being operated on 
inside the creation of CompositeData aren't thread-safe, so the following use 
presents an opportunity for the Exception you're seeing.

Declaration:
{code:title=SessionInfo.java|borderstyle=solid}
this.receivingFiles = new HashMap<>();
this.sendingFiles = new HashMap<>();
{code}

These are iterated over while creating the various CompositeData types 
requested; however, updateProgress within SessionInfo would invalidate any 
iterators that are open on the collection at the time.

Changing these to ConcurrentHashMaps should prevent the concurrent modification 
exception race, though it's going to have a performance impact I'll need to 
think about some more.  We could go the NonBlockingHashMap (Cliff Click) route, 
but we don't have strong guarantees of consistency there.



 ConcurrentModificationException during nodetool netstats
 

 Key: CASSANDRA-6577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6577
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: AWS EC2 machines, 
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
 Each node has 32 vnodes.
Reporter: Shao-Chuan Wang
Assignee: Joshua McKenzie
Priority: Minor
  Labels: decommission, nodetool
 Fix For: 2.0.6


 The node is leaving and I wanted to check its netstats, but it raises 
 ConcurrentModificationException.
 {code}
 [ubuntu@ip-10-4-202-48 :~]# /mnt/cassandra_latest/bin/nodetool netstats
 Mode: LEAVING
 Exception in thread "main" java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
   at java.util.HashMap$ValueIterator.next(HashMap.java:954)
   at 
 com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
   at com.google.common.collect.Iterators.addAll(Iterators.java:357)
   at com.google.common.collect.Lists.newArrayList(Lists.java:146)
   at com.google.common.collect.Lists.newArrayList(Lists.java:128)
   at 
 org.apache.cassandra.streaming.management.SessionInfoCompositeData.toArrayOfCompositeData(SessionInfoCompositeData.java:161)
   at 
 org.apache.cassandra.streaming.management.SessionInfoCompositeData.toCompositeData(SessionInfoCompositeData.java:98)
   at 
 org.apache.cassandra.streaming.management.StreamStateCompositeData$1.apply(StreamStateCompositeData.java:82)
   at 
 org.apache.cassandra.streaming.management.StreamStateCompositeData$1.apply(StreamStateCompositeData.java:79)
   at com.google.common.collect.Iterators$8.transform(Iterators.java:794)
   at 
 com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
   at com.google.common.collect.Iterators.addAll(Iterators.java:357)
   at com.google.common.collect.Lists.newArrayList(Lists.java:146)
   at com.google.common.collect.Lists.newArrayList(Lists.java:128)
   at 
 org.apache.cassandra.streaming.management.StreamStateCompositeData.toCompositeData(StreamStateCompositeData.java:78)
   at 
 org.apache.cassandra.streaming.StreamManager$1.apply(StreamManager.java:87)
   at 
 org.apache.cassandra.streaming.StreamManager$1.apply(StreamManager.java:84)
   at com.google.common.collect.Iterators$8.transform(Iterators.java:794)
   at 
 com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
   at com.google.common.collect.Iterators.addAll(Iterators.java:357)
   at 

[jira] [Commented] (CASSANDRA-6134) More efficient BatchlogManager

2014-02-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905891#comment-13905891
 ] 

Aleksey Yeschenko commented on CASSANDRA-6134:
--

For the record, several of those improvements have already made it into C*. If 
we don't do the partitioning, then only two are left to implement:

1. Don't do full scans, but limit the range to (nothing could have been written 
earlier than that, batches not ready to replay yet); the uuids are timeuuids 
there now, so it's a simple change, on my todo list
2. Replay several batches simultaneously, async; this is slightly more work, 
but only slightly

Stuff that made it recently, thanks to rbranson: CASSANDRA-6569, 
CASSANDRA-6550, CASSANDRA-6488, CASSANDRA-6481

Stuff that's still waiting (aside from 1. and 2.) : CASSANDRA-6551

 More efficient BatchlogManager
 --

 Key: CASSANDRA-6134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6134
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oleg Anastasyev
Assignee: Oleg Anastasyev
Priority: Minor
 Attachments: BatchlogManager.txt


 As we discussed earlier in CASSANDRA-6079 this is the new BatchManager.
 It stores batch records in 
 {code}
 CREATE TABLE batchlog (
   id_partition int,
   id timeuuid,
   data blob,
   PRIMARY KEY (id_partition, id)
 ) WITH COMPACT STORAGE AND
   CLUSTERING ORDER BY (id DESC)
 {code}
 where id_partition is the minute-since-epoch of the id uuid. 
 So when it scans for batches to replay, it scans within a single partition for 
 a slice of ids from the last processed date until now minus the write timeout.
 So no full batchlog CF scan and no flood of random reads happen on a normal 
 cycle. 
 Other improvements:
 1. It runs every 1/2 of the write timeout and replays all batches written within 
 0.9 * write timeout from now. This way we ensure that batched updates will 
 be replayed by the moment the client times out on the coordinator.
 2. It submits all mutations from a single batch in parallel (like StorageProxy 
 does). The old implementation played them one-by-one, so a client could see 
 half-applied batches in the CF for a long time (depending on the size of the batch).
 3. It fixes a subtle race bug with an incorrect hint TTL calculation
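
A rough sketch of the bucketing and replay window described in that schema (illustrative helper names only, not the attached patch):

{code}
import java.util.concurrent.TimeUnit;

final class BatchlogPartitioning
{
    // id_partition: the minute-since-epoch bucket a batch's timeuuid falls into.
    static int partitionFor(long writeTimeMillis)
    {
        return (int) TimeUnit.MILLISECONDS.toMinutes(writeTimeMillis);
    }

    // Replay only batches written after the last processed point but older than
    // the write timeout; younger ones may still be acknowledged normally.
    static long[] replayWindowMillis(long lastProcessedMillis, long writeTimeoutMillis)
    {
        long now = System.currentTimeMillis();
        return new long[]{ lastProcessedMillis, now - writeTimeoutMillis };
    }
}
{code}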



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6685) Nodes never bootstrap if schema is empty

2014-02-19 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905898#comment-13905898
 ] 

Robert Coli commented on CASSANDRA-6685:


{quote}Luckily bootstrapping without schema isn't a huge problem or very 
common{quote}
In every cluster I have ever run, my initial setup process goes:

1) coalesce cluster
2) load schema

The above process seems very common to me; I am surprised to hear the suggestion 
that anyone ever does anything else. A bug such as this one, which would affect 
most noobs setting up My First Cassandra Cluster, also seems very common to 
me, and a huge problem. 

In my view, this bug is serious enough to warrant an accelerated 1.2.16 release 
schedule.

 Nodes never bootstrap if schema is empty
 

 Key: CASSANDRA-6685
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6685
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Brandon Williams
 Fix For: 1.2.16


 Since 1.2.15, bootstrap never completes if the schema is empty. The 
 bootstrapping node endlessly prints:
 bq. {{INFO 12:37:44,863 JOINING: waiting for schema information to complete}}
 until you add something to the schema (i.e. create a keyspace).
 The problem looks to be caused by CASSANDRA-6648, where 
 MigrationManager.isReadyForBootstrap() was changed to:
 bq. {{return Schema.instance.getVersion() != null && 
 !Schema.emptyVersion.equals(Schema.instance.getVersion());}}
 This is wrong since 
 {{Schema.emptyVersion.equals(Schema.instance.getVersion())}} is always true 
 if there is no schema.
 We need some different logic for determining when the schema is propagated.
 I haven't tested, but I expect this issue appears in 2.0.5 too.
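
One possible shape of that "different logic" (purely illustrative, not the committed fix): treat the schema as propagated once at least one live peer gossips the same version this node holds, which also covers a legitimately empty schema.

{code}
import java.util.Map;
import java.util.UUID;

final class SchemaReadiness
{
    /**
     * @param localVersion  this node's current schema version
     * @param peerVersions  schema versions gossiped by live peers (endpoint -> version)
     */
    static boolean isReadyForBootstrap(UUID localVersion, Map<String, UUID> peerVersions)
    {
        if (localVersion == null || peerVersions.isEmpty())
            return false;
        // An empty schema is still "agreed upon" if a live peer reports the same version.
        return peerVersions.containsValue(localVersion);
    }
}
{code}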



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6685) Nodes never bootstrap if schema is empty

2014-02-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905915#comment-13905915
 ] 

Brandon Williams commented on CASSANDRA-6685:
-

Thanks for your input.

 Nodes never bootstrap if schema is empty
 

 Key: CASSANDRA-6685
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6685
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Brandon Williams
 Fix For: 1.2.16


 Since 1.2.15, bootstrap never completes if the schema is empty. The 
 bootstrapping node endlessly prints:
 bq. {{INFO 12:37:44,863 JOINING: waiting for schema information to complete}}
 until you add something to the schema (i.e. create a keyspace).
 The problem looks to be caused by CASSANDRA-6648, where 
 MigrationManager.isReadyForBootstrap() was changed to:
 bq. {{return Schema.instance.getVersion() != null && 
 !Schema.emptyVersion.equals(Schema.instance.getVersion());}}
 This is wrong since 
 {{Schema.emptyVersion.equals(Schema.instance.getVersion())}} is always true 
 if there is no schema.
 We need some different logic for determining when the schema is propagated.
 I haven't tested, but I expect this issue appears in 2.0.5 too.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


git commit: Avoid NPEs when receiving table changes for an unknown keyspace

2014-02-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 c92b20b30 -> 5e40a3b7c


Avoid NPEs when receiving table changes for an unknown keyspace

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5631


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e40a3b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e40a3b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e40a3b7

Branch: refs/heads/cassandra-1.2
Commit: 5e40a3b7c120f430d73ab34db68b361c0313b2eb
Parents: c92b20b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:20:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:22:04 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ffda82c..51dec14 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,6 +15,7 @@
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+ * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java 
b/src/java/org/apache/cassandra/service/MigrationManager.java
index 584415d..9f6113c 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -210,7 +210,7 @@ public class MigrationManager
 throw new AlreadyExistsException(cfm.ksName, cfm.cfName);
 
 logger.info(String.format("Create new ColumnFamily: %s", cfm));
-announce(cfm.toSchema(FBUtilities.timestampMicros()));
+
announce(addSerializedKeyspace(cfm.toSchema(FBUtilities.timestampMicros()), 
cfm.ksName));
 }
 
 public static void announceKeyspaceUpdate(KSMetaData ksm) throws 
ConfigurationException
@@ -236,7 +236,7 @@ public class MigrationManager
 oldCfm.validateCompatility(cfm);
 
 logger.info(String.format("Update ColumnFamily '%s/%s' From %s To %s", 
cfm.ksName, cfm.cfName, oldCfm, cfm));
-announce(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros()));
+announce(addSerializedKeyspace(oldCfm.toSchemaUpdate(cfm, 
FBUtilities.timestampMicros()), cfm.ksName));
 }
 
 public static void announceKeyspaceDrop(String ksName) throws 
ConfigurationException
@@ -256,7 +256,14 @@ public class MigrationManager
 throw new ConfigurationException(String.format("Cannot drop non 
existing column family '%s' in keyspace '%s'.", cfName, ksName));
 
 logger.info(String.format("Drop ColumnFamily '%s/%s'", oldCfm.ksName, 
oldCfm.cfName));
-announce(oldCfm.dropFromSchema(FBUtilities.timestampMicros()));
+
announce(addSerializedKeyspace(oldCfm.dropFromSchema(FBUtilities.timestampMicros()),
 ksName));
+}
+
+// Include the serialized keyspace for when a target node missed the 
CREATE KEYSPACE migration (see #5631).
+private static RowMutation addSerializedKeyspace(RowMutation migration, 
String ksName)
+{
+migration.add(SystemTable.readSchemaRow(ksName).cf);
+return migration;
 }
 
 /**



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/MigrationManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2125165
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2125165
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2125165

Branch: refs/heads/cassandra-2.0
Commit: a212516548316c958b9ce034f39d13f1c7169357
Parents: 84103bb 5e40a3b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:25:28 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:25:28 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2125165/CHANGES.txt
--
diff --cc CHANGES.txt
index 2cacbaa,51dec14..83b03de
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -31,34 -15,24 +31,35 @@@ Merged from 1.2
   * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
   * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
   * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+  * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
  
  
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2125165/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --cc src/java/org/apache/cassandra/service/MigrationManager.java
index 7185813,9f6113c..0e36234
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@@ -236,7 -236,7 +236,7 @@@ public class MigrationManage
  oldCfm.validateCompatility(cfm);
  
  logger.info(String.format("Update ColumnFamily '%s/%s' From %s To 
%s", cfm.ksName, cfm.cfName, oldCfm, cfm));
- announce(oldCfm.toSchemaUpdate(cfm, 

[1/2] git commit: Avoid NPEs when receiving table changes for an unknown keyspace

2014-02-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 84103bbe2 -> a21251654


Avoid NPEs when receiving table changes for an unknown keyspace

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5631


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e40a3b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e40a3b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e40a3b7

Branch: refs/heads/cassandra-2.0
Commit: 5e40a3b7c120f430d73ab34db68b361c0313b2eb
Parents: c92b20b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:20:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:22:04 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ffda82c..51dec14 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,6 +15,7 @@
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+ * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java 
b/src/java/org/apache/cassandra/service/MigrationManager.java
index 584415d..9f6113c 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -210,7 +210,7 @@ public class MigrationManager
 throw new AlreadyExistsException(cfm.ksName, cfm.cfName);
 
 logger.info(String.format("Create new ColumnFamily: %s", cfm));
-announce(cfm.toSchema(FBUtilities.timestampMicros()));
+
announce(addSerializedKeyspace(cfm.toSchema(FBUtilities.timestampMicros()), 
cfm.ksName));
 }
 
 public static void announceKeyspaceUpdate(KSMetaData ksm) throws 
ConfigurationException
@@ -236,7 +236,7 @@ public class MigrationManager
 oldCfm.validateCompatility(cfm);
 
 logger.info(String.format("Update ColumnFamily '%s/%s' From %s To %s", 
cfm.ksName, cfm.cfName, oldCfm, cfm));
-announce(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros()));
+announce(addSerializedKeyspace(oldCfm.toSchemaUpdate(cfm, 
FBUtilities.timestampMicros()), cfm.ksName));
 }
 
 public static void announceKeyspaceDrop(String ksName) throws 
ConfigurationException
@@ -256,7 +256,14 @@ public class MigrationManager
 throw new ConfigurationException(String.format("Cannot drop non 
existing column family '%s' in keyspace '%s'.", cfName, ksName));
 
 logger.info(String.format("Drop ColumnFamily '%s/%s'", oldCfm.ksName, 
oldCfm.cfName));
-announce(oldCfm.dropFromSchema(FBUtilities.timestampMicros()));
+
announce(addSerializedKeyspace(oldCfm.dropFromSchema(FBUtilities.timestampMicros()),
 ksName));
+}
+
+// Include the serialized keyspace for when a target node missed the 
CREATE KEYSPACE migration (see #5631).
+private static RowMutation addSerializedKeyspace(RowMutation migration, 
String ksName)
+{
+migration.add(SystemTable.readSchemaRow(ksName).cf);
+return migration;
 }
 
 /**



[1/3] git commit: Avoid NPEs when receiving table changes for an unknown keyspace

2014-02-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 657e16006 -> ce7bc6a84


Avoid NPEs when receiving table changes for an unknown keyspace

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5631


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e40a3b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e40a3b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e40a3b7

Branch: refs/heads/cassandra-2.1
Commit: 5e40a3b7c120f430d73ab34db68b361c0313b2eb
Parents: c92b20b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:20:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:22:04 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ffda82c..51dec14 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,6 +15,7 @@
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+ * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java 
b/src/java/org/apache/cassandra/service/MigrationManager.java
index 584415d..9f6113c 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -210,7 +210,7 @@ public class MigrationManager
 throw new AlreadyExistsException(cfm.ksName, cfm.cfName);
 
 logger.info(String.format("Create new ColumnFamily: %s", cfm));
-announce(cfm.toSchema(FBUtilities.timestampMicros()));
+
announce(addSerializedKeyspace(cfm.toSchema(FBUtilities.timestampMicros()), 
cfm.ksName));
 }
 
 public static void announceKeyspaceUpdate(KSMetaData ksm) throws 
ConfigurationException
@@ -236,7 +236,7 @@ public class MigrationManager
 oldCfm.validateCompatility(cfm);
 
 logger.info(String.format("Update ColumnFamily '%s/%s' From %s To %s", 
cfm.ksName, cfm.cfName, oldCfm, cfm));
-announce(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros()));
+announce(addSerializedKeyspace(oldCfm.toSchemaUpdate(cfm, 
FBUtilities.timestampMicros()), cfm.ksName));
 }
 
 public static void announceKeyspaceDrop(String ksName) throws 
ConfigurationException
@@ -256,7 +256,14 @@ public class MigrationManager
 throw new ConfigurationException(String.format("Cannot drop non 
existing column family '%s' in keyspace '%s'.", cfName, ksName));
 
 logger.info(String.format("Drop ColumnFamily '%s/%s'", oldCfm.ksName, 
oldCfm.cfName));
-announce(oldCfm.dropFromSchema(FBUtilities.timestampMicros()));
+
announce(addSerializedKeyspace(oldCfm.dropFromSchema(FBUtilities.timestampMicros()),
 ksName));
+}
+
+// Include the serialized keyspace for when a target node missed the 
CREATE KEYSPACE migration (see #5631).
+private static RowMutation addSerializedKeyspace(RowMutation migration, 
String ksName)
+{
+migration.add(SystemTable.readSchemaRow(ksName).cf);
+return migration;
 }
 
 /**



[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-02-19 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce7bc6a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce7bc6a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce7bc6a8

Branch: refs/heads/cassandra-2.1
Commit: ce7bc6a847a448815b243f0a92db0a36eee01100
Parents: 657e160 a212516
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:26:21 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:26:54 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bc6a8/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bc6a8/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --cc src/java/org/apache/cassandra/service/MigrationManager.java
index ee2d178,0e36234..3f535a0
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@@ -208,14 -210,9 +208,14 @@@ public class MigrationManage
  throw new AlreadyExistsException(cfm.ksName, cfm.cfName);
  
  logger.info(String.format("Create new ColumnFamily: %s", cfm));
- announce(cfm.toSchema(FBUtilities.timestampMicros()));
+ 
announce(addSerializedKeyspace(cfm.toSchema(FBUtilities.timestampMicros()), 
cfm.ksName));
  }
  
 +public static void announceNewType(UserType newType)
 +{
 +announce(UTMetaData.toSchema(newType, FBUtilities.timestampMicros()));
 +}
 +
  public static void announceKeyspaceUpdate(KSMetaData ksm) throws 
ConfigurationException
  {
  ksm.validate();
@@@ -239,14 -236,9 +239,14 @@@
  oldCfm.validateCompatility(cfm);
  
  logger.info(String.format("Update ColumnFamily '%s/%s' From %s To 
%s", cfm.ksName, cfm.cfName, oldCfm, cfm));
- announce(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros(), 
fromThrift));
+ announce(addSerializedKeyspace(oldCfm.toSchemaUpdate(cfm, 
FBUtilities.timestampMicros(), fromThrift), cfm.ksName));
  }
  
 +public static void announceTypeUpdate(UserType updatedType)
 +{
 +announceNewType(updatedType);
 +}
 +
  public static void announceKeyspaceDrop(String ksName) throws 
ConfigurationException
  {
  KSMetaData oldKsm = Schema.instance.getKSMetaData(ksName);
@@@ -264,14 -256,16 +264,21 @@@
  throw new ConfigurationException(String.format("Cannot drop non 
existing column family '%s' in keyspace '%s'.", cfName, ksName));
  
  logger.info(String.format("Drop ColumnFamily '%s/%s'", oldCfm.ksName, 
oldCfm.cfName));
- announce(oldCfm.dropFromSchema(FBUtilities.timestampMicros()));
+ 
announce(addSerializedKeyspace(oldCfm.dropFromSchema(FBUtilities.timestampMicros()),
 ksName));
+ }
+ 
+ // Include the serialized keyspace for when a target node missed the 
CREATE KEYSPACE migration (see #5631).
 -private static RowMutation addSerializedKeyspace(RowMutation migration, 
String ksName)
++private static Mutation addSerializedKeyspace(Mutation migration, String 
ksName)
+ {
+ migration.add(SystemKeyspace.readSchemaRow(ksName).cf);
+ return migration;
  }
  
 +public static void announceTypeDrop(UserType droppedType)
 +{
 +announce(UTMetaData.dropFromSchema(droppedType, 
FBUtilities.timestampMicros()));
 +}
 +
  /**
   * actively announce a new version to active hosts via rpc
   * @param schema The schema mutation to be applied



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/MigrationManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2125165
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2125165
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2125165

Branch: refs/heads/cassandra-2.1
Commit: a212516548316c958b9ce034f39d13f1c7169357
Parents: 84103bb 5e40a3b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:25:28 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:25:28 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2125165/CHANGES.txt
--
diff --cc CHANGES.txt
index 2cacbaa,51dec14..83b03de
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -31,34 -15,24 +31,35 @@@ Merged from 1.2
   * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
   * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
   * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+  * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
  
  
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2125165/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --cc src/java/org/apache/cassandra/service/MigrationManager.java
index 7185813,9f6113c..0e36234
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@@ -236,7 -236,7 +236,7 @@@ public class MigrationManage
  oldCfm.validateCompatility(cfm);
  
  logger.info(String.format("Update ColumnFamily '%s/%s' From %s To 
%s", cfm.ksName, cfm.cfName, oldCfm, cfm));
- announce(oldCfm.toSchemaUpdate(cfm, 

[3/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-02-19 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce7bc6a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce7bc6a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce7bc6a8

Branch: refs/heads/trunk
Commit: ce7bc6a847a448815b243f0a92db0a36eee01100
Parents: 657e160 a212516
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:26:21 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:26:54 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bc6a8/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bc6a8/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --cc src/java/org/apache/cassandra/service/MigrationManager.java
index ee2d178,0e36234..3f535a0
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@@ -208,14 -210,9 +208,14 @@@ public class MigrationManage
  throw new AlreadyExistsException(cfm.ksName, cfm.cfName);
  
  logger.info(String.format("Create new ColumnFamily: %s", cfm));
- announce(cfm.toSchema(FBUtilities.timestampMicros()));
+ 
announce(addSerializedKeyspace(cfm.toSchema(FBUtilities.timestampMicros()), 
cfm.ksName));
  }
  
 +public static void announceNewType(UserType newType)
 +{
 +announce(UTMetaData.toSchema(newType, FBUtilities.timestampMicros()));
 +}
 +
  public static void announceKeyspaceUpdate(KSMetaData ksm) throws 
ConfigurationException
  {
  ksm.validate();
@@@ -239,14 -236,9 +239,14 @@@
  oldCfm.validateCompatility(cfm);
  
  logger.info(String.format("Update ColumnFamily '%s/%s' From %s To 
%s", cfm.ksName, cfm.cfName, oldCfm, cfm));
- announce(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros(), 
fromThrift));
+ announce(addSerializedKeyspace(oldCfm.toSchemaUpdate(cfm, 
FBUtilities.timestampMicros(), fromThrift), cfm.ksName));
  }
  
 +public static void announceTypeUpdate(UserType updatedType)
 +{
 +announceNewType(updatedType);
 +}
 +
  public static void announceKeyspaceDrop(String ksName) throws 
ConfigurationException
  {
  KSMetaData oldKsm = Schema.instance.getKSMetaData(ksName);
@@@ -264,14 -256,16 +264,21 @@@
  throw new ConfigurationException(String.format("Cannot drop non 
existing column family '%s' in keyspace '%s'.", cfName, ksName));
  
  logger.info(String.format("Drop ColumnFamily '%s/%s'", oldCfm.ksName, 
oldCfm.cfName));
- announce(oldCfm.dropFromSchema(FBUtilities.timestampMicros()));
+ 
announce(addSerializedKeyspace(oldCfm.dropFromSchema(FBUtilities.timestampMicros()),
 ksName));
+ }
+ 
+ // Include the serialized keyspace for when a target node missed the 
CREATE KEYSPACE migration (see #5631).
 -private static RowMutation addSerializedKeyspace(RowMutation migration, 
String ksName)
++private static Mutation addSerializedKeyspace(Mutation migration, String 
ksName)
+ {
+ migration.add(SystemKeyspace.readSchemaRow(ksName).cf);
+ return migration;
  }
  
 +public static void announceTypeDrop(UserType droppedType)
 +{
 +announce(UTMetaData.dropFromSchema(droppedType, 
FBUtilities.timestampMicros()));
 +}
 +
  /**
   * actively announce a new version to active hosts via rpc
   * @param schema The schema mutation to be applied



[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-19 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/MigrationManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2125165
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2125165
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2125165

Branch: refs/heads/trunk
Commit: a212516548316c958b9ce034f39d13f1c7169357
Parents: 84103bb 5e40a3b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:25:28 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:25:28 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2125165/CHANGES.txt
--
diff --cc CHANGES.txt
index 2cacbaa,51dec14..83b03de
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -31,34 -15,24 +31,35 @@@ Merged from 1.2
   * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
   * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
   * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+  * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
  
  
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data (CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction (CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2125165/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --cc src/java/org/apache/cassandra/service/MigrationManager.java
index 7185813,9f6113c..0e36234
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@@ -236,7 -236,7 +236,7 @@@ public class MigrationManager
  oldCfm.validateCompatility(cfm);
  
  logger.info(String.format("Update ColumnFamily '%s/%s' From %s To %s", cfm.ksName, cfm.cfName, oldCfm, cfm));
- announce(oldCfm.toSchemaUpdate(cfm, 

[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-02-19 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22ab0264
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22ab0264
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22ab0264

Branch: refs/heads/trunk
Commit: 22ab0264cc1a35fd8b20871c089f72f4453a758e
Parents: 420fd01 ce7bc6a
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:27:42 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:27:42 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--




[1/4] git commit: Avoid NPEs when receiving table changes for an unknown keyspace

2014-02-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 420fd0110 -> 22ab0264c


Avoid NPEs when receiving table changes for an unknown keyspace

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5631


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e40a3b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e40a3b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e40a3b7

Branch: refs/heads/trunk
Commit: 5e40a3b7c120f430d73ab34db68b361c0313b2eb
Parents: c92b20b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 19 22:20:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 19 22:22:04 2014 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java | 13 ++---
 2 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ffda82c..51dec14 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -15,6 +15,7 @@
  * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
  * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
  * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
+ * Avoid NPEs when receiving table changes for an unknown keyspace (CASSANDRA-5631)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e40a3b7/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java b/src/java/org/apache/cassandra/service/MigrationManager.java
index 584415d..9f6113c 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -210,7 +210,7 @@ public class MigrationManager
 throw new AlreadyExistsException(cfm.ksName, cfm.cfName);
 
 logger.info(String.format("Create new ColumnFamily: %s", cfm));
-announce(cfm.toSchema(FBUtilities.timestampMicros()));
+announce(addSerializedKeyspace(cfm.toSchema(FBUtilities.timestampMicros()), cfm.ksName));
 }
 
 public static void announceKeyspaceUpdate(KSMetaData ksm) throws ConfigurationException
@@ -236,7 +236,7 @@ public class MigrationManager
 oldCfm.validateCompatility(cfm);
 
 logger.info(String.format("Update ColumnFamily '%s/%s' From %s To %s", cfm.ksName, cfm.cfName, oldCfm, cfm));
-announce(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros()));
+announce(addSerializedKeyspace(oldCfm.toSchemaUpdate(cfm, FBUtilities.timestampMicros()), cfm.ksName));
 }
 
 public static void announceKeyspaceDrop(String ksName) throws ConfigurationException
@@ -256,7 +256,14 @@ public class MigrationManager
 throw new ConfigurationException(String.format("Cannot drop non existing column family '%s' in keyspace '%s'.", cfName, ksName));
 
 logger.info(String.format("Drop ColumnFamily '%s/%s'", oldCfm.ksName, oldCfm.cfName));
-announce(oldCfm.dropFromSchema(FBUtilities.timestampMicros()));
+announce(addSerializedKeyspace(oldCfm.dropFromSchema(FBUtilities.timestampMicros()), ksName));
+}
+
+// Include the serialized keyspace for when a target node missed the CREATE KEYSPACE migration (see #5631).
+private static RowMutation addSerializedKeyspace(RowMutation migration, String ksName)
+{
+migration.add(SystemTable.readSchemaRow(ksName).cf);
+return migration;
 }
 
 /**



[jira] [Updated] (CASSANDRA-6737) A batch statements on a single partition should not create a new CF object for each update

2014-02-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6737:
--

Reviewer: Benedict

[~benedict] to review

 A batch statements on a single partition should not create a new CF object 
 for each update
 --

 Key: CASSANDRA-6737
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6737
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6

 Attachments: 6737.txt


 BatchStatement creates a new ColumnFamily object (as well as a new 
 RowMutation object) for every update in the batch, even if all those updates 
 are actually on the same partition. This is particularly inefficient when 
 bulk-loading data into a single partition (which is not all that uncommon).
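
 To make the inefficiency concrete, here is a minimal, self-contained sketch of the 
 intended direction (hypothetical names and types, not the attached 6737.txt patch): 
 updates targeting the same partition key are accumulated into one shared per-partition 
 buffer instead of a fresh object per statement, so a 1000-statement single-partition 
 batch allocates one buffer rather than 1000.
 {code}
 import java.nio.ByteBuffer;
 import java.util.AbstractMap;
 import java.util.ArrayList;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;

 // Hypothetical sketch: one buffer per partition key for the whole batch,
 // standing in for the single RowMutation/ColumnFamily the ticket asks for.
 public final class BatchGroupingSketch
 {
     // Stand-in for the per-partition accumulator (invented for illustration).
     public static final class PartitionBuffer
     {
         private final List<String> updates = new ArrayList<String>();
         void add(String update) { updates.add(update); }
         public List<String> updates() { return updates; }
     }

     // Each entry is (partition key, serialized update). All entries sharing a
     // key end up in the same PartitionBuffer.
     public static Map<ByteBuffer, PartitionBuffer> groupByPartition(List<Map.Entry<ByteBuffer, String>> batch)
     {
         Map<ByteBuffer, PartitionBuffer> byPartition = new LinkedHashMap<ByteBuffer, PartitionBuffer>();
         for (Map.Entry<ByteBuffer, String> entry : batch)
         {
             PartitionBuffer buffer = byPartition.get(entry.getKey());
             if (buffer == null)
             {
                 buffer = new PartitionBuffer();
                 byPartition.put(entry.getKey(), buffer);
             }
             buffer.add(entry.getValue());
         }
         return byPartition;
     }

     public static void main(String[] args)
     {
         ByteBuffer key = ByteBuffer.wrap("k0".getBytes());
         List<Map.Entry<ByteBuffer, String>> batch = new ArrayList<Map.Entry<ByteBuffer, String>>();
         for (int i = 0; i < 1000; i++)
             batch.add(new AbstractMap.SimpleEntry<ByteBuffer, String>(key, "v" + i));
         // One partition key -> one buffer holding all 1000 updates.
         System.out.println(groupByPartition(batch).size()); // prints 1
     }
 }
 {code}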



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6737) A batch statements on a single partition should not create a new CF object for each update

2014-02-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905931#comment-13905931
 ] 

Jonathan Ellis commented on CASSANDRA-6737:
---

FTR I'm at the point where other things being equal I'd prefer to put 
optimizations in 2.1.

 A batch statements on a single partition should not create a new CF object 
 for each update
 --

 Key: CASSANDRA-6737
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6737
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6

 Attachments: 6737.txt


 BatchStatement creates a new ColumnFamily object (as well as a new 
 RowMutation object) for every update in the batch, even if all those updates 
 are actually on the same partition. This is particularly inefficient when 
 bulk-loading data into a single partition (which is not all that uncommon).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6561) Static columns in CQL3

2014-02-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905963#comment-13905963
 ] 

Aleksey Yeschenko commented on CASSANDRA-6561:
--

LGTM, +1

 Static columns in CQL3
 --

 Key: CASSANDRA-6561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6561
 Project: Cassandra
  Issue Type: New Feature
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6


 I'd like to suggest the following idea for adding static columns to CQL3.  
 I'll note that the basic idea has been suggested by jhalliday on irc, but the 
 rest of the details are mine and I should be blamed for anything stupid in 
 what follows.
 Let me start with a rationale: there are 2 main families of CF that have been 
 historically used in Thrift: static ones and dynamic ones. CQL3 handles both 
 families through the presence or absence of clustering columns. There are however 
 some cases where mixing both behaviors has its use. I like to think of those 
 use cases as 3 broad categories:
 # to denormalize small amounts of not-entirely-static data in otherwise 
 static entities. That's, say, tags for a product or custom properties in a 
 user profile. This is why we've added CQL3 collections. Importantly, this is 
 the *only* use case for which collections are meant (which doesn't diminish 
 their usefulness imo, and I wouldn't disagree that we've maybe not 
 communicated this too well).
 # to optimize fetching both a static entity and related dynamic ones. Say you 
 have blog posts, and each post has associated comments (chronologically 
 ordered). *And* say that a very common query is fetch a post and its 50 last 
 comments. In that case, it *might* be beneficial to store a blog post 
 (static entity) in the same underlying CF as its comments for performance 
 reasons, so that fetch a post and its 50 last comments is just one slice 
 internally.
 # you want to CAS rows of a dynamic partition based on some partition 
 condition. This is the same use case that CASSANDRA-5633 exists for.
 As said above, 1) is already covered by collections, but 2) and 3) are not 
 (and I strongly believe collections are not the right fit, API-wise, for those).
 Also, note that I don't want to underestimate the usefulness of 2). In most 
 cases, using a separate table for the blog posts and the comments is The 
 Right Solution, and trying to do 2) is premature optimisation. Yet, when used 
 properly, that kind of optimisation can make a difference, so I think having 
 a relatively native solution for it in CQL3 could make sense.
 Regarding 3), though CASSANDRA-5633 would provide one solution for it, I have 
 the feeling that static columns are actually a more natural approach (in terms 
 of API). That's arguably more of a personal opinion/feeling though.
 So long story short, CQL3 lacks a way to mix some static and some dynamic 
 rows in the same partition of the same CQL3 table, and I think such a tool 
 could have its use.
 The proposal is thus to allow static columns. Static columns would only 
 make sense in tables with clustering columns (the dynamic ones). A static 
 column value would be static to the partition (all rows of the partition 
 would share the value of such a column). The syntax would just be:
 {noformat}
 CREATE TABLE t (
   k text,
   s text static,
   i int,
   v text,
   PRIMARY KEY (k, i)
 )
 {noformat}
 then you'd get:
 {noformat}
 INSERT INTO t(k, s, i, v) VALUES ('k0', 'I''m shared',       0, 'foo');
 INSERT INTO t(k, s, i, v) VALUES ('k0', 'I''m still shared', 1, 'bar');
 SELECT * FROM t;
  k  |        s         | i |  v
 ----------------------------------
  k0 | I'm still shared | 0 | foo
  k0 | I'm still shared | 1 | bar
 {noformat}
 There would be a few semantic details to decide on regarding deletions, ttl, 
 etc. but let's see if we agree it's a good idea first before ironing those 
 out.
 One last point is the implementation. Though I do think this idea has merits, 
 it's definitely not useful enough to justify rewriting the storage engine 
 for it. But I think we can support this relatively easily (emphasis on 
 "relatively" :)), which is probably the main reason why I like the approach.
 Namely, internally, we can store static columns as cells whose clustering 
 column values are empty. So in terms of cells, the partition of my example 
 would look like:
 {noformat}
 k0 : [
   (:s -> I'm still shared), // the static column
   (0: -> )  // row marker
   (0:v -> foo)
   (1: -> )  // row marker
   (1:v -> bar)
 ]
 {noformat}
 Of course, using empty values for the clustering columns doesn't quite work 
 because it could conflict with the user using empty clustering columns. But 
 in the CompositeType encoding we have the end-of-component byte that we could 
 reuse by 
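
 For illustration only: if the proposal were adopted as described, use case 3 could 
 look roughly like this from a client (DataStax Java driver assumed; the keyspace, 
 table, and values are placeholders, and the static-column semantics are of course 
 hypothetical until the feature exists).
 {code}
 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.ResultSet;
 import com.datastax.driver.core.Session;

 // Illustrative only: a conditional update guarded by the partition-wide static
 // column 's', which would effectively serialize CAS operations across all rows
 // of partition 'k0' (use case 3 above).
 public final class StaticColumnCasSketch
 {
     public static void main(String[] args)
     {
         Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
         try
         {
             // Assumes the example table t lives in keyspace 'ks'.
             Session session = cluster.connect("ks");
             ResultSet rs = session.execute(
                 "UPDATE t SET v = 'updated' WHERE k = 'k0' AND i = 0 IF s = 'I''m still shared'");
             // Conditional statements report whether the condition held via the [applied] column.
             System.out.println("applied: " + rs.one().getBool("[applied]"));
         }
         finally
         {
             cluster.close();
         }
     }
 }
 {code}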

[jira] [Commented] (CASSANDRA-6704) Create wide row scanners

2014-02-19 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13905973#comment-13905973
 ] 

Tupshin Harper commented on CASSANDRA-6704:
---

I'm all in favor of this. I'd love to see a UDTF equivalent in Cassandra and 
CQL that could allow us to do a lot of deep mucking with server-side processing 
in a pluggable way. My suggestion and request would be that you practically and 
conceptually isolate that feature (a scanner/UDTF interface) from the other 
aspects of this ticket. With a sane interface, I expect there would be minimal 
objections. I know that this, by itself, doesn't meet all of your objectives, 
but it moves us in the right direction.

 Create wide row scanners
 

 Key: CASSANDRA-6704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6704
 Project: Cassandra
  Issue Type: New Feature
Reporter: Edward Capriolo
Assignee: Edward Capriolo

 The BigTable white paper demonstrates the use of scanners to iterate over 
 rows and columns. 
 http://static.googleusercontent.com/media/research.google.com/en/us/archive/bigtable-osdi06.pdf
 Because Cassandra does not have a primary sorting on row keys, scanning over 
 ranges of row keys is less useful. 
 However, we can use the scanner concept to operate on wide rows. For example, 
 a user often wishes to do some custom processing inside a row and does 
 not wish to carry the data across the network to do this processing. 
 I have already implemented thrift methods to compile dynamic groovy code into 
 Filters as well as some code that uses a Filter to page through and process 
 data on the server side.
 https://github.com/edwardcapriolo/cassandra/compare/apache:trunk...trunk
 The following is a working code snippet.
 {code}
 @Test
 public void test_scanner() throws Exception
 {
   ColumnParent cp = new ColumnParent();
   cp.setColumn_family("Standard1");
   ByteBuffer key = ByteBuffer.wrap("rscannerkey".getBytes());
   for (char a = 'a'; a < 'g'; a++){
     Column c1 = new Column();
     c1.setName((a + "").getBytes());
     c1.setValue(new byte [0]);
     c1.setTimestamp(System.nanoTime());
     server.insert(key, cp, c1, ConsistencyLevel.ONE);
   }

   FilterDesc d = new FilterDesc();
   d.setSpec("GROOVY_CLASS_LOADER");
   d.setName("limit3");
   d.setCode("import org.apache.cassandra.dht.* \n" +
       "import org.apache.cassandra.thrift.* \n" +
       "public class Limit3 implements SFilter { \n" +
       "  public FilterReturn filter(ColumnOrSuperColumn col, List<ColumnOrSuperColumn> filtered) {\n" +
       "    filtered.add(col);\n" +
       "    return filtered.size() < 3 ? FilterReturn.FILTER_MORE : FilterReturn.FILTER_DONE;\n" +
       "  } \n" +
       "}\n");
   server.create_filter(d);

   ScannerResult res = server.create_scanner("Standard1", "limit3", key,
       ByteBuffer.wrap("a".getBytes()));
   Assert.assertEquals(3, res.results.size());
 }
 {code}
 I am going to be working on this code over the next few weeks but I wanted to 
 get the concept out early so the design can see some criticism.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6735) Exceptions during memtable flushes on shutdown hook prevent process shutdown

2014-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6735:
-

 Reviewer: Aleksey Yeschenko
Reproduced In: 2.0.5, 1.2.15  (was: 1.2.15, 2.0.5)

 Exceptions during memtable flushes on shutdown hook prevent process shutdown
 

 Key: CASSANDRA-6735
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6735
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
Priority: Minor
 Attachments: CASSANDRA-6735-1.2.patch, CASSANDRA-6735.patch


 If an exception occurs while flushing memtables during the shutdown hook, the 
 process is left hanging due to non-daemon threads still running.
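
 The failure mode is not Cassandra-specific; the following small, self-contained sketch 
 (all names invented) shows why an uncaught exception inside a shutdown hook leaves a 
 process with non-daemon threads hanging, and how catching it lets the cleanup that 
 follows still run. Run it and hit Ctrl+C: with the catch the process exits, without 
 it the process hangs.
 {code}
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 // Illustration (invented names): an exception thrown while "flushing" inside a
 // shutdown hook skips the cleanup below it, so the non-daemon worker thread is
 // never stopped and the JVM cannot exit.
 public final class ShutdownHookSketch
 {
     public static void main(String[] args)
     {
         final ExecutorService worker = Executors.newSingleThreadExecutor(); // non-daemon thread
         worker.submit(new Runnable()
         {
             public void run()
             {
                 try { while (true) Thread.sleep(1000); } catch (InterruptedException e) { /* exit */ }
             }
         });

         Runtime.getRuntime().addShutdownHook(new Thread()
         {
             public void run()
             {
                 try
                 {
                     flushMemtables(); // stand-in for the flush that can fail
                 }
                 catch (RuntimeException e)
                 {
                     System.err.println("flush failed during shutdown: " + e);
                 }
                 worker.shutdownNow(); // without the catch above, this line is never reached
             }
         });
     }

     private static void flushMemtables()
     {
         throw new RuntimeException("simulated flush failure");
     }
 }
 {code}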



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6722) cross-partition ordering should have warning or be disallowed when paging

2014-02-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6722:
-

Reviewer: Aleksey Yeschenko

 cross-partition ordering should have warning or be disallowed when paging
 -

 Key: CASSANDRA-6722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6722
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.6

 Attachments: 6722.txt


 consider this schema/data/query:
 {noformat}
 CREATE TABLE paging_test (
 id int,
 value text,
 PRIMARY KEY (id, value)
 ) WITH CLUSTERING ORDER BY (value ASC)
 |id|value|
 |1 |a|
 |2 |b|
 |1 |c|
 |2 |d| 
 |1 |e| 
 |2 |f| 
 |1 |g| 
 |2 |h|
 |1 |i|
 |2 |j|
 select * from paging_test where id in (1,2) order by value asc;
 {noformat}
 When paging the above query, I get the sorted results from id=1 first, then 
 the sorted results from id=2 after that. I was testing this because I was 
 curious whether the paging system could somehow globally sort the results, but it 
 makes sense that we can't do that, since that would require all results to be 
 collated up front.
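
 To make the last point concrete: a global ORDER BY across partitions is a merge over 
 every partition's already-ordered rows, so the coordinator could not hand back even 
 the first globally sorted page without pulling from all partitions first. A generic 
 sketch of that merge (plain Java, names invented):
 {code}
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Comparator;
 import java.util.List;
 import java.util.PriorityQueue;

 // Illustration only: merging per-partition, individually ordered result streams
 // into one globally ordered sequence requires looking at every input stream,
 // which is why it can't be produced lazily while paging partition by partition.
 public final class CrossPartitionMergeSketch
 {
     private static final class Cursor
     {
         final List<String> rows;
         int pos;
         Cursor(List<String> rows) { this.rows = rows; }
         String head() { return rows.get(pos); }
         boolean done() { return pos >= rows.size(); }
     }

     public static List<String> mergeSorted(List<List<String>> perPartition)
     {
         PriorityQueue<Cursor> heap = new PriorityQueue<Cursor>(
                 Math.max(1, perPartition.size()),
                 new Comparator<Cursor>()
                 {
                     public int compare(Cursor a, Cursor b) { return a.head().compareTo(b.head()); }
                 });
         for (List<String> rows : perPartition)
             if (!rows.isEmpty())
                 heap.add(new Cursor(rows));

         List<String> merged = new ArrayList<String>();
         while (!heap.isEmpty())
         {
             Cursor c = heap.poll();
             merged.add(c.head());
             c.pos++;
             if (!c.done())
                 heap.add(c);
         }
         return merged;
     }

     public static void main(String[] args)
     {
         // Matches the example data: partition 1 -> a,c,e,g,i ; partition 2 -> b,d,f,h,j
         List<String> p1 = Arrays.asList("a", "c", "e", "g", "i");
         List<String> p2 = Arrays.asList("b", "d", "f", "h", "j");
         System.out.println(mergeSorted(Arrays.asList(p1, p2))); // [a, b, c, ..., j]
     }
 }
 {code}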



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

