[jira] [Resolved] (CASSANDRA-6189) Usability: Unable to start cassandra server

2013-10-13 Thread Deepak Kumar V (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Kumar V resolved CASSANDRA-6189.
---

Resolution: Not A Problem

Worked with Java 1.7:
$ echo $JAVA_HOME
/usr/lib/jvm/java-7-oracle
$ java -version
java version "1.7.0_40"
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
deepakkv@ubuntu:~/ebay/softwares/apache-cassandra-2.0.1$ 


 Usability: Unable to start cassandra server 
 

 Key: CASSANDRA-6189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6189
 Project: Cassandra
  Issue Type: Bug
 Environment: apache-cassandra-2.0.1
 java -version
 java version "1.6.0_45"
 Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
 Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)
 Linux ubuntu 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 
 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Deepak Kumar V

 I was not able to run the Cassandra server in the environment described above.
 Exception:
  bin/cassandra -f 
 xss =  -ea -javaagent:bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms2932M -Xmx2932M -Xmn600M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 Exception in thread "main" java.lang.UnsupportedClassVersionError: 
 org/apache/cassandra/service/CassandraDaemon : Unsupported major.minor 
 version 51.0
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
 Could not find the main class: org.apache.cassandra.service.CassandraDaemon.  
 Program will exit.
 I was not able to find any solution for this in the user mail archives.  
 The user documentation (Getting Started) could be updated with an FAQ entry 
 covering this common error. 
 If this is not the right place, please redirect me. 
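 For reference, major.minor 51.0 is the Java 7 class-file format; Cassandra 
 2.0 is compiled for Java 7, so a Java 6 VM rejects its classes with exactly 
 this error (the resolution above confirms it starts under 1.7.0_40). A 
 minimal diagnostic sketch (a hypothetical helper, not part of Cassandra) 
 that prints the version a class was compiled for:
 {code}
 import java.io.DataInputStream;
 import java.io.FileInputStream;
 import java.io.IOException;

 // Hypothetical helper: 50 = Java 6, 51 = Java 7, so CassandraDaemon.class
 // from 2.0.x prints 51 and cannot be loaded by a 1.6 VM.
 public class ClassVersion
 {
     public static void main(String[] args) throws IOException
     {
         DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
         try
         {
             in.readInt();                        // magic 0xCAFEBABE
             int minor = in.readUnsignedShort();
             int major = in.readUnsignedShort();
             System.out.println(args[0] + ": major.minor = " + major + "." + minor);
         }
         finally
         {
             in.close();
         }
     }
 }
 {code}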



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-4366) add UseCondCardMark XX jvm settings on jdk 1.7

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4366.
---

Resolution: Fixed

This issue was to add the option over a year ago; if the option needs to be 
removed, please open a new ticket.

That said, I couldn't find anything in the change log for removing 
UseCondCardMark.

 add UseCondCardMark XX jvm settings on jdk 1.7
 --

 Key: CASSANDRA-4366
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4366
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Affects Versions: 1.2.0 beta 1
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.2.2

 Attachments: 4366.txt


 found by jbellis
 adding the JVM option UseCondCardMark, as described at
 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7029167,
 for better lock handling, especially on HotSpot with multicore processors.
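 To verify the flag actually took effect on a JDK 7 node, one option (a 
 sketch only; run inside the JVM in question, or use jinfo on a live pid) is 
 to dump the VM's input arguments:
 {code}
 import java.lang.management.ManagementFactory;

 // Sketch: print this JVM's startup flags; -XX:+UseCondCardMark should be
 // listed once cassandra-env.sh adds it on JDK 1.7.
 public class JvmFlags
 {
     public static void main(String[] args)
     {
         for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments())
             System.out.println(arg);
     }
 }
 {code}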



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[1/3] git commit: NEWS

2013-10-13 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 61dc59086 -> 67bbdba82
  refs/heads/trunk 24e9d6310 -> ca7ba1418


NEWS


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67bbdba8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67bbdba8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67bbdba8

Branch: refs/heads/cassandra-2.0
Commit: 67bbdba82267e759773fbec232ff97e2161a892b
Parents: 61dc590
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Sun Oct 13 05:47:29 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 05:47:29 2013 -0500

--
 NEWS.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67bbdba8/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 817b170..da52382 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -20,7 +20,7 @@ using the provided 'sstableupgrade' tool.
 New features
 
 - Speculative retry defaults to 99th percentile
-  (See blog post at TODO)
+  (See blog post at 
http://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2)
 - Configurable metrics reporting
   (see conf/metrics-reporter-config-sample.yaml)
 - Compaction history and stats are now saved to system keyspace
@@ -43,6 +43,7 @@ Upgrading
 
 Upgrading
 ---------
+- Java 7 is now *required*!
 - Upgrading is ONLY supported from Cassandra 1.2.9 or later. This
   goes for sstable compatibility as well as network.  When
   upgrading from an earlier release, upgrade to 1.2.9 first and



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-13 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ca7ba141
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ca7ba141
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ca7ba141

Branch: refs/heads/trunk
Commit: ca7ba14188f2132e6bf34551e4dca6561049989f
Parents: 24e9d63 67bbdba
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Sun Oct 13 05:47:38 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 05:47:38 2013 -0500

--
 NEWS.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7ba141/NEWS.txt
--



[2/3] git commit: NEWS

2013-10-13 Thread jbellis
NEWS


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67bbdba8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67bbdba8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67bbdba8

Branch: refs/heads/trunk
Commit: 67bbdba82267e759773fbec232ff97e2161a892b
Parents: 61dc590
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Sun Oct 13 05:47:29 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 05:47:29 2013 -0500

--
 NEWS.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67bbdba8/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 817b170..da52382 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -20,7 +20,7 @@ using the provided 'sstableupgrade' tool.
 New features
 
 - Speculative retry defaults to 99th percentile
-  (See blog post at TODO)
+  (See blog post at 
http://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2)
 - Configurable metrics reporting
   (see conf/metrics-reporter-config-sample.yaml)
 - Compaction history and stats are now saved to system keyspace
@@ -43,6 +43,7 @@ Upgrading
 
 Upgrading
 ---------
+- Java 7 is now *required*!
 - Upgrading is ONLY supported from Cassandra 1.2.9 or later. This
   goes for sstable compatibility as well as network.  When
   upgrading from an earlier release, upgrade to 1.2.9 first and



[jira] [Updated] (CASSANDRA-5818) Duplicated error messages on directory creation error at startup

2013-10-13 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke updated CASSANDRA-5818:
--

Attachment: patch.diff

 Duplicated error messages on directory creation error at startup
 

 Key: CASSANDRA-5818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5818
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: koray sariteke
Priority: Trivial
 Fix For: 2.1

 Attachments: patch.diff, trunk-5818.patch


 When I start Cassandra without the appropriate OS access rights to the 
 default Cassandra directories, I get a flood of {{ERROR}} messages at 
 startup, whereas one per directory would be more appropriate. See below:
 {code}
 ERROR 13:37:39,792 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,797 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,799 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,800 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,804 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,806 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,807 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,808 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,813 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,815 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,816 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,820 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,823 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,825 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies 

[jira] [Updated] (CASSANDRA-5818) Duplicated error messages on directory creation error at startup

2013-10-13 Thread koray sariteke (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

koray sariteke updated CASSANDRA-5818:
--

Attachment: (was: patch.diff)

 Duplicated error messages on directory creation error at startup
 

 Key: CASSANDRA-5818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5818
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: koray sariteke
Priority: Trivial
 Fix For: 2.1

 Attachments: patch.diff, trunk-5818.patch


 When I start Cassandra without the appropriate OS access rights to the 
 default Cassandra directories, I get a flood of {{ERROR}} messages at 
 startup, whereas one per directory would be more appropriate. See below:
 {code}
 ERROR 13:37:39,792 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,797 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,799 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,800 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,804 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,806 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,807 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,808 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,813 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,815 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,816 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,820 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,823 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,825 Failed to create 
 

[jira] [Commented] (CASSANDRA-5818) Duplicated error messages on directory creation error at startup

2013-10-13 Thread koray sariteke (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793659#comment-13793659
 ] 

koray sariteke commented on CASSANDRA-5818:
---

Attached a new patch as patch.diff; am I on the wrong track?
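For context, the shape of a de-duplicating fix (a sketch only, with assumed 
names; the actual change is in patch.diff) is to remember which directories 
have already been reported:
{code}
import java.io.File;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch, not the attached patch: log each uncreatable directory once
// instead of flooding the log with one ERROR per caller.
public final class DirectoryCreator
{
    private static final Set<String> reported =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    public static boolean tryCreate(File dir)
    {
        if (dir.exists() || dir.mkdirs())
            return true;
        if (reported.add(dir.getAbsolutePath()))   // first failure for this path only
            System.err.println("Failed to create " + dir + " directory");
        return false;
    }
}
{code}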

 Duplicated error messages on directory creation error at startup
 

 Key: CASSANDRA-5818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5818
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: koray sariteke
Priority: Trivial
 Fix For: 2.1

 Attachments: patch.diff, trunk-5818.patch


 When I start Cassandra without the appropriate OS access rights to the 
 default Cassandra directories, I get a flood of {{ERROR}} messages at 
 startup, whereas one per directory would be more appropriate. See below:
 {code}
 ERROR 13:37:39,792 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,797 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,799 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,800 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,804 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,806 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,807 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,808 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,813 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,815 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,816 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,820 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,823 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 

[jira] [Assigned] (CASSANDRA-6177) remove all sleeps in the dtests

2013-10-13 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-6177:
---

Assignee: Daniel Meyer  (was: Ryan McGuire)

 remove all sleeps in the dtests
 ---

 Key: CASSANDRA-6177
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6177
 Project: Cassandra
  Issue Type: Test
Reporter: Brandon Williams
Assignee: Daniel Meyer

 The dtests use a ton of sleep calls for various things, most of which are 
 guesses about whether Cassandra has finished doing something.  Guessing is 
 problematic and shouldn't be necessary -- a prime example of this is creating 
 a ks or cf.  When done over cql, we sleep and hope the change has propagated, 
 but when done over thrift we actually check for schema agreement.  We should 
 be able to eliminate the sleeps and reliably detect programmatically when 
 it's time for the next step, as sketched below.
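 The replacement pattern is a bounded poll rather than a fixed sleep. A sketch 
 in Java (the dtests themselves are Python; the check callback, e.g. schema 
 agreement, is assumed):
 {code}
 // Sketch of the polling pattern: retry a cheap check until it succeeds or a
 // deadline passes, instead of a blind, guessed sleep.
 public final class WaitFor
 {
     public interface Check
     {
         boolean ok() throws Exception;
     }

     public static void waitUntil(Check check, long timeoutMillis) throws Exception
     {
         long deadline = System.currentTimeMillis() + timeoutMillis;
         while (System.currentTimeMillis() < deadline)
         {
             if (check.ok())        // e.g. "all nodes report a single schema version"
                 return;
             Thread.sleep(100);     // short poll interval
         }
         throw new AssertionError("condition not met within " + timeoutMillis + "ms");
     }
 }
 {code}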



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[3/9] git commit: make it clear we are logging two stats for MS

2013-10-13 Thread jbellis
make it clear we are logging two stats for MS


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc2dd525
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc2dd525
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc2dd525

Branch: refs/heads/trunk
Commit: dc2dd525eb69c001d703b6bb2bd673a0d34db45a
Parents: 5fab127
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Thu Oct 10 19:54:31 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:07:23 2013 -0500

--
 src/java/org/apache/cassandra/utils/StatusLogger.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc2dd525/src/java/org/apache/cassandra/utils/StatusLogger.java
--
diff --git a/src/java/org/apache/cassandra/utils/StatusLogger.java 
b/src/java/org/apache/cassandra/utils/StatusLogger.java
index 15c4811..dbf56d4 100644
--- a/src/java/org/apache/cassandra/utils/StatusLogger.java
+++ b/src/java/org/apache/cassandra/utils/StatusLogger.java
@@ -93,7 +93,7 @@ public class StatusLogger
 pendingResponses += n;
 }
 logger.info(String.format("%-25s%10s%10s",
-  "MessagingService", "n/a", pendingCommands + 
"," + pendingResponses));
+  "MessagingService", "n/a", pendingCommands + 
"/" + pendingResponses));
 
 // Global key/row cache information
 AutoSavingCache<KeyCacheKey, RowIndexEntry> keyCache = 
CacheService.instance.keyCache;
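
For context, a standalone illustration (sample counts assumed, not from the
patch) of what the one-character change prints:

// Illustration only: "/" makes it obvious the single MessagingService line
// carries two stats (pending commands / pending responses).
public class FormatDemo
{
    public static void main(String[] args)
    {
        int pendingCommands = 3, pendingResponses = 12;   // assumed sample values
        System.out.println(String.format("%-25s%10s%10s",
                                         "MessagingService", "n/a",
                                         pendingCommands + "/" + pendingResponses));
    }
}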



[9/9] git commit: merge from 2.0

2013-10-13 Thread jbellis
merge from 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2531424c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2531424c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2531424c

Branch: refs/heads/trunk
Commit: 2531424c0050196512b3f8f785f9041eab1c2ef3
Parents: ca7ba14 53b2d9d
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Sun Oct 13 11:13:55 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:13:55 2013 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/config/CFMetaData.java  | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java   |  8 ++--
 .../AbstractSimplePerColumnSecondaryIndex.java   | 19 ---
 .../org/apache/cassandra/utils/StatusLogger.java |  2 +-
 5 files changed, 12 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2531424c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2531424c/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2531424c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2531424c/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--



[6/9] git commit: Avoid using row cache on 2i CFs patch by Sam Tunnicliffe and jbellis for CASSANDRA-5732

2013-10-13 Thread jbellis
Avoid using row cache on 2i CFs
patch by Sam Tunnicliffe and jbellis for CASSANDRA-5732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7290abd1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7290abd1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7290abd1

Branch: refs/heads/trunk
Commit: 7290abd198d6c63f1109094cf237d9c84f617f7d
Parents: dc2dd52
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Thu Oct 10 14:45:46 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:09:25 2013 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/config/CFMetaData.java  | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java   |  8 ++--
 .../AbstractSimplePerColumnSecondaryIndex.java   | 19 ---
 4 files changed, 11 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index afb1464..7f43031 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * Handle JMX notification failure for repair (CASSANDRA-6097)
  * (Hadoop) Fetch no more than 128 splits in parallel (CASSANDRA-6169)
  * stress: add username/password authentication support (CASSANDRA-6068)
+ * Fix indexed queries with row cache enabled on parent table (CASSANDRA-5732)
 
 
 1.2.10

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 4355737..fcbd012 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -309,7 +309,13 @@ public final class CFMetaData
 
 CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc, UUID id)
 {
-// Final fields must be set in constructor
+assert keyspace != null;
+assert name != null;
+assert type != null;
+assert id != null;
+// (subcc may be null for non-supercolumns)
+// (comp may also be null for custom indexes, which is kind of broken 
if you ask me)
+
 ksName = keyspace;
 cfName = name;
 cfType = type;
@@ -387,7 +393,7 @@ public final class CFMetaData
 public static CFMetaData newIndexMetadata(CFMetaData parent, 
ColumnDefinition info, AbstractType<?> columnComparator)
 {
 // Depends on parent's cache setting, turn on its index CF's cache.
-// Here, only key cache is enabled, but later (in KeysIndex) row cache 
will be turned on depending on cardinality.
+// Row caching is never enabled; see CASSANDRA-5732
 Caching indexCaching = parent.getCaching() == Caching.ALL || 
parent.getCaching() == Caching.KEYS_ONLY
  ? Caching.KEYS_ONLY
  : Caching.NONE;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index a7e8605..39359b7 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1190,12 +1190,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 if (isRowCacheEnabled())
 {
-UUID cfId = Schema.instance.getId(table.name, columnFamily);
-if (cfId == null)
-{
-logger.trace("no id found for {}.{}", table.name, 
columnFamily);
-return null;
-}
+assert !isIndex(); // CASSANDRA-5732
+UUID cfId = metadata.cfId;
 
 ColumnFamily cached = getThroughCache(cfId, filter);
 if (cached == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
index 2ff2d27..caa7e20 100644
--- 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
+++ 

[1/9] git commit: make it clear we are logging two stats for MS

2013-10-13 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 5fab1276c -> 7290abd19
  refs/heads/cassandra-2.0 67bbdba82 -> 53b2d9d55
  refs/heads/trunk ca7ba1418 -> 2531424c0


make it clear we are logging two stats for MS


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc2dd525
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc2dd525
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc2dd525

Branch: refs/heads/cassandra-1.2
Commit: dc2dd525eb69c001d703b6bb2bd673a0d34db45a
Parents: 5fab127
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Thu Oct 10 19:54:31 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:07:23 2013 -0500

--
 src/java/org/apache/cassandra/utils/StatusLogger.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc2dd525/src/java/org/apache/cassandra/utils/StatusLogger.java
--
diff --git a/src/java/org/apache/cassandra/utils/StatusLogger.java 
b/src/java/org/apache/cassandra/utils/StatusLogger.java
index 15c4811..dbf56d4 100644
--- a/src/java/org/apache/cassandra/utils/StatusLogger.java
+++ b/src/java/org/apache/cassandra/utils/StatusLogger.java
@@ -93,7 +93,7 @@ public class StatusLogger
 pendingResponses += n;
 }
 logger.info(String.format("%-25s%10s%10s",
-  "MessagingService", "n/a", pendingCommands + 
"," + pendingResponses));
+  "MessagingService", "n/a", pendingCommands + 
"/" + pendingResponses));
 
 // Global key/row cache information
 AutoSavingCache<KeyCacheKey, RowIndexEntry> keyCache = 
CacheService.instance.keyCache;



[5/9] git commit: Avoid using row cache on 2i CFs patch by Sam Tunnicliffe and jbellis for CASSANDRA-5732

2013-10-13 Thread jbellis
Avoid using row cache on 2i CFs
patch by Sam Tunnicliffe and jbellis for CASSANDRA-5732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7290abd1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7290abd1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7290abd1

Branch: refs/heads/cassandra-2.0
Commit: 7290abd198d6c63f1109094cf237d9c84f617f7d
Parents: dc2dd52
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Thu Oct 10 14:45:46 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:09:25 2013 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/config/CFMetaData.java  | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java   |  8 ++--
 .../AbstractSimplePerColumnSecondaryIndex.java   | 19 ---
 4 files changed, 11 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index afb1464..7f43031 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * Handle JMX notification failure for repair (CASSANDRA-6097)
  * (Hadoop) Fetch no more than 128 splits in parallel (CASSANDRA-6169)
  * stress: add username/password authentication support (CASSANDRA-6068)
+ * Fix indexed queries with row cache enabled on parent table (CASSANDRA-5732)
 
 
 1.2.10

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 4355737..fcbd012 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -309,7 +309,13 @@ public final class CFMetaData
 
 CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc, UUID id)
 {
-// Final fields must be set in constructor
+assert keyspace != null;
+assert name != null;
+assert type != null;
+assert id != null;
+// (subcc may be null for non-supercolumns)
+// (comp may also be null for custom indexes, which is kind of broken 
if you ask me)
+
 ksName = keyspace;
 cfName = name;
 cfType = type;
@@ -387,7 +393,7 @@ public final class CFMetaData
 public static CFMetaData newIndexMetadata(CFMetaData parent, 
ColumnDefinition info, AbstractType<?> columnComparator)
 {
 // Depends on parent's cache setting, turn on its index CF's cache.
-// Here, only key cache is enabled, but later (in KeysIndex) row cache 
will be turned on depending on cardinality.
+// Row caching is never enabled; see CASSANDRA-5732
 Caching indexCaching = parent.getCaching() == Caching.ALL || 
parent.getCaching() == Caching.KEYS_ONLY
  ? Caching.KEYS_ONLY
  : Caching.NONE;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index a7e8605..39359b7 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1190,12 +1190,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 if (isRowCacheEnabled())
 {
-UUID cfId = Schema.instance.getId(table.name, columnFamily);
-if (cfId == null)
-{
-logger.trace("no id found for {}.{}", table.name, 
columnFamily);
-return null;
-}
+assert !isIndex(); // CASSANDRA-5732
+UUID cfId = metadata.cfId;
 
 ColumnFamily cached = getThroughCache(cfId, filter);
 if (cached == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
index 2ff2d27..caa7e20 100644
--- 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
+++ 

[4/9] git commit: Avoid using row cache on 2i CFs patch by Sam Tunnicliffe and jbellis for CASSANDRA-5732

2013-10-13 Thread jbellis
Avoid using row cache on 2i CFs
patch by Sam Tunnicliffe and jbellis for CASSANDRA-5732


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7290abd1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7290abd1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7290abd1

Branch: refs/heads/cassandra-1.2
Commit: 7290abd198d6c63f1109094cf237d9c84f617f7d
Parents: dc2dd52
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Thu Oct 10 14:45:46 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:09:25 2013 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/config/CFMetaData.java  | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java   |  8 ++--
 .../AbstractSimplePerColumnSecondaryIndex.java   | 19 ---
 4 files changed, 11 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index afb1464..7f43031 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * Handle JMX notification failure for repair (CASSANDRA-6097)
  * (Hadoop) Fetch no more than 128 splits in parallel (CASSANDRA-6169)
  * stress: add username/password authentication support (CASSANDRA-6068)
+ * Fix indexed queries with row cache enabled on parent table (CASSANDRA-5732)
 
 
 1.2.10

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 4355737..fcbd012 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -309,7 +309,13 @@ public final class CFMetaData
 
 CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc, UUID id)
 {
-// Final fields must be set in constructor
+assert keyspace != null;
+assert name != null;
+assert type != null;
+assert id != null;
+// (subcc may be null for non-supercolumns)
+// (comp may also be null for custom indexes, which is kind of broken 
if you ask me)
+
 ksName = keyspace;
 cfName = name;
 cfType = type;
@@ -387,7 +393,7 @@ public final class CFMetaData
 public static CFMetaData newIndexMetadata(CFMetaData parent, 
ColumnDefinition info, AbstractType<?> columnComparator)
 {
 // Depends on parent's cache setting, turn on its index CF's cache.
-// Here, only key cache is enabled, but later (in KeysIndex) row cache 
will be turned on depending on cardinality.
+// Row caching is never enabled; see CASSANDRA-5732
 Caching indexCaching = parent.getCaching() == Caching.ALL || 
parent.getCaching() == Caching.KEYS_ONLY
  ? Caching.KEYS_ONLY
  : Caching.NONE;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index a7e8605..39359b7 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1190,12 +1190,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 if (isRowCacheEnabled())
 {
-UUID cfId = Schema.instance.getId(table.name, columnFamily);
-if (cfId == null)
-{
-logger.trace("no id found for {}.{}", table.name, 
columnFamily);
-return null;
-}
+assert !isIndex(); // CASSANDRA-5732
+UUID cfId = metadata.cfId;
 
 ColumnFamily cached = getThroughCache(cfId, filter);
 if (cached == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7290abd1/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
index 2ff2d27..caa7e20 100644
--- 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
+++ 

[2/9] git commit: make it clear we are logging two stats for MS

2013-10-13 Thread jbellis
make it clear we are logging two stats for MS


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc2dd525
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc2dd525
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc2dd525

Branch: refs/heads/cassandra-2.0
Commit: dc2dd525eb69c001d703b6bb2bd673a0d34db45a
Parents: 5fab127
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Thu Oct 10 19:54:31 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:07:23 2013 -0500

--
 src/java/org/apache/cassandra/utils/StatusLogger.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc2dd525/src/java/org/apache/cassandra/utils/StatusLogger.java
--
diff --git a/src/java/org/apache/cassandra/utils/StatusLogger.java 
b/src/java/org/apache/cassandra/utils/StatusLogger.java
index 15c4811..dbf56d4 100644
--- a/src/java/org/apache/cassandra/utils/StatusLogger.java
+++ b/src/java/org/apache/cassandra/utils/StatusLogger.java
@@ -93,7 +93,7 @@ public class StatusLogger
 pendingResponses += n;
 }
 logger.info(String.format("%-25s%10s%10s",
-  "MessagingService", "n/a", pendingCommands + 
"," + pendingResponses));
+  "MessagingService", "n/a", pendingCommands + 
"/" + pendingResponses));
 
 // Global key/row cache information
 AutoSavingCache<KeyCacheKey, RowIndexEntry> keyCache = 
CacheService.instance.keyCache;



[8/9] git commit: merge from 1.2

2013-10-13 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53b2d9d5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53b2d9d5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53b2d9d5

Branch: refs/heads/trunk
Commit: 53b2d9d55acf841f74812efbc35af42990e01718
Parents: 67bbdba 7290abd
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Sun Oct 13 11:13:15 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Sun Oct 13 11:13:15 2013 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/config/CFMetaData.java  | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java   |  8 ++--
 .../AbstractSimplePerColumnSecondaryIndex.java   | 19 ---
 .../org/apache/cassandra/utils/StatusLogger.java |  2 +-
 5 files changed, 12 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53b2d9d5/CHANGES.txt
--
diff --cc CHANGES.txt
index 0fc2a91,7f43031..4668ae2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -47,43 -22,10 +47,44 @@@ Merged from 1.2
   * Handle JMX notification failure for repair (CASSANDRA-6097)
   * (Hadoop) Fetch no more than 128 splits in parallel (CASSANDRA-6169)
   * stress: add username/password authentication support (CASSANDRA-6068)
+  * Fix indexed queries with row cache enabled on parent table (CASSANDRA-5732)
  
  
 -1.2.10
 +2.0.1
 + * Fix bug that could allow reading deleted data temporarily (CASSANDRA-6025)
 + * Improve memory use defaults (CASSANDRA-5069)
 + * Make ThriftServer more easily extensible (CASSANDRA-6058)
 + * Remove Hadoop dependency from ITransportFactory (CASSANDRA-6062)
 + * add file_cache_size_in_mb setting (CASSANDRA-5661)
 + * Improve error message when yaml contains invalid properties 
(CASSANDRA-5958)
 + * Improve leveled compaction's ability to find non-overlapping L0 compactions
 +   to work on concurrently (CASSANDRA-5921)
 + * Notify indexer of columns shadowed by range tombstones (CASSANDRA-5614)
 + * Log Merkle tree stats (CASSANDRA-2698)
 + * Switch from crc32 to adler32 for compressed sstable checksums 
(CASSANDRA-5862)
 + * Improve offheap memcpy performance (CASSANDRA-5884)
 + * Use a range aware scanner for cleanup (CASSANDRA-2524)
 + * Cleanup doesn't need to inspect sstables that contain only local data
 +   (CASSANDRA-5722)
 + * Add ability for CQL3 to list partition keys (CASSANDRA-4536)
 + * Improve native protocol serialization (CASSANDRA-5664)
 + * Upgrade Thrift to 0.9.1 (CASSANDRA-5923)
 + * Require superuser status for adding triggers (CASSANDRA-5963)
 + * Make standalone scrubber handle old and new style leveled manifest
 +   (CASSANDRA-6005)
 + * Fix paxos bugs (CASSANDRA-6012, 6013, 6023)
 + * Fix paged ranges with multiple replicas (CASSANDRA-6004)
 + * Fix potential AssertionError during tracing (CASSANDRA-6041)
 + * Fix NPE in sstablesplit (CASSANDRA-6027)
 + * Migrate pre-2.0 key/value/column aliases to system.schema_columns
 +   (CASSANDRA-6009)
 + * Paging filter empty rows too aggressively (CASSANDRA-6040)
 + * Support variadic parameters for IN clauses (CASSANDRA-4210)
 + * cqlsh: return the result of CAS writes (CASSANDRA-5796)
 + * Fix validation of IN clauses with 2ndary indexes (CASSANDRA-6050)
 + * Support named bind variables in CQL (CASSANDRA-6033)
 +Merged from 1.2:
 + * Allow cache-keys-to-save to be set at runtime (CASSANDRA-5980)
   * Avoid second-guessing out-of-space state (CASSANDRA-5605)
   * Tuning knobs for dealing with large blobs and many CFs (CASSANDRA-5982)
   * (Hadoop) Fix CQLRW for thrift tables (CASSANDRA-6002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/53b2d9d5/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index bbea21e,fcbd012..374f4a5
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -445,18 -304,18 +445,24 @@@ public final class CFMetaData
  
 public CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc)
  {
 -this(keyspace, name, type, comp, subcc, getId(keyspace, name));
 +this(keyspace, name, type, makeComparator(type, comp, subcc));
  }
  
 -CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc, UUID id)
 +public CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp)
 +{
 +this(keyspace, name, type, comp, getId(keyspace, name));
 +}
 +
 +@VisibleForTesting
 +CFMetaData(String keyspace, String 

[7/9] git commit: merge from 1.2

2013-10-13 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53b2d9d5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53b2d9d5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53b2d9d5

Branch: refs/heads/cassandra-2.0
Commit: 53b2d9d55acf841f74812efbc35af42990e01718
Parents: 67bbdba 7290abd
Author: Jonathan Ellis jbel...@apache.org
Authored: Sun Oct 13 11:13:15 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Sun Oct 13 11:13:15 2013 -0500

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/config/CFMetaData.java  | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java   |  8 ++--
 .../AbstractSimplePerColumnSecondaryIndex.java   | 19 ---
 .../org/apache/cassandra/utils/StatusLogger.java |  2 +-
 5 files changed, 12 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53b2d9d5/CHANGES.txt
--
diff --cc CHANGES.txt
index 0fc2a91,7f43031..4668ae2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -47,43 -22,10 +47,44 @@@ Merged from 1.2
   * Handle JMX notification failure for repair (CASSANDRA-6097)
   * (Hadoop) Fetch no more than 128 splits in parallel (CASSANDRA-6169)
   * stress: add username/password authentication support (CASSANDRA-6068)
+  * Fix indexed queries with row cache enabled on parent table (CASSANDRA-5732)
  
  
 -1.2.10
 +2.0.1
 + * Fix bug that could allow reading deleted data temporarily (CASSANDRA-6025)
 + * Improve memory use defaults (CASSANDRA-5069)
 + * Make ThriftServer more easily extensible (CASSANDRA-6058)
 + * Remove Hadoop dependency from ITransportFactory (CASSANDRA-6062)
 + * add file_cache_size_in_mb setting (CASSANDRA-5661)
 + * Improve error message when yaml contains invalid properties 
(CASSANDRA-5958)
 + * Improve leveled compaction's ability to find non-overlapping L0 compactions
 +   to work on concurrently (CASSANDRA-5921)
 + * Notify indexer of columns shadowed by range tombstones (CASSANDRA-5614)
 + * Log Merkle tree stats (CASSANDRA-2698)
 + * Switch from crc32 to adler32 for compressed sstable checksums 
(CASSANDRA-5862)
 + * Improve offheap memcpy performance (CASSANDRA-5884)
 + * Use a range aware scanner for cleanup (CASSANDRA-2524)
 + * Cleanup doesn't need to inspect sstables that contain only local data
 +   (CASSANDRA-5722)
 + * Add ability for CQL3 to list partition keys (CASSANDRA-4536)
 + * Improve native protocol serialization (CASSANDRA-5664)
 + * Upgrade Thrift to 0.9.1 (CASSANDRA-5923)
 + * Require superuser status for adding triggers (CASSANDRA-5963)
 + * Make standalone scrubber handle old and new style leveled manifest
 +   (CASSANDRA-6005)
 + * Fix paxos bugs (CASSANDRA-6012, 6013, 6023)
 + * Fix paged ranges with multiple replicas (CASSANDRA-6004)
 + * Fix potential AssertionError during tracing (CASSANDRA-6041)
 + * Fix NPE in sstablesplit (CASSANDRA-6027)
 + * Migrate pre-2.0 key/value/column aliases to system.schema_columns
 +   (CASSANDRA-6009)
 + * Paging filter empty rows too aggressively (CASSANDRA-6040)
 + * Support variadic parameters for IN clauses (CASSANDRA-4210)
 + * cqlsh: return the result of CAS writes (CASSANDRA-5796)
 + * Fix validation of IN clauses with 2ndary indexes (CASSANDRA-6050)
 + * Support named bind variables in CQL (CASSANDRA-6033)
 +Merged from 1.2:
 + * Allow cache-keys-to-save to be set at runtime (CASSANDRA-5980)
   * Avoid second-guessing out-of-space state (CASSANDRA-5605)
   * Tuning knobs for dealing with large blobs and many CFs (CASSANDRA-5982)
   * (Hadoop) Fix CQLRW for thrift tables (CASSANDRA-6002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/53b2d9d5/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index bbea21e,fcbd012..374f4a5
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -445,18 -304,18 +445,24 @@@ public final class CFMetaData
  
 public CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc)
  {
 -this(keyspace, name, type, comp, subcc, getId(keyspace, name));
 +this(keyspace, name, type, makeComparator(type, comp, subcc));
  }
  
 -CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp, AbstractType<?> subcc, UUID id)
 +public CFMetaData(String keyspace, String name, ColumnFamilyType type, 
AbstractType<?> comp)
 +{
 +this(keyspace, name, type, comp, getId(keyspace, name));
 +}
 +
 +@VisibleForTesting
 +CFMetaData(String keyspace, 

[jira] [Updated] (CASSANDRA-6092) Leveled Compaction after ALTER TABLE creates pending but does not actually begin

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6092:
--

Attachment: 6092.txt

Patch to kick off compactions after changing strategies.
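
The shape of the fix, as a toy sketch with assumed names (the real change is 
in the attached 6092.txt): switching strategies should also wake the 
compaction executor instead of waiting for the next flush to trigger it.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy model only, not the attached patch: on a strategy change, submit
// background compaction work immediately rather than relying on the next
// memtable flush.
public class StrategySwitchDemo
{
    private final ExecutorService compactionExecutor = Executors.newSingleThreadExecutor();
    private volatile String strategyClass = "SizeTieredCompactionStrategy";

    public void alterCompactionStrategy(String newStrategyClass)
    {
        strategyClass = newStrategyClass;
        // The step 1.2.10 is missing: kick off pending compactions now.
        compactionExecutor.submit(new Runnable()
        {
            public void run()
            {
                System.out.println("running pending compactions with " + strategyClass);
            }
        });
    }
}
{code}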

 Leveled Compaction after ALTER TABLE creates pending but does not actually 
 begin
 

 Key: CASSANDRA-6092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6092
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 1.2.10
 Oracle Java 1.7.0_u40
 RHEL6.4
Reporter: Karl Mueller
Assignee: Daniel Meyer
 Attachments: 6092.txt


 Running Cassandra 1.2.10.  N=5, RF=3
 This column family (ProductGenomeDev/Node) has been major-compacted into 
 a single, large sstable.
 There was no activity on the table at the time of the ALTER command. I changed 
 it to leveled compaction with the command below.
 cqlsh:ProductGenomeDev> alter table Node with compaction = { 'class' : 
 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };
 Log entries confirm the change happened.
 [...]column_metadata={},compactionStrategyClass=class 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={sstable_size_in_mb=160}
  [...]
 nodetool compactionstats shows pending compactions, but there's no activity:
 pending tasks: 750
 12 hours later, nothing had happened; the same number was still pending. The 
 expectation is that compactions proceed immediately, converting everything 
 to leveled compaction, as soon as the ALTER TABLE command runs.
 I try a simple write into the CF, and then flush the nodes. This kicks off 
 compaction on 3 nodes. (RF=3)
 cqlsh:ProductGenomeDev> insert into Node (key, column1, value) values 
 ('test123', 'test123', 'test123');
 cqlsh:ProductGenomeDev> select * from Node where key = 'test123';
  key | column1 | value
 ---------+---------+---------
  test123 | test123 | test123
 cqlsh:ProductGenomeDev> delete from Node where key = 'test123';
 After a flush on every node, now I see:
 [cassandra@dev-cass00 ~]$ cas exec nt compactionstats
 *** dev-cass00 (0) ***
 pending tasks: 750
 Active compaction remaining time :n/a
 *** dev-cass04 (0) ***
 pending tasks: 752
    compaction type          keyspace   column family    completed          total   unit  progress
         Compaction  ProductGenomeDev            Node       341881   643290447928  bytes     0.53%
 Active compaction remaining time :n/a
 *** dev-cass01 (0) ***
 pending tasks: 750
 Active compaction remaining time :n/a
 *** dev-cass02 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit  progress
         Compaction  ProductGenomeDev            Node   3374975141   642764512481  bytes     0.53%
 Active compaction remaining time :n/a
 *** dev-cass03 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit  progress
         Compaction  ProductGenomeDev            Node   3591320948   643017643573  bytes     0.56%
 Active compaction remaining time :n/a
 After inserting and deleting more columns, enough that all nodes had new 
 data, and flushing, compactions proceeded on all nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-6092) Leveled Compaction after ALTER TABLE creates pending but does not actually begin

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6092:
-

Assignee: Jonathan Ellis  (was: Daniel Meyer)

 Leveled Compaction after ALTER TABLE creates pending but does not actually 
 begin
 

 Key: CASSANDRA-6092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6092
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 1.2.10
 Oracle Java 1.7.0_u40
 RHEL6.4
Reporter: Karl Mueller
Assignee: Jonathan Ellis
 Attachments: 6092.txt


 Running Cassandra 1.2.10.  N=5, RF=3
 This column family (ProductGenomeDev/Node) has been major-compacted into 
 a single, large sstable.
 There was no activity on the table at the time of the ALTER command. I changed 
 it to leveled compaction with the command below.
 cqlsh:ProductGenomeDev> alter table Node with compaction = { 'class' : 
 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };
 Log entries confirm the change happened.
 [...]column_metadata={},compactionStrategyClass=class 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={sstable_size_in_mb=160}
  [...]
 nodetool compactionstats shows pending compactions, but there's no activity:
 pending tasks: 750
 12 hours later, nothing had happened; the same number was still pending. The 
 expectation is that compactions proceed immediately, converting everything 
 to leveled compaction, as soon as the ALTER TABLE command runs.
 I try a simple write into the CF, and then flush the nodes. This kicks off 
 compaction on 3 nodes. (RF=3)
 cqlsh:ProductGenomeDev> insert into Node (key, column1, value) values 
 ('test123', 'test123', 'test123');
 cqlsh:ProductGenomeDev> select * from Node where key = 'test123';
  key     | column1 | value
 ---------+---------+---------
  test123 | test123 | test123
 cqlsh:ProductGenomeDev> delete from Node where key = 'test123';
 After a flush on every node, now I see:
 [cassandra@dev-cass00 ~]$ cas exec nt compactionstats
 *** dev-cass00 (0) ***
 pending tasks: 750
 Active compaction remaining time :        n/a
 *** dev-cass04 (0) ***
 pending tasks: 752
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node       341881   643290447928  bytes      0.53%
 Active compaction remaining time :        n/a
 *** dev-cass01 (0) ***
 pending tasks: 750
 Active compaction remaining time :        n/a
 *** dev-cass02 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node   3374975141   642764512481  bytes      0.53%
 Active compaction remaining time :        n/a
 *** dev-cass03 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node   3591320948   643017643573  bytes      0.56%
 Active compaction remaining time :        n/a
 After inserting and deleting more columns, enough that all nodes have new 
 data, and then flushing, compactions are now proceeding on all nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6097) nodetool repair randomly hangs.

2013-10-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793704#comment-13793704
 ] 

Yuki Morishita commented on CASSANDRA-6097:
---

I think the issue they are having is platform-specific.
Try setting the sun.net.client.defaultReadTimeout system property 
(-Dsun.net.client.defaultReadTimeout=<timeout in millisec>) as suggested by 
Mikhail above, to avoid getting stuck on a socket read.
(http://docs.oracle.com/javase/6/docs/technotes/guides/net/properties.html)
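
For illustration, a minimal sketch of applying that property programmatically; the property name is the real JDK one, but the 5000 ms value and the class name are examples only, and the property must be set before the first connection is made:

{code}
public class ReadTimeoutExample
{
    public static void main(String[] args)
    {
        // Equivalent to passing -Dsun.net.client.defaultReadTimeout=5000 on the
        // command line; must run before any socket/JMX connection is created.
        System.setProperty("sun.net.client.defaultReadTimeout", "5000");
        System.out.println(System.getProperty("sun.net.client.defaultReadTimeout"));
    }
}
{code}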

 nodetool repair randomly hangs.
 ---

 Key: CASSANDRA-6097
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6097
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DataStax AMI
Reporter: J.B. Langston
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6097-1.2.txt, dse.stack, nodetool.stack


 nodetool repair randomly hangs. This is not the same issue where repair hangs 
 if a stream is disrupted. This can be reproduced on a single-node cluster 
 where no streaming takes place, so I think this may be a JMX connection or 
 timeout issue. Thread dumps show that nodetool is waiting on a JMX response 
 and there are no repair-related threads running in Cassandra. Nodetool main 
 thread waiting for JMX response:
 {code}
 "main" prio=5 tid=7ffa4b001800 nid=0x10aedf000 in Object.wait() [10aede000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.tools.RepairRunner.repairAndWait(NodeProbe.java:976)
   at 
 org.apache.cassandra.tools.NodeProbe.forceRepairAsync(NodeProbe.java:221)
   at 
 org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:1444)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1213)
 {code}
 When nodetool hangs, it does not print out the following message:
 Starting repair command #XX, repairing 1 ranges for keyspace XXX
 However, Cassandra logs that repair in system.log:
 1380033480.95  INFO [Thread-154] 10:38:00,882 Starting repair command #X, 
 repairing X ranges for keyspace XXX
 This suggests that the repair command was received by Cassandra but the 
 connection then failed and nodetool didn't receive a response.
 Obviously, running repair on a single-node cluster is pointless but it's the 
 easiest way to demonstrate this problem. The customer who reported this has 
 also seen the issue on his real multi-node cluster.
 Steps to reproduce:
 Note: I reproduced this once on the official DataStax AMI with DSE 3.1.3 
 (Cassandra 1.2.6+patches).  I was unable to reproduce on my Mac using the 
 same version, and subsequent attempts to reproduce it on the AMI were 
 unsuccessful. The customer says he is able to reliably reproduce it on his 
 Mac using DSE 3.1.3 and occasionally reproduce it on his real cluster.
 1) Deploy an AMI using the DataStax AMI at 
 https://aws.amazon.com/amis/datastax-auto-clustering-ami-2-2
 2) Create a test keyspace
 {code}
 create keyspace test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 {code}
 3) Run an endless loop that runs nodetool repair repeatedly:
 {code}
 while true; do nodetool repair -pr test; done
 {code}
 4) Wait until repair hangs. It may take many tries; the behavior is random.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793706#comment-13793706
 ] 

Jonathan Ellis commented on CASSANDRA-5201:
---

How common is hadoop2 usage now?  Can we drop hadoop1 for 2.1?  /cc 
[~bcoverston]

 Cassandra/Hadoop does not support current Hadoop releases
 -

 Key: CASSANDRA-5201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5201
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.0
Reporter: Brian Jeltema
Assignee: Dave Brosius
 Attachments: 5201_a.txt


 Using Hadoop 0.22.0 with Cassandra results in the stack trace below.
 It appears that version 0.21+ changed org.apache.hadoop.mapreduce.JobContext
 from a class to an interface.
 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 interface org.apache.hadoop.mapreduce.JobContext, but class was expected
   at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:103)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:445)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:462)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1045)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1042)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1042)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1062)
   at MyHadoopApp.run(MyHadoopApp.java:163)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
   at MyHadoopApp.main(MyHadoopApp.java:82)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
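
For what it's worth, the usual workaround for this kind of class-vs-interface split is to avoid linking against the changed type at compile time and resolve the method at runtime instead. A hypothetical sketch (the HadoopCompat name is illustrative, not code from the attached patch):

{code}
import java.lang.reflect.Method;
import org.apache.hadoop.conf.Configuration;

public final class HadoopCompat
{
    // getConfiguration() exists on JobContext whether it is the old class
    // (hadoop <= 0.20) or the new interface (hadoop 0.21+); reflection defers
    // the binding to runtime, so no IncompatibleClassChangeError is thrown.
    public static Configuration getConfiguration(Object jobContext) throws Exception
    {
        Method m = jobContext.getClass().getMethod("getConfiguration");
        return (Configuration) m.invoke(jobContext);
    }
}
{code}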



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6180) NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null values

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793707#comment-13793707
 ] 

Jonathan Ellis commented on CASSANDRA-6180:
---

I don't think this is semantically correct.  "UPDATE foo SET x = null" means 
set x to a tombstone, which is not the same as setting it to an empty byte[].

/cc [~alexliu68]
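
A minimal illustration of the distinction being made here (plain Java, no Cassandra dependencies):

{code}
import java.nio.ByteBuffer;

public class NullVsEmpty
{
    public static void main(String[] args)
    {
        // A zero-length value: the column exists, its value is empty.
        ByteBuffer empty = ByteBuffer.wrap(new byte[0]);
        // A null binding: per "SET x = null" semantics, this is a delete,
        // i.e. a tombstone -- not an empty value.
        ByteBuffer absent = null;
        System.out.println("empty.remaining() = " + empty.remaining()
                           + ", absent = " + absent);
    }
}
{code}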

 NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null 
 values
 

 Key: CASSANDRA-6180
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6180
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: Pig, CqlStorage
Reporter: Henning Kropp
 Attachments: null_test.pig, patch.txt, test_null.cql, test_null_data


 I encountered an issue with the {{CqlStorage}} and its handling of null 
 values. The {{CqlRecordWriter}} throws an NPE when a value is null. I found a 
 related ticket, CASSANDRA-5885, and applied the fix stated there to the 
 {{AbstractCassandraStorage}}.
 Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
 {{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.
 This issue can be reproduced with the attached files: {{test_null.cql}}, 
 {{test_null_data}}, {{null_test.pig}}
 A fix can be found in the attached patch.
 {code}
 java.io.IOException: java.lang.NullPointerException
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
   at 
 org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6097) nodetool repair randomly hangs.

2013-10-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793708#comment-13793708
 ] 

Brandon Williams commented on CASSANDRA-6097:
-

After this patch, I could not reproduce the problem after 10K iterations.

 nodetool repair randomly hangs.
 ---

 Key: CASSANDRA-6097
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6097
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DataStax AMI
Reporter: J.B. Langston
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6097-1.2.txt, dse.stack, nodetool.stack


 nodetool repair randomly hangs. This is not the same issue where repair hangs 
 if a stream is disrupted. This can be reproduced on a single-node cluster 
 where no streaming takes place, so I think this may be a JMX connection or 
 timeout issue. Thread dumps show that nodetool is waiting on a JMX response 
 and there are no repair-related threads running in Cassandra. Nodetool main 
 thread waiting for JMX response:
 {code}
 "main" prio=5 tid=7ffa4b001800 nid=0x10aedf000 in Object.wait() [10aede000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.tools.RepairRunner.repairAndWait(NodeProbe.java:976)
   at 
 org.apache.cassandra.tools.NodeProbe.forceRepairAsync(NodeProbe.java:221)
   at 
 org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:1444)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1213)
 {code}
 When nodetool hangs, it does not print out the following message:
 Starting repair command #XX, repairing 1 ranges for keyspace XXX
 However, Cassandra logs that repair in system.log:
 1380033480.95  INFO [Thread-154] 10:38:00,882 Starting repair command #X, 
 repairing X ranges for keyspace XXX
 This suggests that the repair command was received by Cassandra but the 
 connection then failed and nodetool didn't receive a response.
 Obviously, running repair on a single-node cluster is pointless but it's the 
 easiest way to demonstrate this problem. The customer who reported this has 
 also seen the issue on his real multi-node cluster.
 Steps to reproduce:
 Note: I reproduced this once on the official DataStax AMI with DSE 3.1.3 
 (Cassandra 1.2.6+patches).  I was unable to reproduce on my Mac using the 
 same version, and subsequent attempts to reproduce it on the AMI were 
 unsuccessful. The customer says he is able to reliably reproduce it on his 
 Mac using DSE 3.1.3 and occasionally reproduce it on his real cluster.
 1) Deploy an AMI using the DataStax AMI at 
 https://aws.amazon.com/amis/datastax-auto-clustering-ami-2-2
 2) Create a test keyspace
 {code}
 create keyspace test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 {code}
 3) Run an endless loop that runs nodetool repair repeatedly:
 {code}
 while true; do nodetool repair -pr test; done
 {code}
 4) Wait until repair hangs. It may take many tries; the behavior is random.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6097) nodetool repair randomly hangs.

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793710#comment-13793710
 ] 

Jonathan Ellis commented on CASSANDRA-6097:
---

(Unless this was already fine in 2.0, we should tag fixversion appropriately.)

 nodetool repair randomly hangs.
 ---

 Key: CASSANDRA-6097
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6097
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DataStax AMI
Reporter: J.B. Langston
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6097-1.2.txt, dse.stack, nodetool.stack


 nodetool repair randomly hangs. This is not the same issue where repair hangs 
 if a stream is disrupted. This can be reproduced on a single-node cluster 
 where no streaming takes place, so I think this may be a JMX connection or 
 timeout issue. Thread dumps show that nodetool is waiting on a JMX response 
 and there are no repair-related threads running in Cassandra. Nodetool main 
 thread waiting for JMX response:
 {code}
 "main" prio=5 tid=7ffa4b001800 nid=0x10aedf000 in Object.wait() [10aede000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7f90d62e8 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.tools.RepairRunner.repairAndWait(NodeProbe.java:976)
   at 
 org.apache.cassandra.tools.NodeProbe.forceRepairAsync(NodeProbe.java:221)
   at 
 org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:1444)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1213)
 {code}
 When nodetool hangs, it does not print out the following message:
 Starting repair command #XX, repairing 1 ranges for keyspace XXX
 However, Cassandra logs that repair in system.log:
 1380033480.95  INFO [Thread-154] 10:38:00,882 Starting repair command #X, 
 repairing X ranges for keyspace XXX
 This suggests that the repair command was received by Cassandra but the 
 connection then failed and nodetool didn't receive a response.
 Obviously, running repair on a single-node cluster is pointless but it's the 
 easiest way to demonstrate this problem. The customer who reported this has 
 also seen the issue on his real multi-node cluster.
 Steps to reproduce:
 Note: I reproduced this once on the official DataStax AMI with DSE 3.1.3 
 (Cassandra 1.2.6+patches).  I was unable to reproduce on my Mac using the 
 same version, and subsequent attempts to reproduce it on the AMI were 
 unsuccessful. The customer says he is able to reliably reproduce it on his 
 Mac using DSE 3.1.3 and occasionally reproduce it on his real cluster.
 1) Deploy an AMI using the DataStax AMI at 
 https://aws.amazon.com/amis/datastax-auto-clustering-ami-2-2
 2) Create a test keyspace
 {code}
 create keyspace test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 {code}
 3) Run an endless loop that runs nodetool repair repeatedly:
 {code}
 while true; do nodetool repair -pr test; done
 {code}
 4) Wait until repair hangs. It may take many tries; the behavior is random.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6092) Leveled Compaction after ALTER TABLE creates pending but does not actually begin

2013-10-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793717#comment-13793717
 ] 

Yuki Morishita commented on CASSANDRA-6092:
---

So we are not adding functionality to break up a huge single SSTable when 
switching to LCS, right?
(I'm not a huge fan of that, since it adds complexity to picking LCS 
candidates, imho.)

 Leveled Compaction after ALTER TABLE creates pending but does not actually 
 begin
 

 Key: CASSANDRA-6092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6092
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 1.2.10
 Oracle Java 1.7.0_u40
 RHEL6.4
Reporter: Karl Mueller
Assignee: Jonathan Ellis
 Fix For: 1.2.11

 Attachments: 6092.txt


 Running Cassandra 1.2.10.  N=5, RF=3
 This column family (ProductGenomeDev/Node) has been major compacted into a 
 single, large sstable.
 There's no activity on the table at the time of the ALTER command. I changed 
 it to Leveled Compaction with the command below.
 cqlsh:ProductGenomeDev> alter table Node with compaction = { 'class' : 
 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };
 Log entries confirm the change happened.
 [...]column_metadata={},compactionStrategyClass=class 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={sstable_size_in_mb=160}
  [...]
 nodetool compactionstats shows pending compactions, but there's no activity:
 pending tasks: 750
 12 hours later, still nothing has happened; the same number of tasks is 
 pending. The expectation would be that compactions would proceed immediately 
 to convert everything to Leveled Compaction as soon as the ALTER TABLE 
 command goes through.
 I try a simple write into the CF, and then flush the nodes. This kicks off 
 compaction on 3 nodes. (RF=3)
 cqlsh:ProductGenomeDev> insert into Node (key, column1, value) values 
 ('test123', 'test123', 'test123');
 cqlsh:ProductGenomeDev> select * from Node where key = 'test123';
  key     | column1 | value
 ---------+---------+---------
  test123 | test123 | test123
 cqlsh:ProductGenomeDev> delete from Node where key = 'test123';
 After a flush on every node, now I see:
 [cassandra@dev-cass00 ~]$ cas exec nt compactionstats
 *** dev-cass00 (0) ***
 pending tasks: 750
 Active compaction remaining time :        n/a
 *** dev-cass04 (0) ***
 pending tasks: 752
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node       341881   643290447928  bytes      0.53%
 Active compaction remaining time :        n/a
 *** dev-cass01 (0) ***
 pending tasks: 750
 Active compaction remaining time :        n/a
 *** dev-cass02 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node   3374975141   642764512481  bytes      0.53%
 Active compaction remaining time :        n/a
 *** dev-cass03 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node   3591320948   643017643573  bytes      0.56%
 Active compaction remaining time :        n/a
 After inserting and deleting more columns, enough that all nodes have new 
 data, and then flushing, compactions are now proceeding on all nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Steven Lowenthal (JIRA)
Steven Lowenthal created CASSANDRA-6190:
---

 Summary: Cassandra 2.0 won't start up on some platforms with Java 
7u40
 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal


Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
option. 7u40 on Macintosh works correctly. If I use the tarball 7u40 version 
of 7, we encounter the error below. I tried 7u25 (the previous release) and it 
functioned correctly.

ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
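
When the JVM does start, one hedged way to confirm which -XX options it actually received is the standard management API. A plain Java sketch, useful for comparing the working OS X install against the failing Ubuntu one (it cannot help on a platform where the flag aborts startup entirely):

{code}
import java.lang.management.ManagementFactory;

public class ShowJvmArgs
{
    public static void main(String[] args)
    {
        // Prints every argument the running JVM was started with,
        // e.g. -XX:+UseCondCardMark if it was accepted.
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments())
            System.out.println(arg);
    }
}
{code}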





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6092) Leveled Compaction after ALTER TABLE creates pending but does not actually begin

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6092:
--

Reviewer: Yuki Morishita  (was: Tyler Hobbs)
Priority: Minor  (was: Major)

Hmm.  I agree, but I guess that leaves us with the pending compactions 
returned by the mbean?
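
As a rough sketch of why the mbean's pending count can sit around 750 while no compaction runs: for a single major-compacted sstable the estimate is essentially total size divided by the target sstable size. The numbers below are hypothetical, and this is not Cassandra's actual estimator:

{code}
public class PendingLcsEstimate
{
    public static void main(String[] args)
    {
        long l0Bytes = 120L * 1024 * 1024 * 1024;   // one ~120 GB major-compacted sstable (assumed)
        long sstableSize = 160L * 1024 * 1024;      // sstable_size_in_mb = 160, as in the report
        // bytes to rewrite / target sstable size ~= pending tasks reported
        System.out.println("estimated pending tasks: " + l0Bytes / sstableSize); // 768
    }
}
{code}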

 Leveled Compaction after ALTER TABLE creates pending but does not actually 
 begin
 

 Key: CASSANDRA-6092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6092
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 1.2.10
 Oracle Java 1.7.0_u40
 RHEL6.4
Reporter: Karl Mueller
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.11

 Attachments: 6092.txt


 Running Cassandra 1.2.10.  N=5, RF=3
 This column family (ProductGenomeDev/Node) has been major compacted into a 
 single, large sstable.
 There's no activity on the table at the time of the ALTER command. I changed 
 it to Leveled Compaction with the command below.
 cqlsh:ProductGenomeDev> alter table Node with compaction = { 'class' : 
 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };
 Log entries confirm the change happened.
 [...]column_metadata={},compactionStrategyClass=class 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={sstable_size_in_mb=160}
  [...]
 nodetool compactionstats shows pending compactions, but there's no activity:
 pending tasks: 750
 12 hours later, still nothing has happened; the same number of tasks is 
 pending. The expectation would be that compactions would proceed immediately 
 to convert everything to Leveled Compaction as soon as the ALTER TABLE 
 command goes through.
 I try a simple write into the CF, and then flush the nodes. This kicks off 
 compaction on 3 nodes. (RF=3)
 cqlsh:ProductGenomeDev> insert into Node (key, column1, value) values 
 ('test123', 'test123', 'test123');
 cqlsh:ProductGenomeDev> select * from Node where key = 'test123';
  key     | column1 | value
 ---------+---------+---------
  test123 | test123 | test123
 cqlsh:ProductGenomeDev> delete from Node where key = 'test123';
 After a flush on every node, now I see:
 [cassandra@dev-cass00 ~]$ cas exec nt compactionstats
 *** dev-cass00 (0) ***
 pending tasks: 750
 Active compaction remaining time :        n/a
 *** dev-cass04 (0) ***
 pending tasks: 752
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node       341881   643290447928  bytes      0.53%
 Active compaction remaining time :        n/a
 *** dev-cass01 (0) ***
 pending tasks: 750
 Active compaction remaining time :        n/a
 *** dev-cass02 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node   3374975141   642764512481  bytes      0.53%
 Active compaction remaining time :        n/a
 *** dev-cass03 (0) ***
 pending tasks: 751
    compaction type          keyspace   column family    completed          total   unit   progress
         Compaction  ProductGenomeDev            Node   3591320948   643017643573  bytes      0.56%
 Active compaction remaining time :        n/a
 After inserting and deleting more columns, enough that all nodes have new 
 data, and then flushing, compactions are now proceeding on all nodes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2013-10-13 Thread Mck SembWever (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792563#comment-13792563
 ] 

Mck SembWever edited comment on CASSANDRA-5201 at 10/13/13 5:37 PM:


I've updated the github project to be a 
[patch|https://github.com/michaelsembwever/cassandra-hadoop/commit/6d7555ea205354a606907e40c16db35072004594]
 off the InputFormat and OutputFormat classes as found in cassandra-1.2.10.
It works against hadoop-0.22.0.


was (Author: michaelsembwever):
I've updated the github project to be a patch off the InputFormat and 
OutputFormat classes as found in cassandra-1.2.10.
It works against hadoop-0.22.0.

 Cassandra/Hadoop does not support current Hadoop releases
 -

 Key: CASSANDRA-5201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5201
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.0
Reporter: Brian Jeltema
Assignee: Dave Brosius
 Attachments: 5201_a.txt


 Using Hadoop 0.22.0 with Cassandra results in the stack trace below.
 It appears that version 0.21+ changed org.apache.hadoop.mapreduce.JobContext
 from a class to an interface.
 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 interface org.apache.hadoop.mapreduce.JobContext, but class was expected
   at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:103)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:445)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:462)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1045)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1042)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1042)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1062)
   at MyHadoopApp.run(MyHadoopApp.java:163)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
   at MyHadoopApp.main(MyHadoopApp.java:82)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:192)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Tomas Salfischberger (JIRA)
Tomas Salfischberger created CASSANDRA-6191:
---

 Summary: Memory exhaustion with large number of compressed SSTables
 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
Java: Oracle 1.7.0_25
Cassandra: 1.2.10
Memory: 24GB
Heap: 8GB
Reporter: Tomas Salfischberger


Not sure "bug" is the right description, because I can't say for sure that the 
large number of SSTables is the cause of the memory issues. I'll share my 
research so far:

Under high read-load with a very large number of compressed SSTables (caused by 
the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
without any room for GC to fix this. It tries to GC but doesn't reclaim much.

The node first hits the emergency valves flushing all memtables, then 
reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
or crashes with OutOfMemoryError.

I've taken a heapdump and started analysis to find out what's wrong. The memory 
seems to be used by the byte[] backing the HeapByteBuffer in the compressed 
field of org.apache.cassandra.io.compress.CompressedRandomAccessReader. The 
byte[] are generally 65536 bytes in size, matching the block-size of the 
compression.

Looking further in the heap-dump I can see that these readers are part of the 
pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is 
linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
The dump-file lists 45248 instances of CompressedRandomAccessReader.

Is this intended to go this way? Is there a leak somewhere? Or should there be 
an alternative strategy and/or warning for cases where a node is trying to read 
far too many SSTables?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793725#comment-13793725
 ] 

Jonathan Ellis commented on CASSANDRA-6190:
---

So 7u40 doesn't work on Ubuntu, but does work on OS X?

Is this Oracle JDK or OpenJDK?

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option. 7u40 on Macintosh works correctly. If I use the tarball 7u40 version 
 of 7, we encounter the error below. I tried 7u25 (the previous release) and 
 it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Tomas Salfischberger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Salfischberger updated CASSANDRA-6191:


Description: 
Not sure "bug" is the right description, because I can't say for sure that the 
large number of SSTables is the cause of the memory issues. I'll share my 
research so far:

Under high read-load with a very large number of compressed SSTables (caused by 
the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
without any room for GC to fix this. It tries to GC but doesn't reclaim much.

The node first hits the emergency valves flushing all memtables, then 
reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
or crashes with OutOfMemoryError.

I've taken a heapdump and started analysis to find out what's wrong. The memory 
seems to be used by the byte[] backing the HeapByteBuffer in the compressed 
field of org.apache.cassandra.io.compress.CompressedRandomAccessReader. The 
byte[] are generally 65536 bytes in size, matching the block-size of the 
compression.

Looking further in the heap-dump I can see that these readers are part of the 
pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is 
linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
The dump-file lists 45248 instances of CompressedRandomAccessReader.

Is this intended to go this way? Is there a leak somewhere? Or should there be 
an alternative strategy and/or warning for cases where a node is trying to read 
far too many SSTables?

EDIT:
Searching through the code I found that PoolingSegmentedFile keeps a pool of 
RandomAccessReader for re-use, while the CompressedRandomAccessReader allocates 
a ByteBuffer in its constructor and (to make things worse) enlarges it if it's 
reading a large chunk. This (sometimes enlarged) ByteBuffer is then kept alive 
because it becomes part of the CompressedRandomAccessReader, which is in turn 
kept alive as part of the pool in the PoolingSegmentedFile.
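
A back-of-the-envelope check with the numbers from this report (a sketch only; real retention would be higher wherever buffers were enlarged):

{code}
public class PooledReaderFootprint
{
    public static void main(String[] args)
    {
        long readers = 45248;      // CompressedRandomAccessReader instances in the heap dump
        long bufferBytes = 65536;  // compression chunk size backing each reader
        double gib = readers * bufferBytes / (1024.0 * 1024 * 1024);
        // ~2.8 GiB pinned by pooled buffers alone, out of an 8 GB heap.
        System.out.printf("retained >= %.2f GiB%n", gib);
    }
}
{code}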

  was:
Not sure "bug" is the right description, because I can't say for sure that the 
large number of SSTables is the cause of the memory issues. I'll share my 
research so far:

Under high read-load with a very large number of compressed SSTables (caused by 
the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
without any room for GC to fix this. It tries to GC but doesn't reclaim much.

The node first hits the emergency valves flushing all memtables, then 
reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
or crashes with OutOfMemoryError.

I've taken a heapdump and started analysis to find out what's wrong. The memory 
seems to be used by the byte[] backing the HeapByteBuffer in the compressed 
field of org.apache.cassandra.io.compress.CompressedRandomAccessReader. The 
byte[] are generally 65536 bytes in size, matching the block-size of the 
compression.

Looking further in the heap-dump I can see that these readers are part of the 
pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is 
linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
The dump-file lists 45248 instances of CompressedRandomAccessReader.

Is this intended to go this way? Is there a leak somewhere? Or should there be 
an alternative strategy and/or warning for cases where a node is trying to read 
far too many SSTables?


 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool 

[jira] [Resolved] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6191.
---

Resolution: Duplicate

Yes, this is a leak of sorts in that there is no cap on the number of CRAR 
buffers we'll cache.  Workaround for 1.2.x is to increase the sstable size 
(which you should do anyway for performance -- CASSANDRA-5727).
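
For reference, the workaround amounts to an ALTER TABLE on the affected table; a hypothetical sketch with the DataStax Java driver, where the contact point, keyspace/table name, and the 160 MB size are placeholders:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class RaiseSstableSize
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Larger sstable_size_in_mb -> far fewer sstables, so far fewer
        // pooled CompressedRandomAccessReader buffers held on heap.
        session.execute("ALTER TABLE ks.cf WITH compaction = "
                      + "{ 'class' : 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 }");
        cluster.shutdown();
    }
}
{code}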

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReader for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader, which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793729#comment-13793729
 ] 

Brandon Williams commented on CASSANDRA-6190:
-

Oracle 7u40 runs just fine on Debian; I can't imagine it wouldn't work on 
Ubuntu.

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option. 7u40 on Macintosh works correctly. If I use the tarball 7u40 version 
 of 7, we encounter the error below. I tried 7u25 (the previous release) and 
 it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Steven Lowenthal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793731#comment-13793731
 ] 

Steven Lowenthal commented on CASSANDRA-6190:
-

This is the Oracle JDK. It works on OS X. I checked system.log and JConsole on 
the Mac to ensure that the option is picked up and that we are running 7u40; 
both are true.

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option. 7u40 on Macintosh works correctly. If I use the tarball 7u40 version 
 of 7, we encounter the error below. I tried 7u25 (the previous release) and 
 it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Tomas Salfischberger (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793735#comment-13793735
 ] 

Tomas Salfischberger commented on CASSANDRA-6191:
-

Ok, I've indeed done that, based on the recommended size in CASSANDRA-5727 (and 
at that point I ran into CASSANDRA-6191 :-)). On IRC rcoli reported he will 
write a blog post documenting the 'LCS with 5mb' -> STS -> 'LCS with 256mb' 
route.

However, I think we might still want to create something that un-references the 
buffer when the reader gets added to the pool? Or a WARN message in the logs 
when we're opening tens of thousands of readers? This is very hard to find out 
without looking at the code, which we can't expect normal users to do.
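
A hypothetical sketch of both suggestions, with illustrative names only (this is not Cassandra's pool code):

{code}
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

class PooledReader
{
    private static final Queue<PooledReader> POOL = new ConcurrentLinkedQueue<PooledReader>();
    private static final AtomicLong POOLED = new AtomicLong();

    ByteBuffer compressed = ByteBuffer.allocate(65536);

    void recycle()
    {
        // Suggestion 1: drop the (possibly enlarged) buffer before pooling,
        // so an idle reader retains almost nothing.
        compressed = null;
        POOL.add(this);
        // Suggestion 2: warn when the pool population becomes suspicious.
        if (POOLED.incrementAndGet() % 10000 == 0)
            System.err.println("WARN: " + POOLED.get() + " pooled readers; too many sstables?");
    }
}
{code}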

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReader for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader, which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6192) Ignore gc_grace when all replicas have ACKed the delete

2013-10-13 Thread André Cruz (JIRA)
André Cruz created CASSANDRA-6192:
-

 Summary: Ignore gc_grace when all replicas have ACKed the delete
 Key: CASSANDRA-6192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6192
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: André Cruz
Priority: Critical


When a client issues a delete with a consistency level >= QUORUM, all replicas 
are contacted, even though the coordinator may return the answer as soon as 
QUORUM is satisfied, before all responses arrive. Therefore, in the usual case 
when all replicas are alive, the coordinator will know when a delete has been 
ACKed by all replicas responsible for that data. In this situation I think it 
would be beneficial if the coordinator could notify the replicas that the 
tombstone is safe to purge on the next compaction, regardless of the gc_grace 
value.

This would make tombstones disappear much faster than they normally would.
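
A hypothetical coordinator-side sketch of the idea (all names illustrative; nothing here exists in Cassandra):

{code}
public class DeleteAckTracker
{
    private final int totalReplicas;
    private int acks;

    DeleteAckTracker(int totalReplicas)
    {
        this.totalReplicas = totalReplicas;
    }

    // Called once per replica ACK; returns true only when every replica
    // (not merely a quorum) has confirmed the delete, i.e. the point at
    // which the tombstone could safely be flagged purgeable early.
    synchronized boolean onReplicaAck()
    {
        return ++acks == totalReplicas;
    }
}
{code}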





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6193) Move throughput-heavy activities (repair/compaction) into separate process

2013-10-13 Thread André Cruz (JIRA)
André Cruz created CASSANDRA-6193:
-

 Summary: Move throughput-heavy activities (repair/compaction) into 
separate process
 Key: CASSANDRA-6193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6193
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: André Cruz


Repairs and compactions are activities that I've seen cause Full GCs. It is 
difficult to tune the GC for pauseless behaviour when the JVM is performing 
such different functions as serving client requests and processing large data 
files.
Wouldn't it be possible to separate repairs/compactions into another process, 
where Full GCs would not be a problem?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Tomas Salfischberger (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793735#comment-13793735
 ] 

Tomas Salfischberger edited comment on CASSANDRA-6191 at 10/13/13 6:20 PM:
---

Ok, I've indeed done that, based on the recommended size in CASSANDRA-5727 (and 
at that point I ran into CASSANDRA-6191 :-)). On IRC rcoli reported he will 
write a blog post documenting the 'LCS with 5mb' -> STS -> 'LCS with 256mb' 
route.

However, I think we might still want to create something that un-references the 
buffer when the reader gets added to the pool? Or a WARN message in the logs 
when we're opening tens of thousands of readers? This is very hard to find out 
without looking at the code, which we can't expect normal users to do.

Edit: Ah, CASSANDRA-5661 has a method to close them. That's of course the best 
fix. Any chance of back-porting to 1.2?


was (Author: t0mas):
Ok, I've indeed done that, based on the recommended size in CASSANDRA-5727 (and 
at that point I ran into CASSANDRA-6191 :-)). On IRC rcoli reported he will 
write a blog post documenting the 'LCS with 5mb' -> STS -> 'LCS with 256mb' 
route.

However, I think we might still want to create something that un-references the 
buffer when the reader gets added to the pool? Or a WARN message in the logs 
when we're opening tens of thousands of readers? This is very hard to find out 
without looking at the code, which we can't expect normal users to do.

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReader for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader, which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6192) Ignore gc_grace when all replicas have ACKed the delete

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6192.
---

Resolution: Duplicate

See comments on 3620 starting with 
https://issues.apache.org/jira/browse/CASSANDRA-3620?focusedCommentId=13561779&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13561779

 Ignore gc_grace when all replicas have ACKed the delete
 ---

 Key: CASSANDRA-6192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6192
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: André Cruz
Priority: Critical

 When a client issues a delete with a consistency level >= QUORUM, all 
 replicas are contacted, even though the coordinator may return the answer as 
 soon as QUORUM is satisfied, before all responses arrive. Therefore, in the 
 usual case when all replicas are alive, the coordinator will know when a 
 delete has been ACKed by all replicas responsible for that data. In this 
 situation I think it would be beneficial if the coordinator could notify the 
 replicas that the tombstone is safe to purge on the next compaction, 
 regardless of the gc_grace value.
 This would make tombstones disappear much faster than they normally would.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793741#comment-13793741
 ] 

Jonathan Ellis commented on CASSANDRA-3620:
---

bq. a delivered hint can undo a deletion - if we gc it away too fast

This made sense to me at the time but six months later it's not obvious.  
Shouldn't hint creation (only done if a write is unsuccessful) and early 
delete gcgs short-circuit (only done if all writes are successful) be mutually 
exclusive?
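
The mutual exclusivity can be sketched as two predicates on the same write (a toy illustration, not Cassandra code):

{code}
public class WriteOutcome
{
    static boolean hintCreated(int acked, int replicas)  { return acked < replicas; }
    static boolean earlyGcgsOk(int acked, int replicas)  { return acked == replicas; }

    public static void main(String[] args)
    {
        // For any given write the two conditions can never hold at once.
        for (int acked = 1; acked <= 3; acked++)
            System.out.printf("acked=%d/3 hint=%b earlyGcgs=%b%n",
                              acked, hintCreated(acked, 3), earlyGcgsOk(acked, 3));
    }
}
{code}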

 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
Assignee: Aleksey Yeschenko
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0 beta 1

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream back ACK responses back to requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. Furthermore, a proposal 
 for enhancements to repair was posted to comments, which would cause 
 tombstones to be scavenged when repair completes (the author had assumed this 
 was the case anyway, but it seems at time of writing they are only scavenged 
 during compaction on GCSeconds timeout). The proposals are not exclusive and 
 this proposal is extended to include the possible enhancements to repair 
 described.
 NOTES
 * If a node goes down for a prolonged period, the worst that can happen is 
 that some tombstones are recreated across the cluster when it restarts, which 
 does not corrupt data (and this will 

[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Patrick McFadin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793742#comment-13793742
 ] 

Patrick McFadin commented on CASSANDRA-6190:


Just tried c* 2.0.1 using Oracle JDK 7u40 on CentOS 6.4. Works without 
modification. 

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option.  7u40 on Macintosh works correctly.  If I use the tarball 7u40 
 version of 7, we encounter the error below. I tried 7u25 (the previous 
 release) and it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6188) The JMX stats for Speculative Retry stops moving during a node failure outage period

2013-10-13 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793744#comment-13793744
 ] 

Ryan McGuire commented on CASSANDRA-6188:
-

Yea, [my test still looks similar to 
before|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.6188.eager_retry.json&metric=interval_op_rate&operation=stress-read&smoothing=1]

 The JMX stats for Speculative Retry stops moving during a node failure outage 
 period
 

 Key: CASSANDRA-6188
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6188
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: One data center of 4 Cassandra-2.0 nodes with default 
 configuration parameters deployed on 4 separate machines. A testing app (with 
 either Astyanax or DataStax client driver) interacts with the Cassandra data 
 center. A traffic generator is sending traffic to the testing app for testing 
 purpose.
Reporter: Li Zou
Assignee: Ryan McGuire

 Under a normal testing traffic level with the default Cassandra Speculative 
 Retry configuration for each table (i.e. 99th percentile), JConsole shows that 
 the JMX stats for Speculative Retry increment slowly. However, during the node 
 failure outage period (i.e. immediately after the node was killed and before 
 the gossip figures out that the node is down), JConsole shows that the JMX 
 stats for Speculative Retry stop moving. That is, for around 20 seconds, the 
 JMX stats for Speculative Retry do not move.
 This is true for all other Speculative Retry options. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793745#comment-13793745
 ] 

Jonathan Ellis commented on CASSANDRA-6191:
---

bq. at that point I ran into CASSANDRA-6191

Did you mean to link something else?

bq. Any chances of back-porting to 1.2

I'm afraid not; it's a bit involved (and in fact caused a regression in 
CASSANDRA-6149), so we're being cautious with 1.2.

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile. Which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReaders for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6193) Move throughput-heavy activities (repair/compaction) into separate process

2013-10-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6193:
--

Priority: Minor  (was: Major)
  Labels: ponies  (was: )

It's not crazy, but it's a lot of code to write.

 Move throughput-heavy activities (repair/compaction) into separate process
 --

 Key: CASSANDRA-6193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6193
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: André Cruz
Priority: Minor
  Labels: ponies

 Repairs and compactions are activities that I've seen cause Full GCs. It is 
 difficult to optimize the GC for pauseless behaviour when the jvm is 
 performing such different functions as serving client requests and processing 
 large data files.
 Wouldn't it be possible to separate repairs/compactions into another process 
 where Full GCs would not be a problem?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793747#comment-13793747
 ] 

Jonathan Ellis commented on CASSANDRA-6190:
---

I wonder if Steve has a 1.6 jvm installed that is being picked up instead.  
(Especially since he mentions tarballs.)

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option.  7u40 on Macintosh works correctly.  If I use the tarball 7u40 
 version of 7, we encounter the error below. I tried 7u25 (the previous 
 release) and it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793752#comment-13793752
 ] 

Brandon Williams commented on CASSANDRA-6190:
-

That would be my guess.  Steve, can you paste the output of 'dpkg -l | grep 
jre', and if you have results, try removing all those packages so there can 
only be one java install on the system? (the oracle one, which I assume is from 
a tarball)

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option.  7u40 on Macintosh works correctly.  If I use the tarball 7u40 
 version of 7, we encounter the error below. I tried 7u25 (the previous 
 release) and it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Tomas Salfischberger (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793751#comment-13793751
 ] 

Tomas Salfischberger commented on CASSANDRA-6191:
-

bq. Did you mean to link something else?

Oops, I meant: CASSANDRA-6092

bq. I'm afraid not; it's a bit involved (and in fact caused a regression in 
CASSANDRA-6149), so we're being cautious with 1.2.

How about a marker interface (something like RecycleAwareRandomAccessReader) on 
CompressedRandomAccessReader, which is checked in PoolingSegmentedFile.recycle(), 
where we call RecycleAwareRandomAccessReader.recycle() so we can set the 
reference to the ByteBuffer to null? Then add a simple check in 
CompressedRandomAccessReader.decompressChunk() to re-allocate the buffer if 
necessary.

Or will this cause too much un-reference and re-allocate traffic on the 
ByteBuffer during startup? (Not sure how the re-use flow from the pool in 
PoolingSegmentedFile goes.)
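
For illustration, a minimal sketch of what that could look like; the names 
(RecycleAware, SketchCompressedReader, SketchPool) are made up for this comment 
and the real classes differ:
{code}
import java.nio.ByteBuffer;

// Hypothetical sketch, not the actual Cassandra code: the pool drops the
// decompression buffer when a reader is recycled, and the reader re-allocates
// it lazily on the next read.
interface RecycleAware
{
    void onRecycle();
}

class SketchCompressedReader implements RecycleAware
{
    private static final int CHUNK_SIZE = 65536; // matches the compression block size
    private ByteBuffer compressed;

    public void onRecycle()
    {
        compressed = null; // release the (possibly enlarged) buffer while pooled
    }

    void decompressChunk(int length)
    {
        // re-allocate on demand if the buffer was dropped or is too small
        if (compressed == null || compressed.capacity() < length)
            compressed = ByteBuffer.allocate(Math.max(length, CHUNK_SIZE));
        // ... decompress the chunk into 'compressed' ...
    }
}

class SketchPool
{
    void recycle(Object reader)
    {
        if (reader instanceof RecycleAware)
            ((RecycleAware) reader).onRecycle(); // drop large transient state before pooling
        // ... return the reader to the pool ...
    }
}
{code}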

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile. Which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReaders for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793753#comment-13793753
 ] 

Ryan McGuire commented on CASSANDRA-6190:
-

[~slowenthal] Put an 'echo $JAVA' as the first line of the launch_service() 
function in bin/cassandra and you can verify that you're using the right java.

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option.  7u40 on Macintosh works correctly.  If I use the tarball 7u40 
 version of 7, we encounter the error below. I tried 7u25 (the previous 
 release) and it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793754#comment-13793754
 ] 

Jonathan Ellis commented on CASSANDRA-6191:
---

bq. I meant: CASSANDRA-6092

Ah, I see.

Honestly I'm pretty okay with "use 160MB sstables, and fix 6092" as a plan.  Is 
that unworkable?

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile. Which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReaders for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6194) speculative retry can sometimes violate consistency

2013-10-13 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-6194:
---

 Summary: speculative retry can sometimes violate consistency
 Key: CASSANDRA-6194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6194
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
 Fix For: 2.0.2


This is most evident with intermittent failures of the short_read dtests.  I'll 
focus on short_read_reversed_test for explanation, since that's what I used to 
bisect.  This test inserts some columns into a row, then deletes a subset, but 
it performs each delete on a different node, with another node down (hints are 
disabled.)  Finally it reads the row back at QUORUM and checks that it doesn't 
see any deleted columns; however, with speculative retry on, this often fails.  I 
reliably bisected this to the change that made 99th percentile SR the default, 
by looping the test enough times at each iteration to be sure it was passing or 
failing.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6194) speculative retry can sometimes violate consistency

2013-10-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793755#comment-13793755
 ] 

Brandon Williams commented on CASSANDRA-6194:
-

Unfortunately I haven't been able to repro manually and ccm has a problem with 
logging at debug on 2.0+ currently.  10 iterations of the test should be enough 
to trigger, though I bisected again to the same point with 30.

 speculative retry can sometimes violate consistency
 ---

 Key: CASSANDRA-6194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6194
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
 Fix For: 2.0.2


 This is most evident with intermittent failures of the short_read dtests.  
 I'll focus on short_read_reversed_test for explanation, since that's what I 
 used to bisect.  This test inserts some columns into a row, then deletes a 
 subset, but it performs each delete on a different node, with another node 
 down (hints are disabled.)  Finally it reads the row back at QUORUM and 
 checks that it doesn't see any deleted columns; however, with speculative 
 retry on, this often fails.  I reliably bisected this to the change that made 
 99th percentile SR the default, by looping the test enough times at each 
 iteration to be sure it was passing or failing.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Tomas Salfischberger (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793759#comment-13793759
 ] 

Tomas Salfischberger commented on CASSANDRA-6191:
-

Worked for me, but I think there is a very large number of 1.2 instances out 
there with LCS set to 5mb from the 1.0 and 1.1 days. All of those will at some 
point run into this issue and, without a visible cause, start hitting the 
emergency flush and decreasing caches. So to help those users we could create a 
basic work-around, add a big warning in the documentation, or log a warning 
message when a large number of readers is opened - just to make sure they don't 
run into the unclear OOM situation and conclude Cassandra is unable to handle a 
few hundred GB of data when it's actually just a poorly chosen default config 
that has already been fixed.
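
As a rough illustration of the logging idea - the threshold, names, and wiring 
below are all invented for this comment, not part of any patch:
{code}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: warn once when the number of open compressed readers
// suggests the old 5mb LCS sstable_size default is in play.
class ReaderPoolWatcher
{
    private static final long WARN_THRESHOLD = 10000;
    private static final AtomicLong openReaders = new AtomicLong();

    static void onReaderOpened()
    {
        if (openReaders.incrementAndGet() == WARN_THRESHOLD)
            System.err.println("Warning: " + WARN_THRESHOLD + " compressed readers open; "
                               + "consider a larger sstable_size_in_mb (see CASSANDRA-6191)");
    }
}
{code}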

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile. Which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReaders for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6193) Move throughput-heavy activities (repair/compaction) into separate process

2013-10-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793760#comment-13793760
 ] 

Brandon Williams commented on CASSANDRA-6193:
-

It's complicated at the system level, too.  We can't just fork the process, 
because forking would require as much free memory as the main process uses 
(which is why you need JNA - without it, snapshotting is extremely slow because 
forking 'ln' is so expensive.)  And then if one jvm OOMs but the other does 
not...

 Move throughput-heavy activities (repair/compaction) into separate process
 --

 Key: CASSANDRA-6193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6193
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: André Cruz
Priority: Minor
  Labels: ponies

 Repairs and compactions are activities that I've seen cause Full GCs. It is 
 difficult to optimize the GC for pauseless behaviour when the jvm is 
 performing such different functions as serving client requests and processing 
 large data files.
 Wouldn't it be possible to separate repairs/compactions into another process 
 where Full GCs would not be a problem?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6193) Move throughput-heavy activities (repair/compaction) into separate process

2013-10-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793764#comment-13793764
 ] 

André Cruz commented on CASSANDRA-6193:
---

I was imagining 2 separate long-running processes - no need to fork(). If the 
compactor were to fail, the number of pending compactions would increase, as 
happens today.

But they would both have to be managed and kept running.

 Move throughput-heavy activities (repair/compaction) into separate process
 --

 Key: CASSANDRA-6193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6193
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: André Cruz
Priority: Minor
  Labels: ponies

 Repairs and compactions are activities that I've seen cause Full GCs. It is 
 difficult to optimize the GC for pauseless behaviour when the jvm is 
 performing such different functions as serving client requests and processing 
 large data files.
 Wouldn't it be possible to separate repairs/compactions into another process 
 where Full GCs would not be a problem?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6186) Can't add index with a name prefixed with 'index'

2013-10-13 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793777#comment-13793777
 ] 

Nick Bailey commented on CASSANDRA-6186:


Well the index *shouldn't* have existed because I dropped the keyspace. The 
prefix thing was just a guess on my part.

Having said that, I'm running against the head of the 2.0 branch and can't 
reproduce now. If I figure out how it got in that state I'll reopen.

 Can't add index with a name prefixed with 'index'
 -

 Key: CASSANDRA-6186
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6186
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 2.0.2


 cqlsh code:
 {noformat}
 cqlsh> drop keyspace test_add_index;
 cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
 cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
 cqlsh> create index index1 on test_add_index.cf1 (c);
 Bad Request: Duplicate index name index1
 cqlsh> drop keyspace test_add_index;
 cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
 cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
 cqlsh> create index blah on test_add_index.cf1 (c);
 cqlsh>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6186) Can't add index with a name prefixed with 'index'

2013-10-13 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey resolved CASSANDRA-6186.


Resolution: Cannot Reproduce

 Can't add index with a name prefixed with 'index'
 -

 Key: CASSANDRA-6186
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6186
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 2.0.2


 cqlsh code:
 {noformat}
 cqlsh> drop keyspace test_add_index;
 cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
 cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
 cqlsh> create index index1 on test_add_index.cf1 (c);
 Bad Request: Duplicate index name index1
 cqlsh> drop keyspace test_add_index;
 cqlsh> CREATE KEYSPACE test_add_index WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} ;
 cqlsh> create table test_add_index.cf1 (a text PRIMARY KEY, b text , c text );
 cqlsh> create index blah on test_add_index.cf1 (c);
 cqlsh>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6195) Typo in error msg in cqlsh: Bad Request: Only superusers are allowed to perfrom CREATE USER queries

2013-10-13 Thread Hari Sekhon (JIRA)
Hari Sekhon created CASSANDRA-6195:
--

 Summary: Typo in error msg in cqlsh: Bad Request: Only superusers 
are allowed to perfrom CREATE USER queries
 Key: CASSANDRA-6195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Hari Sekhon
Priority: Trivial


Typo in error message "perfrom" instead of "perform":

cqlsh
Connected to MyCluster1 at x.x.x.x:9160.
[cqlsh 4.0.1 | Cassandra 2.0.1 | CQL spec 3.0.0 | Thrift protocol 19.37.0]
Use HELP for help.
cqlsh> create user hari with password 'mypass';
Bad Request: Only superusers are allowed to perfrom CREATE USER queries



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Steven Lowenthal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793799#comment-13793799
 ] 

Steven Lowenthal commented on CASSANDRA-6190:
-

I've even tried it by explicitly setting JAVA_HOME to point at my various 
javas.  Remember - if we aren't using Java 7, we don't fall into the code that 
adds that parameter.   Java 7 gets put in the system.log.

 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option.  7u40 on Macintosh works correctly.  If I use the tarball 7u40 
 version of 7, we encounter the error below. I tried 7u25 (the previous 
 release) and it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-13 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-6196:
-

 Summary: Add compaction, compression to cqlsh tab completion for 
CREATE TABLE
 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.2






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables

2013-10-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793816#comment-13793816
 ] 

Jonathan Ellis commented on CASSANDRA-6191:
---

You're automatically upgraded to 160MB unless you explicitly set it to something 
else.

 Memory exhaustion with large number of compressed SSTables
 --

 Key: CASSANDRA-6191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: Debian 7.1
 Java: Oracle 1.7.0_25
 Cassandra: 1.2.10
 Memory: 24GB
 Heap: 8GB
Reporter: Tomas Salfischberger

 Not sure "bug" is the right description, because I can't say for sure that 
 the large number of SSTables is the cause of the memory issues. I'll share my 
 research so far:
 Under high read-load with a very large number of compressed SSTables (caused 
 by the initial default 5mb sstable_size in LCS) it seems memory is exhausted, 
 without any room for GC to fix this. It tries to GC but doesn't reclaim much.
 The node first hits the emergency valves flushing all memtables, then 
 reducing caches. And finally logs 0.99+ heap usages and hangs with GC failure 
 or crashes with OutOfMemoryError.
 I've taken a heapdump and started analysis to find out what's wrong. The 
 memory seems to be used by the byte[] backing the HeapByteBuffer in the 
 compressed field of 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are 
 generally 65536 bytes in size, matching the block-size of the compression.
 Looking further in the heap-dump I can see that these readers are part of the 
 pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile. Which is 
 linked to the dfile field of org.apache.cassandra.io.sstable.SSTableReader. 
 The dump-file lists 45248 instances of CompressedRandomAccessReader.
 Is this intended to go this way? Is there a leak somewhere? Or should there 
 be an alternative strategy and/or warning for cases where a node is trying to 
 read far too many SSTables?
 EDIT:
 Searching through the code I found that PoolingSegmentedFile keeps a pool of 
 RandomAccessReaders for re-use, while the CompressedRandomAccessReader 
 allocates a ByteBuffer in its constructor and (to make things worse) 
 enlarges it if it's reading a large chunk. This (sometimes enlarged) 
 ByteBuffer is then kept alive because it becomes part of the 
 CompressedRandomAccessReader which is in turn kept alive as part of the pool 
 in the PoolingSegmentedFile.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[3/5] git commit: typos

2013-10-13 Thread brandonwilliams
typos


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1353e0a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1353e0a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1353e0a3

Branch: refs/heads/trunk
Commit: 1353e0a3719a4dc30b7a59f468a58340416cdca2
Parents: 2531424
Author: Brandon Williams brandonwilli...@apache.org
Authored: Sun Oct 13 17:52:28 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Sun Oct 13 17:53:10 2013 -0500

--
 .../org/apache/cassandra/cql3/statements/CreateUserStatement.java  | 2 +-
 .../org/apache/cassandra/cql3/statements/DropUserStatement.java| 2 +-
 src/java/org/apache/cassandra/service/StorageProxy.java| 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1353e0a3/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
index df3a5e7..a82b38d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
@@ -57,7 +57,7 @@ public class CreateUserStatement extends 
AuthenticationStatement
 public void checkAccess(ClientState state) throws UnauthorizedException
 {
 if (!state.getUser().isSuper())
-            throw new UnauthorizedException("Only superusers are allowed to perfrom CREATE USER queries");
+            throw new UnauthorizedException("Only superusers are allowed to perform CREATE USER queries");
 }
 
 public ResultMessage execute(ClientState state) throws 
RequestValidationException, RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1353e0a3/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
index d55566c..0894db0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
@@ -52,7 +52,7 @@ public class DropUserStatement extends AuthenticationStatement
 public void checkAccess(ClientState state) throws UnauthorizedException
 {
 if (!state.getUser().isSuper())
-            throw new UnauthorizedException("Only superusers are allowed to perfrom DROP USER queries");
+            throw new UnauthorizedException("Only superusers are allowed to perform DROP USER queries");
 }
 
 public ResultMessage execute(ClientState state) throws 
RequestValidationException, RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1353e0a3/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index d932130..349e51b 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -740,7 +740,7 @@ public class StorageProxy implements StorageProxyMBean
 return responseHandler;
 }
 
-    // same as above except does not initiate writes (but does perfrom availability checks).
+    // same as above except does not initiate writes (but does perform availability checks).
     private static WriteResponseHandlerWrapper wrapResponseHandler(RowMutation mutation, ConsistencyLevel consistency_level, WriteType writeType)
     {
         AbstractReplicationStrategy rs = Keyspace.open(mutation.getKeyspaceName()).getReplicationStrategy();



[2/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-13 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c9a906c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c9a906c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c9a906c4

Branch: refs/heads/cassandra-2.0
Commit: c9a906c473d14f2c51a0541de4388e5dd64231ed
Parents: 3e7ebf8 4284d98
Author: Brandon Williams brandonwilli...@apache.org
Authored: Sun Oct 13 17:52:36 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Sun Oct 13 17:52:36 2013 -0500

--
 .../org/apache/cassandra/cql3/statements/CreateUserStatement.java  | 2 +-
 .../org/apache/cassandra/cql3/statements/DropUserStatement.java| 2 +-
 src/java/org/apache/cassandra/service/StorageProxy.java| 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9a906c4/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index cac38a3,f195285..259d2f5
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -736,14 -384,14 +736,14 @@@ public class StorageProxy implements St
  return responseHandler;
  }
  
- // same as above except does not initiate writes (but does perfrom availability checks).
+ // same as above except does not initiate writes (but does perform availability checks).
  private static WriteResponseHandlerWrapper wrapResponseHandler(RowMutation mutation, ConsistencyLevel consistency_level, WriteType writeType)
  {
 -    AbstractReplicationStrategy rs = Table.open(mutation.getTable()).getReplicationStrategy();
 -    String table = mutation.getTable();
 +    AbstractReplicationStrategy rs = Keyspace.open(mutation.getKeyspaceName()).getReplicationStrategy();
 +    String keyspaceName = mutation.getKeyspaceName();
      Token tk = StorageService.getPartitioner().getToken(mutation.key());
 -    List<InetAddress> naturalEndpoints = StorageService.instance.getNaturalEndpoints(table, tk);
 -    Collection<InetAddress> pendingEndpoints = StorageService.instance.getTokenMetadata().pendingEndpointsFor(tk, table);
 +    List<InetAddress> naturalEndpoints = StorageService.instance.getNaturalEndpoints(keyspaceName, tk);
 +    Collection<InetAddress> pendingEndpoints = StorageService.instance.getTokenMetadata().pendingEndpointsFor(tk, keyspaceName);
      AbstractWriteResponseHandler responseHandler = rs.getWriteResponseHandler(naturalEndpoints, pendingEndpoints, consistency_level, null, writeType);
      return new WriteResponseHandlerWrapper(responseHandler, mutation);
  }



[4/5] git commit: Merge branch 'cassandra-2.0' of https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.0

2013-10-13 Thread brandonwilliams
Merge branch 'cassandra-2.0' of 
https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7c65412
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7c65412
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7c65412

Branch: refs/heads/cassandra-2.0
Commit: a7c6541220f6e3673075284bd5e398caa9641a54
Parents: c9a906c 53b2d9d
Author: Brandon Williams brandonwilli...@apache.org
Authored: Sun Oct 13 17:53:17 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Sun Oct 13 17:53:17 2013 -0500

--
 CHANGES.txt |  2 ++
 NEWS.txt|  3 +-
 .../org/apache/cassandra/config/CFMetaData.java | 10 --
 .../apache/cassandra/db/ColumnFamilyStore.java  |  8 ++---
 .../AbstractSimplePerColumnSecondaryIndex.java  | 19 
 .../apache/cassandra/utils/StatusLogger.java|  2 +-
 .../org/apache/cassandra/stress/Session.java| 32 ++--
 7 files changed, 44 insertions(+), 32 deletions(-)
--




[1/5] git commit: typos

2013-10-13 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 7290abd19 - 1797b49e2
  refs/heads/cassandra-2.0 53b2d9d55 - a7c654122
  refs/heads/trunk 2531424c0 - 1353e0a37


typos


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4284d98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4284d98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4284d98a

Branch: refs/heads/cassandra-2.0
Commit: 4284d98ae4f2cc47af8fead1222ddeb81552b656
Parents: 639c01a
Author: Brandon Williams brandonwilli...@apache.org
Authored: Sun Oct 13 17:52:28 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Sun Oct 13 17:52:28 2013 -0500

--
 .../org/apache/cassandra/cql3/statements/CreateUserStatement.java  | 2 +-
 .../org/apache/cassandra/cql3/statements/DropUserStatement.java| 2 +-
 src/java/org/apache/cassandra/service/StorageProxy.java| 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4284d98a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
index df3a5e7..a82b38d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
@@ -57,7 +57,7 @@ public class CreateUserStatement extends 
AuthenticationStatement
 public void checkAccess(ClientState state) throws UnauthorizedException
 {
 if (!state.getUser().isSuper())
-            throw new UnauthorizedException("Only superusers are allowed to perfrom CREATE USER queries");
+            throw new UnauthorizedException("Only superusers are allowed to perform CREATE USER queries");
 }
 
 public ResultMessage execute(ClientState state) throws 
RequestValidationException, RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4284d98a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
index d55566c..0894db0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
@@ -52,7 +52,7 @@ public class DropUserStatement extends AuthenticationStatement
 public void checkAccess(ClientState state) throws UnauthorizedException
 {
 if (!state.getUser().isSuper())
-            throw new UnauthorizedException("Only superusers are allowed to perfrom DROP USER queries");
+            throw new UnauthorizedException("Only superusers are allowed to perform DROP USER queries");
 }
 
 public ResultMessage execute(ClientState state) throws 
RequestValidationException, RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4284d98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index cdb0bd6..f195285 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -384,7 +384,7 @@ public class StorageProxy implements StorageProxyMBean
 return responseHandler;
 }
 
-    // same as above except does not initiate writes (but does perfrom availability checks).
+    // same as above except does not initiate writes (but does perform availability checks).
     private static WriteResponseHandlerWrapper wrapResponseHandler(RowMutation mutation, ConsistencyLevel consistency_level, WriteType writeType)
     {
         AbstractReplicationStrategy rs = Table.open(mutation.getTable()).getReplicationStrategy();



[5/5] git commit: typos

2013-10-13 Thread brandonwilliams
typos


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1797b49e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1797b49e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1797b49e

Branch: refs/heads/cassandra-1.2
Commit: 1797b49e215b5710cda540d734a08429840ac788
Parents: 7290abd
Author: Brandon Williams brandonwilli...@apache.org
Authored: Sun Oct 13 17:52:28 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Sun Oct 13 17:54:17 2013 -0500

--
 .../org/apache/cassandra/cql3/statements/CreateUserStatement.java  | 2 +-
 .../org/apache/cassandra/cql3/statements/DropUserStatement.java| 2 +-
 src/java/org/apache/cassandra/service/StorageProxy.java| 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1797b49e/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
index df3a5e7..a82b38d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateUserStatement.java
@@ -57,7 +57,7 @@ public class CreateUserStatement extends 
AuthenticationStatement
 public void checkAccess(ClientState state) throws UnauthorizedException
 {
 if (!state.getUser().isSuper())
-            throw new UnauthorizedException("Only superusers are allowed to perfrom CREATE USER queries");
+            throw new UnauthorizedException("Only superusers are allowed to perform CREATE USER queries");
 }
 
 public ResultMessage execute(ClientState state) throws 
RequestValidationException, RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1797b49e/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
index d55566c..0894db0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DropUserStatement.java
@@ -52,7 +52,7 @@ public class DropUserStatement extends AuthenticationStatement
 public void checkAccess(ClientState state) throws UnauthorizedException
 {
 if (!state.getUser().isSuper())
-            throw new UnauthorizedException("Only superusers are allowed to perfrom DROP USER queries");
+            throw new UnauthorizedException("Only superusers are allowed to perform DROP USER queries");
 }
 
 public ResultMessage execute(ClientState state) throws 
RequestValidationException, RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1797b49e/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index cdb0bd6..f195285 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -384,7 +384,7 @@ public class StorageProxy implements StorageProxyMBean
 return responseHandler;
 }
 
-    // same as above except does not initiate writes (but does perfrom availability checks).
+    // same as above except does not initiate writes (but does perform availability checks).
     private static WriteResponseHandlerWrapper wrapResponseHandler(RowMutation mutation, ConsistencyLevel consistency_level, WriteType writeType)
     {
         AbstractReplicationStrategy rs = Table.open(mutation.getTable()).getReplicationStrategy();



[jira] [Resolved] (CASSANDRA-6195) Typo in error msg in cqlsh: Bad Request: Only superusers are allowed to perfrom CREATE USER queries

2013-10-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6195.
-

   Resolution: Fixed
Fix Version/s: 2.0.2
   1.2.11
 Assignee: Brandon Williams

Fixed in 4284d98

 Typo in error msg in cqlsh: Bad Request: Only superusers are allowed to 
 perfrom CREATE USER queries
 ---

 Key: CASSANDRA-6195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Hari Sekhon
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.2.11, 2.0.2


 Typo in error message "perfrom" instead of "perform":
 cqlsh
 Connected to MyCluster1 at x.x.x.x:9160.
 [cqlsh 4.0.1 | Cassandra 2.0.1 | CQL spec 3.0.0 | Thrift protocol 19.37.0]
 Use HELP for help.
 cqlsh> create user hari with password 'mypass';
 Bad Request: Only superusers are allowed to perfrom CREATE USER queries



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6190) Cassandra 2.0 won't start up on some platforms with Java 7u40

2013-10-13 Thread Steven Lowenthal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793875#comment-13793875
 ] 

Steven Lowenthal commented on CASSANDRA-6190:
-

Here is a simple test.  Note the FileNotFoundException is good - I ran C* with 
the wrong permissions, so that is the expected result.

Unix Info:
Linux ubuntu 3.8.0-19-generic #29-Ubuntu SMP Wed Apr 17 18:19:42 UTC 2013 i686 
i686 i686 GNU/Linux


ubuntu@ubuntu:~$ export JAVA_HOME=~/Downloads/jre1.7.0_40/
ubuntu@ubuntu:~$ cassandra
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M 
-Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

ubuntu@ubuntu:~$ export JAVA_HOME=~/Downloads/jre1.7.0_25/
ubuntu@ubuntu:~$ cassandra
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M 
-Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
ubuntu@ubuntu:~$ log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /var/log/cassandra/system.log (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(Unknown Source)


 Cassandra 2.0 won't start up on some platforms with Java 7u40
 -

 Key: CASSANDRA-6190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6190
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Ubuntu 13.04 32- and 64-bit  JDK 7u40  (tried JRE 7u25)
Reporter: Steven Lowenthal

 Java 7u40 on some platforms does not recognize the -XX:+UseCondCardMark JVM 
 option.  7u40 on Macintosh works correctly.  If I use the tarball 7u40 
 version of 7, we encounter the error below. I tried 7u25 (the previous 
 release) and it functioned correctly.
 ubuntu@ubuntu:~$ Unrecognized VM option 'UseCondCardMark'
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6180) NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null values

2013-10-13 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793923#comment-13793923
 ] 

Alex Liu commented on CASSANDRA-6180:
-

The Thrift API TBinaryProtocol.writeBinary method 

{code}
public void writeBinary(ByteBuffer buffer) throws TException
{
writeI32(buffer.remaining());

if (buffer.hasArray())
{
trans_.write(buffer.array(), buffer.position() + 
buffer.arrayOffset(), buffer.remaining());
}
else
{
byte[] bytes = new byte[buffer.remaining()];

int j = 0;
        for (int i = buffer.position(); i < buffer.limit(); i++)
{
bytes[j++] = buffer.get(i);
}

trans_.write(bytes);
}
}
{code}

throws an NPE if the variable is null instead of a ByteBuffer wrapping an empty 
byte array. It looks like a bug in the Thrift execute_prepared_cql3_query path, 
which can't handle a null variable.

CASSANDRA-5081 may only work for the binary protocol.
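
For illustration only, here is a sketch of the kind of guard the attached patch 
implies, with a hypothetical helper name (the actual patch should be consulted 
for the real change):
{code}
import java.nio.ByteBuffer;

// Hypothetical illustration, not the attached patch itself: substitute an
// empty buffer for null bind variables before handing them to Thrift, since
// TBinaryProtocol.writeBinary dereferences the buffer unconditionally.
class NullSafeBindings
{
    private static final ByteBuffer EMPTY = ByteBuffer.wrap(new byte[0]);

    static ByteBuffer nullToEmpty(ByteBuffer value)
    {
        return value == null ? EMPTY : value;
    }
}
{code}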

 NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null 
 values
 

 Key: CASSANDRA-6180
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6180
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: Pig, CqlStorage
Reporter: Henning Kropp
 Attachments: null_test.pig, patch.txt, test_null.cql, test_null_data


 I encountered an issue with {{CqlStorage}} and its handling of null 
 values. The {{CqlRecordWriter}} throws an NPE when a value is null. I found a 
 related ticket, CASSANDRA-5885, and applied the fix stated there to 
 {{AbstractCassandraStorage}}.
 Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
 {{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.
 This issue can be reproduced with the attached files: {{test_null.cql}}, 
 {{test_null_data}}, {{null_test.pig}}
 A fix can be found in the attached patch.
 {code}
 java.io.IOException: java.lang.NullPointerException
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
   at 
 org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-13 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6196:
---

Attachment: cassandra-2.0-6196.patch

The completion for CREATE TABLE options is already there, but doesn't work 
because there are duplicate completers for Properties related stuff.

Attached the patch to remove static Keyspace-only completers, so the dynamic 
ones (keyspace/columnfamily) will work

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.2

 Attachments: cassandra-2.0-6196.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-13 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793926#comment-13793926
 ] 

Mikhail Stepura edited comment on CASSANDRA-6196 at 10/14/13 5:48 AM:
--

The completion for CREATE TABLE options is already there, but doesn't work 
because there are duplicate completers for Properties related stuff.

Attached the patch to remove static Keyspace-only completers, so the dynamic 
ones (keyspace/columnfamily) will work


was (Author: mishail):
The completion for CREATE TABLE options is already there, but doesn't work 
because there are dublicate completers for Properties related stuff.

Attached the patch to remove static Keyspace-only completers, so the dynamic 
ones (keyspace/columnfamily) will work

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.2

 Attachments: cassandra-2.0-6196.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6180) NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null values

2013-10-13 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13793932#comment-13793932
 ] 

Alex Liu commented on CASSANDRA-6180:
-

If we do need to use ByteBuffer.wrap(new byte[0]) to fix the issue, this should 
only apply to CqlStorage; CassandraStorage should stay the old way.

 NPE in CqlRecordWriter: Related to AbstractCassandraStorage handling null 
 values
 

 Key: CASSANDRA-6180
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6180
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: Pig, CqlStorage
Reporter: Henning Kropp
 Attachments: null_test.pig, patch.txt, test_null.cql, test_null_data


 I encountered an issue with {{CqlStorage}} and its handling of null 
 values. The {{CqlRecordWriter}} throws an NPE when a value is null. I found a 
 related ticket, CASSANDRA-5885, and applied the fix stated there to 
 {{AbstractCassandraStorage}}.
 Instead of converting {{null}} values to {{ByteBuffer.wrap(new byte[0])}}, 
 {{AbstractCassandraStorage}} returns {{(ByteBuffer)null}}.
 This issue can be reproduced with the attached files: {{test_null.cql}}, 
 {{test_null_data}}, {{null_test.pig}}
 A fix can be found in the attached patch.
 {code}
 java.io.IOException: java.lang.NullPointerException
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:194)
   at 
 org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_args.write(Cassandra.java:41253)
   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.send_execute_prepared_cql3_query(Cassandra.java:1683)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1673)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)