[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-12 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570877#comment-15570877
 ] 

Jeff Jirsa commented on CASSANDRA-12296:


{quote}
you still get an incorrect error message if you try and rebuild from the same 
DC using NTS from a DC that doesn't contain the keyspace. When I was testing 
this case apparently the error got masked by another keyspace which makes me 
think there may be another bug here 
{quote}

Glad you found a way to trigger it with NTS (good work there). I'd personally 
vote for a single, more generic error rather than a different error message for 
each possible failure situation. [~brandon.williams] - you deal with a lot of 
end users, are you good with what he proposed ({{Ensure this keyspace has 
replicas in the source datacentre}})? 

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12784) ReplicationAwareTokenAllocatorTest times out almost every time for 3.X and trunk

2016-10-12 Thread Stefania (JIRA)
Stefania created CASSANDRA-12784:


 Summary: ReplicationAwareTokenAllocatorTest times out almost every 
time for 3.X and trunk
 Key: CASSANDRA-12784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12784
 Project: Cassandra
  Issue Type: Bug
Reporter: Stefania
Assignee: Stefania
 Fix For: 3.x


Example failure: 

http://cassci.datastax.com/view/cassandra-3.X/job/cassandra-3.X_testall/lastCompletedBuild/testReport/org.apache.cassandra.dht.tokenallocator/ReplicationAwareTokenAllocatorTest/testNewClusterWithMurmur3Partitioner/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-10-12 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11534:
---------------------------------
       Resolution: Fixed
    Fix Version/s:     (was: 3.x)
                   3.10
           Status: Resolved  (was: Ready to Commit)

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.10
>
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+-------------------------------------
>   1 |    {1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+------------------------------------------------------------------------
>   1 |    OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-10-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570478#comment-15570478
 ] 

Stefania commented on CASSANDRA-11534:
--

Thank you for the review. CI results are clean, committed to 3.X as 
a0419085d58403557c81f4c9b784aaa7cf019314 and merged into trunk.

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.10
>
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+-------------------------------------
>   1 |    {1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+------------------------------------------------------------------------
>   1 |    OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: cqlsh fails to format collections when using aliases

2016-10-12 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X 541d83707 -> a0419085d
  refs/heads/trunk 9f75e7068 -> 2ab4666cd


cqlsh fails to format collections when using aliases

Patch by Stefania Alborghetti; reviewed by Robert Stupp for CASSANDRA-11534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a0419085
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a0419085
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a0419085

Branch: refs/heads/cassandra-3.X
Commit: a0419085d58403557c81f4c9b784aaa7cf019314
Parents: 541d837
Author: Stefania Alborghetti 
Authored: Mon Sep 12 15:36:43 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Oct 13 09:26:16 2016 +0800

--
 CHANGES.txt |   1 +
 bin/cqlsh.py|  30 +--
 ...driver-internal-only-3.5.0.post0-d8d0456.zip | Bin 245487 -> 0 bytes
 ...driver-internal-only-3.7.0.post0-70f41b5.zip | Bin 0 -> 252036 bytes
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 ++
 5 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0419085/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1ade69f..f0df0e6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * cqlsh fails to format collections when using aliases (CASSANDRA-11534)
  * Check for hash conflicts in prepared statements (CASSANDRA-12733)
  * Exit query parsing upon first error (CASSANDRA-12598)
  * Fix cassandra-stress to use single seed in UUID generation (CASSANDRA-12729)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0419085/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 7f8609c..e741752 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -151,6 +151,7 @@ except ImportError, e:
 
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.cluster import Cluster
+from cassandra.cqltypes import cql_typename
 from cassandra.marshal import int64_unpack
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
@@ -1289,7 +1290,7 @@ class Shell(cmd.Cmd):
 elif result:
 # CAS INSERT/UPDATE
 self.writeresult("")
-self.print_static_result(result.column_names, list(result), 
self.parse_for_update_meta(statement.query_string))
+self.print_static_result(result, 
self.parse_for_update_meta(statement.query_string))
 self.flush_output()
 return True, future
 
@@ -1300,19 +1301,17 @@ class Shell(cmd.Cmd):
 if result.has_more_pages and self.tty:
 num_rows = 0
 while True:
-page = result.current_rows
-if page:
-num_rows += len(page)
-self.print_static_result(result.column_names, page, 
table_meta)
+if result.current_rows:
+num_rows += len(result.current_rows)
+self.print_static_result(result, table_meta)
 if result.has_more_pages:
 raw_input("---MORE---")
 result.fetch_next_page()
 else:
 break
 else:
-rows = list(result)
-num_rows = len(rows)
-self.print_static_result(result.column_names, rows, table_meta)
+num_rows = len(result.current_rows)
+self.print_static_result(result, table_meta)
 self.writeresult("(%d rows)" % num_rows)
 
 if self.decoding_errors:
@@ -1322,24 +1321,23 @@ class Shell(cmd.Cmd):
 self.writeresult('%d more decoding errors suppressed.'
  % (len(self.decoding_errors) - 2), color=RED)
 
-def print_static_result(self, column_names, rows, table_meta):
-if not column_names and not table_meta:
+def print_static_result(self, result, table_meta):
+if not result.column_names and not table_meta:
 return
 
-column_names = column_names or table_meta.columns.keys()
+column_names = result.column_names or table_meta.columns.keys()
 formatted_names = [self.myformat_colname(name, table_meta) for name in 
column_names]
-if not rows:
+if not result.current_rows:
 # print header only
 self.print_formatted_result(formatted_names, None)
 return
 
 cql_types = []
-if table_meta:
+if result.column_types:
 

[2/3] cassandra git commit: cqlsh fails to format collections when using aliases

2016-10-12 Thread stefania
cqlsh fails to format collections when using aliases

Patch by Stefania Alborghetti; reviewed by Robert Stupp for CASSANDRA-11534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a0419085
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a0419085
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a0419085

Branch: refs/heads/trunk
Commit: a0419085d58403557c81f4c9b784aaa7cf019314
Parents: 541d837
Author: Stefania Alborghetti 
Authored: Mon Sep 12 15:36:43 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Oct 13 09:26:16 2016 +0800

--
 CHANGES.txt |   1 +
 bin/cqlsh.py|  30 +--
 ...driver-internal-only-3.5.0.post0-d8d0456.zip | Bin 245487 -> 0 bytes
 ...driver-internal-only-3.7.0.post0-70f41b5.zip | Bin 0 -> 252036 bytes
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 ++
 5 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0419085/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1ade69f..f0df0e6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * cqlsh fails to format collections when using aliases (CASSANDRA-11534)
  * Check for hash conflicts in prepared statements (CASSANDRA-12733)
  * Exit query parsing upon first error (CASSANDRA-12598)
  * Fix cassandra-stress to use single seed in UUID generation (CASSANDRA-12729)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0419085/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 7f8609c..e741752 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -151,6 +151,7 @@ except ImportError, e:
 
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.cluster import Cluster
+from cassandra.cqltypes import cql_typename
 from cassandra.marshal import int64_unpack
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
@@ -1289,7 +1290,7 @@ class Shell(cmd.Cmd):
 elif result:
 # CAS INSERT/UPDATE
 self.writeresult("")
-self.print_static_result(result.column_names, list(result), 
self.parse_for_update_meta(statement.query_string))
+self.print_static_result(result, 
self.parse_for_update_meta(statement.query_string))
 self.flush_output()
 return True, future
 
@@ -1300,19 +1301,17 @@ class Shell(cmd.Cmd):
 if result.has_more_pages and self.tty:
 num_rows = 0
 while True:
-page = result.current_rows
-if page:
-num_rows += len(page)
-self.print_static_result(result.column_names, page, 
table_meta)
+if result.current_rows:
+num_rows += len(result.current_rows)
+self.print_static_result(result, table_meta)
 if result.has_more_pages:
 raw_input("---MORE---")
 result.fetch_next_page()
 else:
 break
 else:
-rows = list(result)
-num_rows = len(rows)
-self.print_static_result(result.column_names, rows, table_meta)
+num_rows = len(result.current_rows)
+self.print_static_result(result, table_meta)
 self.writeresult("(%d rows)" % num_rows)
 
 if self.decoding_errors:
@@ -1322,24 +1321,23 @@ class Shell(cmd.Cmd):
 self.writeresult('%d more decoding errors suppressed.'
  % (len(self.decoding_errors) - 2), color=RED)
 
-def print_static_result(self, column_names, rows, table_meta):
-if not column_names and not table_meta:
+def print_static_result(self, result, table_meta):
+if not result.column_names and not table_meta:
 return
 
-column_names = column_names or table_meta.columns.keys()
+column_names = result.column_names or table_meta.columns.keys()
 formatted_names = [self.myformat_colname(name, table_meta) for name in 
column_names]
-if not rows:
+if not result.current_rows:
 # print header only
 self.print_formatted_result(formatted_names, None)
 return
 
 cql_types = []
-if table_meta:
+if result.column_types:
 ks_meta = self.conn.metadata.keyspaces[table_meta.keyspace_name]
-cql_types = [CqlType(table_meta.columns[c].cql_type, ks_meta)
-  

[3/3] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-12 Thread stefania
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ab4666c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ab4666c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ab4666c

Branch: refs/heads/trunk
Commit: 2ab4666cd396aa58d313f16f821d5b81025a6a7d
Parents: 9f75e70 a041908
Author: Stefania Alborghetti 
Authored: Thu Oct 13 09:26:53 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Oct 13 09:26:53 2016 +0800

--
 CHANGES.txt |   1 +
 bin/cqlsh.py|  30 +--
 ...driver-internal-only-3.7.0.post0-70f41b5.zip | Bin 0 -> 252036 bytes
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 ++
 4 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ab4666c/CHANGES.txt
--
diff --cc CHANGES.txt
index f001d57,f0df0e6..022166a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,11 -1,5 +1,12 @@@
 +4.0
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 +
 +
  3.10
+  * cqlsh fails to format collections when using aliases (CASSANDRA-11534)
   * Check for hash conflicts in prepared statements (CASSANDRA-12733)
   * Exit query parsing upon first error (CASSANDRA-12598)
   * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)



[jira] [Commented] (CASSANDRA-5988) Make hint TTL customizable

2016-10-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570311#comment-15570311
 ] 

sankalp kohli commented on CASSANDRA-5988:
--

Without a hint TTL, replaying hints older than GC grace can bring deleted data 
back, right? If that protection is not there in 3.0, shouldn't this be fixed as 
a Major, if not a blocker? 

> Make hint TTL customizable
> --
>
> Key: CASSANDRA-5988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5988
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Oleg Kibirev
>Assignee: Vishy Kasar
>  Labels: patch
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 5988.txt
>
>
> Currently time to live for stored hints is hardcoded to be gc_grace_seconds. 
> This causes problems for applications using backdated deletes as a form of 
> optimistic locking. Hints for updates made to the same data on which delete 
> was attempted can persist for days, making it impossible to determine if 
> delete succeeded by doing read(ALL) after a reasonable delay. We need a way 
> to explicitly configure hint TTL, either through schema parameter or through 
> a yaml file.
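The invariant being discussed is small enough to sketch; the class and property name below are 
illustrative assumptions (not necessarily what the attached 5988.txt does), but they show how a 
configurable cap can coexist with the gc_grace_seconds safety bound:
{code}
// Sketch only: a hint should never outlive the gc_grace_seconds of the data it
// carries (replaying it later can resurrect deleted data), but an operator may
// cap it lower, e.g. for the backdated-delete use case in the description.
public final class HintTtl
{
    // Hypothetical override knob; defaults to "no extra cap".
    private static final int MAX_HINT_TTL_SECONDS =
            Integer.getInteger("cassandra.maxHintTTL", Integer.MAX_VALUE);

    public static int hintTtlSeconds(int smallestGcGraceSeconds)
    {
        return Math.min(smallestGcGraceSeconds, MAX_HINT_TTL_SECONDS);
    }
}
{code}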



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12182) redundant StatusLogger print out when both dropped message and long GC event happen

2016-10-12 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570201#comment-15570201
 ] 

Kurt Greaves commented on CASSANDRA-12182:
--

Sorry, that was my misunderstanding - I didn't realise you were just targeting 
"duplicate" entries. In that case it makes sense. I suppose a cooldown period 
for StatusLogger messages is what you're looking for, then? Restricting the 
output to something like once every 10 seconds or so would seem reasonable. 
Just my 2c.
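A cooldown like that is cheap to implement; a minimal sketch (the class and names are 
illustrative, not the actual StatusLogger code, and the 10-second window is just the value 
floated above):
{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a once-per-window guard: several triggers close together (dropped
// messages plus a long GC) then produce a single status dump instead of several.
public final class StatusLoggerCooldown
{
    private static final long COOLDOWN_NANOS = TimeUnit.SECONDS.toNanos(10);
    private static final AtomicLong lastLogNanos =
            new AtomicLong(System.nanoTime() - COOLDOWN_NANOS);

    public static boolean shouldLog()
    {
        long now = System.nanoTime();
        long last = lastLogNanos.get();
        // Only the thread that wins the CAS inside an expired window logs.
        return now - last >= COOLDOWN_NANOS && lastLogNanos.compareAndSet(last, now);
    }
}
{code}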

> redundant StatusLogger print out when both dropped message and long GC event 
> happen
> ---
>
> Key: CASSANDRA-12182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wei Deng
>Priority: Minor
>  Labels: lhf
>
> I was stress testing a C* 3.0 environment and it appears that when the CPU is 
> running low, HINT and MUTATION messages will start to get dropped, and the GC 
> thread can also get some really long-running GC, and I'd get some redundant 
> log entries in system.log like the following:
> {noformat}
> WARN  [Service Thread] 2016-07-12 22:48:45,748  GCInspector.java:282 - G1 
> Young Generation GC in 522ms.  G1 Eden Space: 68157440 -> 0; G1 Old Gen: 
> 3376113224 -> 3468387912; G1 Survivor Space: 24117248 -> 0; 
> INFO  [Service Thread] 2016-07-12 22:48:45,763  StatusLogger.java:52 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,775  MessagingService.java:983 - 
> MUTATION messages were dropped in last 5000 ms: 419 for internal timeout and 
> 0 for cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  MessagingService.java:983 - 
> HINT messages were dropped in last 5000 ms: 89 for internal timeout and 0 for 
> cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  StatusLogger.java:52 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - 
> MutationStage32  4194   32997234 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - 
> ViewMutationStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,799  StatusLogger.java:56 - 
> ReadStage 0 0940 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,800  StatusLogger.java:56 - 
> MutationStage32  4363   32997333 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - 
> ViewMutationStage 0 0  0 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - 
> ReadStage 0 0940 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - 
> RequestResponseStage  0 0   11094437 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - 
> ReadRepairStage   0 0  5 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,803  StatusLogger.java:56 - 
> RequestResponseStage  4 0   11094509 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,807  StatusLogger.java:56 - 
> ReadRepairStage   0 0  5 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,808  StatusLogger.java:56 - 
> CounterMutationStage  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,809  StatusLogger.java:56 - 
> MiscStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,809  StatusLogger.java:56 - 
> CompactionExecutor262   1234 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,810  StatusLogger.java:56 - 
> MemtableReclaimMemory 0 0 79 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,810  StatusLogger.java:56 - 
> PendingRangeCalculator0 0  3 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,819  StatusLogger.java:56 - 
> GossipStage   0 0   5214 0
>  0
> INFO  

[jira] [Commented] (CASSANDRA-12182) redundant StatusLogger print out when both dropped message and long GC event happen

2016-10-12 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570188#comment-15570188
 ] 

Wei Deng commented on CASSANDRA-12182:
--

[~KurtG] Personally I don't have much of an issue with the current logging level 
for StatusLogger. When the node is suffering, either because it's dropping 
non-TRACE messages or because it's exceeding the 1sec gc_warn_threshold_in_ms 
threshold, I'd like StatusLogger to give me more information for post-mortem 
analysis without having to change to DEBUG level to see it. Note that 
"post-mortem" is the key here: you won't know when this will happen, and if you 
have to switch to DEBUG level to see the message, it will likely be too late.

bq. I think replacing the log messages with StatusLogger is busy would somewhat 
defeat the purpose.
Can you elaborate on why you think avoiding duplicate StatusLogger printing 
would "defeat the purpose"? StatusLogger usually only takes 100-200ms to finish 
printing its state. If another StatusLogger run gets triggered and prints again 
while the first is still printing, it mostly just adds duplicate information, 
which crowds the log without adding any more useful insight.

> redundant StatusLogger print out when both dropped message and long GC event 
> happen
> ---
>
> Key: CASSANDRA-12182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wei Deng
>Priority: Minor
>  Labels: lhf
>
> I was stress testing a C* 3.0 environment and it appears that when the CPU is 
> running low, HINT and MUTATION messages will start to get dropped, and the GC 
> thread can also get some really long-running GC, and I'd get some redundant 
> log entries in system.log like the following:
> {noformat}
> WARN  [Service Thread] 2016-07-12 22:48:45,748  GCInspector.java:282 - G1 
> Young Generation GC in 522ms.  G1 Eden Space: 68157440 -> 0; G1 Old Gen: 
> 3376113224 -> 3468387912; G1 Survivor Space: 24117248 -> 0; 
> INFO  [Service Thread] 2016-07-12 22:48:45,763  StatusLogger.java:52 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,775  MessagingService.java:983 - 
> MUTATION messages were dropped in last 5000 ms: 419 for internal timeout and 
> 0 for cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  MessagingService.java:983 - 
> HINT messages were dropped in last 5000 ms: 89 for internal timeout and 0 for 
> cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  StatusLogger.java:52 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - 
> MutationStage32  4194   32997234 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - 
> ViewMutationStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,799  StatusLogger.java:56 - 
> ReadStage 0 0940 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,800  StatusLogger.java:56 - 
> MutationStage32  4363   32997333 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - 
> ViewMutationStage 0 0  0 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - 
> ReadStage 0 0940 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - 
> RequestResponseStage  0 0   11094437 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - 
> ReadRepairStage   0 0  5 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,803  StatusLogger.java:56 - 
> RequestResponseStage  4 0   11094509 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,807  StatusLogger.java:56 - 
> ReadRepairStage   0 0  5 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,808  StatusLogger.java:56 - 
> CounterMutationStage  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,809  StatusLogger.java:56 - 
> MiscStage 0 0  0 0
>  0
> INFO  

[jira] [Comment Edited] (CASSANDRA-12182) redundant StatusLogger print out when both dropped message and long GC event happen

2016-10-12 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567694#comment-15567694
 ] 

Kurt Greaves edited comment on CASSANDRA-12182 at 10/12/16 10:42 PM:
-

You can set the logging level for the statuslogger to warn to avoid those error 
messages.
{code}nodetool setlogginglevel org.apache.cassandra.utils.StatusLogger 
WARN{code}

or you can set the equivalent in logback.xml. Maybe INFO is noisy but I think 
replacing the log messages with StatusLogger is busy would somewhat defeat the 
purpose. 

-edit: Having said that, maybe INFO is too high and this detail should drop to 
DEBUG. From experience managing a lot of clusters, the GCInspector message 
itself is the useful output in this case. StatusLogger is also triggered by 
other events, but those are all paired with other log messages which, as in 
this case, are probably more useful than the StatusLogger output. StatusLogger 
seems to me to be for much more specific debugging of a problem.


was (Author: kurtg):
You can set the logging level for the statuslogger to warn to avoid those error 
messages.
{code}nodetool setlogginglevel org.apache.cassandra.utils.StatusLogger 
WARN{code}

or you can set the equivalent in logback.xml. Maybe INFO is noisy but I think 
replacing the log messages with StatusLogger is busy would somewhat defeat the 
purpose.

> redundant StatusLogger print out when both dropped message and long GC event 
> happen
> ---
>
> Key: CASSANDRA-12182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wei Deng
>Priority: Minor
>  Labels: lhf
>
> I was stress testing a C* 3.0 environment and it appears that when the CPU is 
> running low, HINT and MUTATION messages will start to get dropped, and the GC 
> thread can also get some really long-running GC, and I'd get some redundant 
> log entries in system.log like the following:
> {noformat}
> WARN  [Service Thread] 2016-07-12 22:48:45,748  GCInspector.java:282 - G1 
> Young Generation GC in 522ms.  G1 Eden Space: 68157440 -> 0; G1 Old Gen: 
> 3376113224 -> 3468387912; G1 Survivor Space: 24117248 -> 0; 
> INFO  [Service Thread] 2016-07-12 22:48:45,763  StatusLogger.java:52 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,775  MessagingService.java:983 - 
> MUTATION messages were dropped in last 5000 ms: 419 for internal timeout and 
> 0 for cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  MessagingService.java:983 - 
> HINT messages were dropped in last 5000 ms: 89 for internal timeout and 0 for 
> cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  StatusLogger.java:52 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - 
> MutationStage32  4194   32997234 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - 
> ViewMutationStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,799  StatusLogger.java:56 - 
> ReadStage 0 0940 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,800  StatusLogger.java:56 - 
> MutationStage32  4363   32997333 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - 
> ViewMutationStage 0 0  0 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - 
> ReadStage 0 0940 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - 
> RequestResponseStage  0 0   11094437 0
>  0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - 
> ReadRepairStage   0 0  5 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,803  StatusLogger.java:56 - 
> RequestResponseStage  4 0   11094509 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,807  StatusLogger.java:56 - 
> ReadRepairStage   0 0  5 0
>  0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,808  StatusLogger.java:56 - 
> CounterMutationStage  0 0  0 0   

[jira] [Comment Edited] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-12 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570067#comment-15570067
 ] 

Kurt Greaves edited comment on CASSANDRA-12296 at 10/12/16 10:28 PM:
-

Don't insert your foot there too soon. Bootstrap is fine; however, you still get 
an incorrect error message if you try to rebuild, using NTS, from a DC that 
doesn't contain the keyspace. When I was testing this case the error apparently 
got masked by another keyspace, which makes me think there may be another bug 
here (I will investigate and write up another JIRA if I manage to figure it 
out), i.e.:
{code}
 keyspace_name | durable_writes | replication
---------------+----------------+--------------------------------------------------------------------
          test |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc1': '1'}

ccm node2 nodetool rebuild test dc2
 nodetool: Unable to find sufficient sources for streaming range 
(-1977532406384460074,-1976661853362798275] in keyspace test with RF=1. 
Consider using NetworkTopologyStrategy for this keyspace.
{code}

In this case, node2 is in dc1, and although test isn't replicated to dc2 it 
still gets to that error message.

I suppose alternatives could be:
1. Make the error more generic (e.g: 'Ensure this keyspace has replicas in the 
source datacentre')
2. Catch this case separately and warn that you can't rebuild from a DC that 
has no replicas.

Something more generic is probably OK; with a bit of thought the user should be 
able to work out how to deal with the issue, e.g. increase the RF or change to 
NTS - as long as we can be sure this only occurs from rebuilds. I can't see any 
path that leads here from anywhere other than a rebuild, however I haven't 
completely ruled out bootstrap yet.

What do you think [~jjirsa]?


was (Author: kurtg):
Don't insert your foot there too soon. Bootstrap is fine however you still get 
an incorrect error message if you try and rebuild from the same DC using NTS 
from a DC that doesn't contain the keyspace. When I was testing this case 
apparently the error got masked by another keyspace which makes me think there 
may be another bug here (details for which are below) i.e:
{code}
 keyspace_name | durable_writes | replication
---------------+----------------+--------------------------------------------------------------------
          test |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc1': '1'}

ccm node2 nodetool rebuild test dc2
 nodetool: Unable to find sufficient sources for streaming range 
(-1977532406384460074,-1976661853362798275] in keyspace test with RF=1. 
Consider using NetworkTopologyStrategy for this keyspace.
{code}

In this case, node2 is in dc1, and although test isn't replicated to dc2 it 
still gets to that error message.

I suppose alternatives could be:
1. Make the error more generic (e.g: 'Ensure this keyspace has replicas in the 
source datacentre')
2. Catch this case separately and warn that you can't rebuild from a DC that 
has no replicas.

Something more generic is probably OK and with a bit of thought the user should 
come to a conclusion on how to deal with the issue like increase RF or change 
to NTS - As long as we can be sure this only occurs from rebuilds. I can't see 
any path that leads to here from anywhere other than a rebuild however I 
haven't completely ruled out bootstrap yet.

What do you think [~jjirsa]?

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up 

[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-12 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570067#comment-15570067
 ] 

Kurt Greaves commented on CASSANDRA-12296:
--

Don't insert your foot there too soon. Bootstrap is fine; however, you still get 
an incorrect error message if you try to rebuild, using NTS, from a DC that 
doesn't contain the keyspace. When I was testing this case the error apparently 
got masked by another keyspace, which makes me think there may be another bug 
here (details for which are below), i.e.:
{code}
 keyspace_name | durable_writes | replication
---------------+----------------+--------------------------------------------------------------------
          test |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc1': '1'}

ccm node2 nodetool rebuild test dc2
 nodetool: Unable to find sufficient sources for streaming range 
(-1977532406384460074,-1976661853362798275] in keyspace test with RF=1. 
Consider using NetworkTopologyStrategy for this keyspace.
{code}

In this case, node2 is in dc1, and although test isn't replicated to dc2 it 
still gets to that error message.

I suppose alternatives could be:
1. Make the error more generic (e.g: 'Ensure this keyspace has replicas in the 
source datacentre')
2. Catch this case separately and warn that you can't rebuild from a DC that 
has no replicas.

Something more generic is probably OK; with a bit of thought the user should be 
able to work out how to deal with the issue, e.g. increase the RF or change to 
NTS - as long as we can be sure this only occurs from rebuilds. I can't see any 
path that leads here from anywhere other than a rebuild, however I haven't 
completely ruled out bootstrap yet.

What do you think [~jjirsa]?

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12778) Tombstones not being deleted when only_purge_repaired_tombstones is enabled

2016-10-12 Thread Sharvanath Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569810#comment-15569810
 ] 

Sharvanath Pathak commented on CASSANDRA-12778:
---

[~krummas] can you take a look?

> Tombstones not being deleted when only_purge_repaired_tombstones is enabled
> ---
>
> Key: CASSANDRA-12778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12778
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Arvind Nithrakashyap
>Assignee: Marcus Eriksson
>Priority: Critical
>
> When we use only_purge_repaired_tombstones for compaction, we noticed that 
> tombstones are no longer being deleted.
> {noformat}compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
> 'only_purge_repaired_tombstones': 'true'}{noformat}
> The root cause seems to be that repair itself issues a flush, which in turn 
> leads to a new sstable being created (one that is not in the repair set). 
> Because this sstable ends up holding some old data, only tombstones older than 
> that timestamp are getting deleted, even though many more keys have been 
> repaired. 
> Fundamentally it looks like flush and repair can race with each other; with 
> leveled compaction, the flush creates a new sstable at level 0 and removes the 
> older sstable (the one that is picked for repair). Since repair itself seems 
> to issue multiple flushes, the level 0 sstable never gets repaired and hence 
> its tombstones never get deleted. 
> We have already included the fix for CASSANDRA-12703 while testing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12778) Tombstones not being deleted when only_purge_repaired_tombstones is enabled

2016-10-12 Thread Sharvanath Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sharvanath Pathak updated CASSANDRA-12778:
--
Assignee: Marcus Eriksson

> Tombstones not being deleted when only_purge_repaired_tombstones is enabled
> ---
>
> Key: CASSANDRA-12778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12778
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Arvind Nithrakashyap
>Assignee: Marcus Eriksson
>Priority: Critical
>
> When we use only_purge_repaired_tombstones for compaction, we noticed that 
> tombstones are no longer being deleted.
> {noformat}compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
> 'only_purge_repaired_tombstones': 'true'}{noformat}
> The root cause seems to be that repair itself issues a flush, which in turn 
> leads to a new sstable being created (one that is not in the repair set). 
> Because this sstable ends up holding some old data, only tombstones older than 
> that timestamp are getting deleted, even though many more keys have been 
> repaired. 
> Fundamentally it looks like flush and repair can race with each other; with 
> leveled compaction, the flush creates a new sstable at level 0 and removes the 
> older sstable (the one that is picked for repair). Since repair itself seems 
> to issue multiple flushes, the level 0 sstable never gets repaired and hence 
> its tombstones never get deleted. 
> We have already included the fix for CASSANDRA-12703 while testing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9320) test-burn target should be run occasionally

2016-10-12 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-9320.
---
Resolution: Fixed

> test-burn target should be run occasionally
> ---
>
> Key: CASSANDRA-9320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9320
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.x
>
>
> The tests are all concurrency tests right now so they need to run on the 
> largest  # of cores we have available. The tests are not configured to run 
> very long right now, but the intent is that they run for longer periods (days 
> even).
> They aren't described as high value right now because the code under test 
> hasn't change since first introduced so we can defer setting this job up 
> until higher priority things are done.
> I think we should still run them at some low frequency so they don't rot and 
> no change sneaks in that affects them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9008) Provide developers with a Jenkins page with their branch test results

2016-10-12 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-9008.
---
Resolution: Fixed

> Provide developers with a Jenkins page with their branch test results
> -
>
> Key: CASSANDRA-9008
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9008
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Michael Shuler
>  Labels: monthly-release
>
> For every developer's github fork create  a page with a list of branches with 
> an aggregate go/no go for all the testing jobs on the branch. 
> For each branch provide a page listing the job(e.g. dtests, utests) that ran 
> on that branch and the result.
> Allow developers to exclude branches by including "notest" in the name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8996) dtests should pass on trunk

2016-10-12 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-8996.
---
Resolution: Fixed

> dtests should pass on trunk
> ---
>
> Key: CASSANDRA-8996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8996
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Michael Shuler
>  Labels: monthly-release
>
> Not having the dtests report that they pass makes it non-obvious when a new 
> one breaks.
> Either fix the tests so that they pass or exclude the known failures from 
> success criteria.
> For excluded tests, make sure there is a JIRA covering them so we can make 
> sure someone is following up shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8997) Bootstrap dtests so they pass on trunk

2016-10-12 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-8997.
---
Resolution: Fixed

> Bootstrap dtests so they pass on trunk
> --
>
> Key: CASSANDRA-8997
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8997
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Michael Shuler
>  Labels: monthly-release
>
> Get to passing as soon as possible by excluding failing tests so that we can 
> have a history of successful runs and track new regressions and make it 
> obvious when there are flapping tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9093) testall should pass on trunk

2016-10-12 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-9093.
---
Resolution: Fixed

> testall should pass on trunk
> 
>
> Key: CASSANDRA-9093
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9093
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ariel Weisberg
>Assignee: Michael Shuler
> Attachments: trunk_testall.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9000) utests should pass on trunk

2016-10-12 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-9000.
---
Resolution: Fixed

> utests should pass on trunk
> ---
>
> Key: CASSANDRA-9000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9000
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Michael Shuler
>  Labels: monthly-release
>
> Not having the dtests report that they pass makes it non-obvious when a new 
> one breaks.
> Either fix the tests so that they pass or exclude the known failures from 
> success criteria.
> For excluded tests, make sure there is a JIRA covering them so we can make 
> sure someone is following up shortly.
> Looking at http://cassci.datastax.com/job/CTOOL_trunk_utest/ it isn't clear 
> if the issue is flapping tests or things being broken and then fixed. Without 
> test history from 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12422) Clean up the SSTableReader#getScanner API

2016-10-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12422:

Status: Ready to Commit  (was: Patch Available)

> Clean up the SSTableReader#getScanner API
> -
>
> Key: CASSANDRA-12422
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12422
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Anthony Grasso
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
> Attachments: CASSANDRA-12422-Removed-rate-limiter-parameter.patch
>
>
> After CASSANDRA-12366 we only call the various getScanner methods in 
> SSTableReader with null as a rate limiter - we should remove this parameter.
> Targeting 4.0 as we probably shouldn't change the API in 3.x
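The proposed cleanup is just a signature change; a simplified illustration (these are stand-in 
types, not the real SSTableReader declarations):
{code}
import java.util.Collection;

// Before: every caller has passed null for the limiter since CASSANDRA-12366.
interface ScannerApiBefore
{
    Object getScanner(Collection<?> ranges, Object rateLimiter);
}

// After: the dead parameter is removed; any throttling is applied by the caller.
interface ScannerApiAfter
{
    Object getScanner(Collection<?> ranges);
}
{code}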



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12422) Clean up the SSTableReader#getScanner API

2016-10-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12422:

Reviewer: Dave Brosius

> Clean up the SSTableReader#getScanner API
> -
>
> Key: CASSANDRA-12422
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12422
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Anthony Grasso
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
> Attachments: CASSANDRA-12422-Removed-rate-limiter-parameter.patch
>
>
> After CASSANDRA-12366 we only call the various getScanner methods in 
> SSTableReader with null as a rate limiter - we should remove this parameter.
> Targeting 4.0 as we probably shouldn't change the API in 3.x



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12660) NIODataInputStreamTest - Function with an infinite loop

2016-10-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12660:

Reviewer: Joel Knighton

> NIODataInputStreamTest - Function with an infinite loop
> ---
>
> Key: CASSANDRA-12660
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12660
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Marianne Linhares Monteiro
>Assignee: Arunkumar M
>Priority: Trivial
>  Labels: easyfix, low-hanging-fruit
> Fix For: 3.x
>
> Attachments: 12660-3.9.txt
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Function with an infinite loop and not needed.
> https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/io/util/NIODataInputStreamTest.java
>  - lines 97-101
> isOpen()
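For context, a hedged illustration of the kind of change being requested (I have not checked the 
attached 12660-3.9.txt, so the shape below is an assumption): a test channel stub gains nothing 
from spinning in {{isOpen()}} and can simply report its state.
{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Assumed shape only, not the actual NIODataInputStreamTest code.
class FakeChannel implements ReadableByteChannel
{
    private volatile boolean open = true;

    public int read(ByteBuffer dst) throws IOException
    {
        return -1; // this stub never produces data
    }

    public boolean isOpen()
    {
        return open; // report state directly instead of looping forever
    }

    public void close()
    {
        open = false;
    }
}
{code}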



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12606) CQLSSTableWriter unable to use blob conversion functions

2016-10-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12606:

Reviewer: Joel Knighton

> CQLSSTableWriter unable to use blob conversion functions
> 
>
> Key: CASSANDRA-12606
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12606
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Tools
>Reporter: Mark Reddy
>Assignee: Alex Petrov
>Priority: Minor
>
> Attempting to use blob conversion functions e.g. textAsBlob, from 3.0 - 3.7 
> results in:
> {noformat}
> Exception in thread "main" 
> org.apache.cassandra.exceptions.InvalidRequestException: Unknown function 
> textasblob called
>   at 
> org.apache.cassandra.cql3.functions.FunctionCall$Raw.prepare(FunctionCall.java:136)
>   at 
> org.apache.cassandra.cql3.Operation$SetValue.prepare(Operation.java:163)
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement$ParsedInsert.prepareInternal(UpdateStatement.java:173)
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:785)
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:771)
>   at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.prepareInsert(CQLSSTableWriter.java:567)
>   at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.build(CQLSSTableWriter.java:510)
> {noformat}
> The following snippet will reproduce the issue
> {code}
> String table = String.format("%s.%s", "test_ks", "test_table");
> String schema = String.format("CREATE TABLE %s (test_text text, test_blob 
> blob, PRIMARY KEY(test_text));", table);
> String insertStatement = String.format("INSERT INTO %s (test_text, test_blob) 
> VALUES (?, textAsBlob(?))", table);
> File tempDir = Files.createTempDirectory("tempDir").toFile();
> CQLSSTableWriter sstableWriter = CQLSSTableWriter.builder()
> .forTable(schema)
> .using(insertStatement)
> .inDirectory(tempDir)
> .build();
> {code}
> This is caused in FunctionResolver.get(...) when 
> candidates.addAll(Schema.instance.getFunctions(name.asNativeFunction())); is 
> called, as there is no system keyspace initialised.
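One possible client-side workaround (untested and an assumption on my part, not from this 
ticket): skip the server-side {{textAsBlob}} call entirely and bind an already-serialized 
{{ByteBuffer}} for the blob column.
{code}
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class BlobWorkaround
{
    public static void main(String[] args) throws Exception
    {
        String schema = "CREATE TABLE test_ks.test_table ("
                      + "test_text text, test_blob blob, PRIMARY KEY (test_text));";
        // Plain bind marker instead of textAsBlob(?), so no function resolution is needed.
        String insert = "INSERT INTO test_ks.test_table (test_text, test_blob) VALUES (?, ?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(new File("/tmp/sstables"))   // directory must already exist
                .forTable(schema)
                .using(insert)
                .build();

        // Do the text-to-blob conversion in client code.
        writer.addRow("some text",
                      ByteBuffer.wrap("some text".getBytes(StandardCharsets.UTF_8)));
        writer.close();
    }
}
{code}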



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-12 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569562#comment-15569562
 ] 

Joel Knighton commented on CASSANDRA-12761:
---

+1. Thanks!

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Assignee: Guy Bolton King
>Priority: Trivial
> Fix For: 3.x, 4.x
>
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-12 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12761:
--
Status: Ready to Commit  (was: Patch Available)

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Assignee: Guy Bolton King
>Priority: Trivial
> Fix For: 3.x, 4.x
>
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11117) ColUpdateTimeDeltaHistogram histogram overflow

2016-10-12 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11117:
--
Status: Patch Available  (was: Awaiting Feedback)

I've updated the branch above to adopt the approach you described. I also 
slightly shortened the unit test since the minimum constraint unifies the two 
cases being tested. New CI runs have completed on the links above and look 
clean relative to upstream.

> ColUpdateTimeDeltaHistogram histogram overflow
> --
>
> Key: CASSANDRA-11117
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11117
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x, 4.x
>
>
> {code}
> getting attribute Mean of 
> org.apache.cassandra.metrics:type=ColumnFamily,name=ColUpdateTimeDeltaHistogram
>  threw an exceptionjavax.management.RuntimeMBeanException: 
> java.lang.IllegalStateException: Unable to compute ceiling for max when 
> histogram overflowed
> {code}
> Although this histogram already has 164 buckets, I wonder if there is 
> something weird with the computation that's causing the values to be so 
> large? It appears to be coming from updates to system.local
> {code}
> org.apache.cassandra.metrics:type=Table,keyspace=system,scope=local,name=ColUpdateTimeDeltaHistogram
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11117) ColUpdateTimeDeltaHistogram histogram overflow

2016-10-12 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569506#comment-15569506
 ] 

Joel Knighton edited comment on CASSANDRA-11117 at 10/12/16 6:32 PM:
-

I've updated the branch above to adopt the approach you described. I also 
slightly shortened the unit test since the minimum constraint unifies the two 
cases being tested. New CI runs have completed on the links above and look 
clean relative to upstream.

EDIT: I should note that the 2.2 branch and 3.0 branch each need to be applied 
as their own patch. The 3.0 branch should merge forward cleanly.


was (Author: jkni):
I've updated the branch above to adopt the approach you described. I also 
slightly shortened the unit test since the minimum constraint unifies the two 
cases being tested. New CI runs have completed on the links above and look 
clean relative to upstream.

> ColUpdateTimeDeltaHistogram histogram overflow
> --
>
> Key: CASSANDRA-11117
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11117
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x, 4.x
>
>
> {code}
> getting attribute Mean of 
> org.apache.cassandra.metrics:type=ColumnFamily,name=ColUpdateTimeDeltaHistogram
>  threw an exceptionjavax.management.RuntimeMBeanException: 
> java.lang.IllegalStateException: Unable to compute ceiling for max when 
> histogram overflowed
> {code}
> Given that this histogram already has 164 buckets, I wonder if there is 
> something weird with the computation that's causing it to be so large? It 
> appears to be coming from updates to system.local
> {code}
> org.apache.cassandra.metrics:type=Table,keyspace=system,scope=local,name=ColUpdateTimeDeltaHistogram
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11117) ColUpdateTimeDeltaHistogram histogram overflow

2016-10-12 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11117:
--
Fix Version/s: 4.x

> ColUpdateTimeDeltaHistogram histogram overflow
> --
>
> Key: CASSANDRA-11117
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11117
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x, 4.x
>
>
> {code}
> getting attribute Mean of 
> org.apache.cassandra.metrics:type=ColumnFamily,name=ColUpdateTimeDeltaHistogram
>  threw an exceptionjavax.management.RuntimeMBeanException: 
> java.lang.IllegalStateException: Unable to compute ceiling for max when 
> histogram overflowed
> {code}
> Given that this histogram already has 164 buckets, I wonder if there is 
> something weird with the computation that's causing it to be so large? It 
> appears to be coming from updates to system.local
> {code}
> org.apache.cassandra.metrics:type=Table,keyspace=system,scope=local,name=ColUpdateTimeDeltaHistogram
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12778) Tombstones not being deleted when only_purge_repaired_tombstones is enabled

2016-10-12 Thread Arvind Nithrakashyap (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvind Nithrakashyap updated CASSANDRA-12778:
-
Description: 
When we use only_purge_repaired_tombstones for compaction, we noticed that 
tombstones are no longer being deleted.

{noformat}compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
'only_purge_repaired_tombstones': 'true'}{noformat}

The root cause seems to be that repair itself issues a flush, which in turn 
leads to a new sstable being created (one that is not in the repair set). It 
looks like we do have some old data in this sstable; because of this, only 
tombstones older than that timestamp are getting deleted even though many more 
keys have been repaired. 

Fundamentally it looks like flush and repair can race with each other and with 
leveled compaction, the flush creates a new sstable at level 0 and removes the 
older sstable (the one that is picked for repair). Since repair itself seems to 
issue multiple flushes, the level 0 sstable never gets repaired and hence 
tombstones never get deleted. 

We have already included the fix for CASSANDRA-12703 while testing. 

  was:
When we use only_purge_repaired_tombstones for compaction, we noticed that 
tombstones are no longer being deleted.

{noformat}compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
'only_purge_repaired_tombstones': 'true'}{noformat}

The root cause for this seems to be caused by the fact that repair itself 
issues a flush which in turn leads to a new sstable being created (which is not 
in the repair set). It looks like we do have some old data in this sstable 
because of only tombstones older than that timestamp are getting deleted even 
though many more keys have been repaired. 

Fundamentally it looks like flush and repair can race with each other and with 
leveled compaction, the flush creates a new sstable at level 0 and removes the 
older sstable (the one that is picked for repair). Since repair itself seems to 
issue multiple flushes, the level 0 sstable never gets repaired and hence 
tombstones never get deleted. 

We have already included the fix for CASSANDRA-12703 while testing. 


> Tombstones not being deleted when only_purge_repaired_tombstones is enabled
> ---
>
> Key: CASSANDRA-12778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12778
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Arvind Nithrakashyap
>Priority: Critical
>
> When we use only_purge_repaired_tombstones for compaction, we noticed that 
> tombstones are no longer being deleted.
> {noformat}compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
> 'only_purge_repaired_tombstones': 'true'}{noformat}
> The root cause seems to be that repair itself issues a flush, which in turn 
> leads to a new sstable being created (one that is not in the repair set). It 
> looks like we do have some old data in this sstable; because of this, only 
> tombstones older than that timestamp are getting deleted even though many 
> more keys have been repaired. 
> Fundamentally it looks like flush and repair can race with each other and 
> with leveled compaction, the flush creates a new sstable at level 0 and 
> removes the older sstable (the one that is picked for repair). Since repair 
> itself seems to issue multiple flushes, the level 0 sstable never gets 
> repaired and hence tombstones never get deleted. 
> We have already included the fix for CASSANDRA-12703 while testing. 
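
For reference, the option above is a per-table compaction setting. A minimal 
sketch of applying it from the DataStax Java driver follows; the contact point 
and the ks.tbl name are placeholders, and the compaction map simply mirrors the 
one quoted in the description:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class EnablePurgeRepairedOnly
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // Same compaction options as in the description, applied to a placeholder table.
            session.execute("ALTER TABLE ks.tbl WITH compaction = {"
                            + " 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy',"
                            + " 'only_purge_repaired_tombstones': 'true' }");
        }
    }
}
{code}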



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10446) Run repair with down replicas

2016-10-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-10446:

Fix Version/s: (was: 3.x)
   4.0
   Status: Patch Available  (was: Open)

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-10-12 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569398#comment-15569398
 ] 

Blake Eggleston commented on CASSANDRA-10446:
-

| [trunk|https://github.com/bdeggleston/cassandra/commits/10446-trunk] | 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-10446-trunk-dtest/]
 | 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-10446-trunk-testall/]|

[associated dtest|https://github.com/bdeggleston/cassandra-dtest/commits/10446]

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10446) Run repair with down replicas

2016-10-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston reassigned CASSANDRA-10446:
---

Assignee: Blake Eggleston

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569354#comment-15569354
 ] 

Ben Manes commented on CASSANDRA-10855:
---

Thanks. Fixed by updating build.xml and removing the old jar. That inflates the 
patch size because the jars are checked into the repo; the linked PR mirrors it.

I'd also like to pair on evaluating TinyLFU for OHC, but would like to see this 
go in before we jump into that.

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
> Attachments: CASSANDRA-10855.patch, CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead on top of LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.
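
For reviewers unfamiliar with the library, a minimal sketch of the Caffeine 
builder API follows (W-TinyLFU is its default eviction policy). The key/value 
types, weigher and capacity are illustrative only and not taken from the 
attached patch:

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineSketch
{
    public static void main(String[] args)
    {
        Cache<String, byte[]> cache = Caffeine.newBuilder()
                .maximumWeight(64 * 1024 * 1024)                     // bound total weight, here in bytes
                .weigher((String key, byte[] value) -> value.length) // per-entry weight
                .build();

        cache.put("row-key", new byte[128]);
        byte[] cached = cache.getIfPresent("row-key");
        System.out.println(cached == null ? "miss" : "hit, " + cached.length + " bytes");
    }
}
{code}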



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Ben Manes (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Manes updated CASSANDRA-10855:
--
Attachment: CASSANDRA-10855.patch

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
> Attachments: CASSANDRA-10855.patch, CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead on top of LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-12 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569292#comment-15569292
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Morning update :) The stress load has continued in all partitions since the 
last update. The large partitions have grown to ~21GB. Latencies are still 
unchanged for both reads and writes in all percentiles!! Onwards to the next 
milestone, 50GB! I also doubled the read and write load around 10 hours ago to 
4k reads/sec and 10k writes/sec to grow the partitions faster.

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_1_with_birch_write_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_read_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of a 2.0 cluster, I found that the majority of the 
> objects are IndexInfo and its ByteBuffers. This is especially bad on endpoints 
> with large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?
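
A quick back-of-the-envelope for the numbers quoted above, assuming the default 
column_index_size_in_kb of 64 (roughly one IndexInfo entry per 64 KB of 
partition data, each holding two ByteBuffers for the first and last clustering 
name):

{code}
public class IndexInfoEstimate
{
    public static void main(String[] args)
    {
        long partitionBytes = 6_400L * 1024 * 1024; // a ~6.4 GB partition
        long indexIntervalBytes = 64L * 1024;       // column_index_size_in_kb = 64 (assumed default)
        long indexInfoEntries = partitionBytes / indexIntervalBytes;
        // Two ByteBuffers per entry gives roughly the 100K/200K figures from the description.
        System.out.printf("~%,d IndexInfo entries, ~%,d ByteBuffers%n",
                          indexInfoEntries, 2 * indexInfoEntries);
    }
}
{code}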



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12779) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test

2016-10-12 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy resolved CASSANDRA-12779.
---
Resolution: Duplicate

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test
> --
>
> Key: CASSANDRA-12779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12779
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/13/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x/limit_multiget_test
> {code}
> Error Message
> Expected [[48, 'http://foo.com', 42]] from SELECT * FROM clicks WHERE userid 
> IN (48, 2) LIMIT 1, but got [[2, u'http://foo.com', 42]]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 362, in limit_multiget_test
> assert_one(cursor, "SELECT * FROM clicks WHERE userid IN (48, 2) LIMIT 
> 1", [48, 'http://foo.com', 42])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-12779) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test

2016-10-12 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy reopened CASSANDRA-12779:
---

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test
> --
>
> Key: CASSANDRA-12779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12779
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/13/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x/limit_multiget_test
> {code}
> Error Message
> Expected [[48, 'http://foo.com', 42]] from SELECT * FROM clicks WHERE userid 
> IN (48, 2) LIMIT 1, but got [[2, u'http://foo.com', 42]]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 362, in limit_multiget_test
> assert_one(cursor, "SELECT * FROM clicks WHERE userid IN (48, 2) LIMIT 
> 1", [48, 'http://foo.com', 42])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-12782) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-10-12 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy reopened CASSANDRA-12782:
---

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-12782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12782
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest/6/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> {code}
> Error Message
> Subprocess sstablemetadata on keyspace: keyspace1, column_family: None exited 
> with non-zero status; exit status: 1; 
> stdout: 
> usage: Usage: sstablemetadata [--gc_grace_seconds n] 
> Dump contents of given SSTable to standard output in JSON format.
> --gc_grace_secondsThe gc_grace_seconds to use when
>calculating droppable tombstones
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in sstable_marking_test_not_intersecting_all_ranges
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in 
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1021, in 
> run_sstablemetadata
> return handle_external_tool_process(p, "sstablemetadata on keyspace: {}, 
> column_family: {}".format(keyspace, column_families))
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1983, in 
> handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12782) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-10-12 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy resolved CASSANDRA-12782.
---
Resolution: Duplicate

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-12782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12782
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest/6/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> {code}
> Error Message
> Subprocess sstablemetadata on keyspace: keyspace1, column_family: None exited 
> with non-zero status; exit status: 1; 
> stdout: 
> usage: Usage: sstablemetadata [--gc_grace_seconds n] 
> Dump contents of given SSTable to standard output in JSON format.
> --gc_grace_secondsThe gc_grace_seconds to use when
>calculating droppable tombstones
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in sstable_marking_test_not_intersecting_all_ranges
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in 
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1021, in 
> run_sstablemetadata
> return handle_external_tool_process(p, "sstablemetadata on keyspace: {}, 
> column_family: {}".format(keyspace, column_families))
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1983, in 
> handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12268) Make MV Index creation robust for wide referent rows

2016-10-12 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-12268:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

The failing tests looked unrelated to this change (all passed locally).

Committed as 
[76f1750|https://git1-us-west.apache.org/repos/asf/cassandra/?p=cassandra.git;a=commit;h=76f175028544fe20f30ae873f23cba559097cef1].

> Make MV Index creation robust for wide referent rows
> 
>
> Key: CASSANDRA-12268
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12268
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jonathan Shook
>Assignee: Carl Yeksigian
> Fix For: 3.0.x, 3.x
>
> Attachments: 12268.py
>
>
> When creating an index for a materialized view over extant data, heap pressure 
> depends heavily on the cardinality of rows associated with each index value. 
> With the way that per-index-value rows are created within the index, this can 
> create unbounded heap pressure, which can cause OOM. This appears to be a 
> side-effect of how each index row is applied atomically, as with batches.
> The commit logs can accumulate enough during the process to prevent the node 
> from being restarted. Given that this occurs during global index creation, 
> this can happen on multiple nodes, making stable recovery of a node set 
> difficult, as co-replicas become unavailable to assist in back-filling data 
> from commitlogs.
> While it is understandable that you want to avoid having relatively wide rows 
> even in materialized views, this represents a particularly difficult scenario 
> for triage.
> The basic recommendation for improving this is to sub-group the index creation 
> into smaller chunks internally, providing a hard upper bound on heap pressure 
> when it is needed.
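
In the abstract, the sub-grouping recommendation amounts to draining the view 
updates in bounded batches instead of materialising them all at once. A generic 
sketch of that idea (not the actual patch, whose changes are in the commits 
below) might look like this:

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class ChunkedApply
{
    // Drain an arbitrarily large stream of updates in fixed-size groups so that only
    // one group's worth of work is materialised on the heap at a time.
    static <T> void applyInChunks(Iterator<T> updates, int chunkSize, Consumer<List<T>> apply)
    {
        List<T> chunk = new ArrayList<>(chunkSize);
        while (updates.hasNext())
        {
            chunk.add(updates.next());
            if (chunk.size() == chunkSize)
            {
                apply.accept(chunk);
                chunk = new ArrayList<>(chunkSize);
            }
        }
        if (!chunk.isEmpty())
            apply.accept(chunk);
    }

    public static void main(String[] args)
    {
        List<Integer> updates = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            updates.add(i);
        applyInChunks(updates.iterator(), 4, chunk -> System.out.println("applying " + chunk));
    }
}
{code}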



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-12 Thread carl
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/541d8370
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/541d8370
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/541d8370

Branch: refs/heads/trunk
Commit: 541d837070630cd39a8c57e414a8e777f6791ae1
Parents: f1b742e 76f1750
Author: Carl Yeksigian 
Authored: Wed Oct 12 12:30:11 2016 -0400
Committer: Carl Yeksigian 
Committed: Wed Oct 12 12:30:11 2016 -0400

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/view/TableViews.java| 91 +---
 .../apache/cassandra/db/view/ViewBuilder.java   | 11 ++-
 .../cassandra/db/view/ViewUpdateGenerator.java  |  8 ++
 4 files changed, 93 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/541d8370/CHANGES.txt
--
diff --cc CHANGES.txt
index e733214,13800da..1ade69f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,84 -1,5 +1,85 @@@
 -3.0.10
 +3.10
 + * Check for hash conflicts in prepared statements (CASSANDRA-12733)
 + * Exit query parsing upon first error (CASSANDRA-12598)
 + * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
 + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)
 + * Config class uses boxed types but DD exposes primitive types 
(CASSANDRA-12199)
 + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461)
 + * Add hint delivery metrics (CASSANDRA-12693)
 + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731)
 + * ColumnIndex does not reuse buffer (CASSANDRA-12502)
 + * cdc column addition still breaks schema migration tasks (CASSANDRA-12697)
 + * Upgrade metrics-reporter dependencies (CASSANDRA-12089)
 + * Tune compaction thread count via nodetool (CASSANDRA-12248)
 + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232)
 + * Include repair session IDs in repair start message (CASSANDRA-12532)
 + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039)
 + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667)
 + * Support optional backpressure strategies at the coordinator 
(CASSANDRA-9318)
 + * Make randompartitioner work with new vnode allocation (CASSANDRA-12647)
 + * Fix cassandra-stress graphing (CASSANDRA-12237)
 + * Allow filtering on partition key columns for queries without secondary 
indexes (CASSANDRA-11031)
 + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
 + * Add JMH benchmarks.jar (CASSANDRA-12586)
 + * Add row offset support to SASI (CASSANDRA-11990)
 + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567)
 + * Add keep-alive to streaming (CASSANDRA-11841)
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages 
(CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 
12550)
 + * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366, CASSANDRA-12717)
 + * Delay releasing Memtable memory on flush until PostFlush has finished 
running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT 

[2/6] cassandra git commit: Split materialized view mutations on build to prevent OOM

2016-10-12 Thread carl
Split materialized view mutations on build to prevent OOM

Patch by Carl Yeksigian; reviewed by Jake Luciani for CASSANDRA-12268


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76f17502
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76f17502
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76f17502

Branch: refs/heads/cassandra-3.X
Commit: 76f175028544fe20f30ae873f23cba559097cef1
Parents: d5f2d0f
Author: Carl Yeksigian 
Authored: Wed Oct 12 12:24:19 2016 -0400
Committer: Carl Yeksigian 
Committed: Wed Oct 12 12:24:19 2016 -0400

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/view/TableViews.java| 91 +---
 .../apache/cassandra/db/view/ViewBuilder.java   | 14 +--
 .../cassandra/db/view/ViewUpdateGenerator.java  |  8 ++
 4 files changed, 95 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76f17502/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d797288..13800da 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
  * mx4j does not work in 3.0.8 (CASSANDRA-12274)
  * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
  * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76f17502/src/java/org/apache/cassandra/db/view/TableViews.java
--
diff --git a/src/java/org/apache/cassandra/db/view/TableViews.java 
b/src/java/org/apache/cassandra/db/view/TableViews.java
index 7feb67c..1a3cbb1 100644
--- a/src/java/org/apache/cassandra/db/view/TableViews.java
+++ b/src/java/org/apache/cassandra/db/view/TableViews.java
@@ -46,7 +46,8 @@ public class TableViews extends AbstractCollection
 private final CFMetaData baseTableMetadata;
 
 // We need this to be thread-safe, but the number of times this is changed 
(when a view is created in the keyspace)
-// massively exceeds the number of time it's read (for every mutation on 
the keyspace), so a copy-on-write list is the best option.
+// is massively exceeded by the number of times it's read (for every 
mutation on the keyspace), so a copy-on-write
+// list is the best option.
 private final List views = new CopyOnWriteArrayList();
 
 public TableViews(CFMetaData baseTableMetadata)
@@ -137,7 +138,7 @@ public class TableViews extends AbstractCollection
  UnfilteredRowIterator existings = 
UnfilteredPartitionIterators.getOnlyElement(command.executeLocally(orderGroup), 
command);
  UnfilteredRowIterator updates = update.unfilteredIterator())
 {
-mutations = generateViewUpdates(views, updates, existings, 
nowInSec);
+mutations = Iterators.getOnlyElement(generateViewUpdates(views, 
updates, existings, nowInSec, false));
 }
 
Keyspace.openAndGetStore(update.metadata()).metric.viewReadTime.update(System.nanoTime()
 - start, TimeUnit.NANOSECONDS);
 
@@ -145,6 +146,7 @@ public class TableViews extends AbstractCollection
 StorageProxy.mutateMV(update.partitionKey().getKey(), mutations, 
writeCommitLog, baseComplete);
 }
 
+
 /**
  * Given some updates on the base table of this object and the existing 
values for the rows affected by that update, generates the
  * mutation to be applied to the provided views.
@@ -159,7 +161,11 @@ public class TableViews extends AbstractCollection
  * @param nowInSec the current time in seconds.
  * @return the mutations to apply to the {@code views}. This can be empty.
  */
-public Collection generateViewUpdates(Collection views, 
UnfilteredRowIterator updates, UnfilteredRowIterator existings, int nowInSec)
+public Iterator generateViewUpdates(Collection 
views,
+  
UnfilteredRowIterator updates,
+  
UnfilteredRowIterator existings,
+  int nowInSec,
+  boolean 
separateUpdates)
 {
 assert updates.metadata().cfId.equals(baseTableMetadata.cfId);
 
@@ -251,18 +257,75 @@ public class TableViews extends AbstractCollection
 addToViewUpdateGenerators(existingRow, 
emptyRow(existingRow.clustering(), updatesDeletion.currentDeletion()), 
generators, nowInSec);
 }
 }
-while 

[jira] [Created] (CASSANDRA-12783) Break up large MV mutations to prevent OOMs

2016-10-12 Thread Carl Yeksigian (JIRA)
Carl Yeksigian created CASSANDRA-12783:
--

 Summary: Break up large MV mutations to prevent OOMs
 Key: CASSANDRA-12783
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12783
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian


We only use the code path added in CASSANDRA-12268 for the view builder because 
otherwise we would break the contract of the batchlog, where some mutations may 
be written and pushed out before the whole batch log has been saved.

We would need to ensure that all of the updates make it to the batchlog before 
allowing the batchlog manager to try to replay them, but also before we start 
pushing out updates to the paired replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-12 Thread carl
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9f75e706
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9f75e706
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9f75e706

Branch: refs/heads/trunk
Commit: 9f75e706871aaeac749e45d2dd226b99950ce436
Parents: cd728d2 541d837
Author: Carl Yeksigian 
Authored: Wed Oct 12 12:34:16 2016 -0400
Committer: Carl Yeksigian 
Committed: Wed Oct 12 12:34:16 2016 -0400

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/view/TableViews.java| 91 +---
 .../apache/cassandra/db/view/ViewBuilder.java   | 11 ++-
 .../cassandra/db/view/ViewUpdateGenerator.java  |  8 ++
 4 files changed, 93 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9f75e706/CHANGES.txt
--



[3/6] cassandra git commit: Split materialized view mutations on build to prevent OOM

2016-10-12 Thread carl
Split materialized view mutations on build to prevent OOM

Patch by Carl Yeksigian; reviewed by Jake Luciani for CASSANDRA-12268


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76f17502
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76f17502
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76f17502

Branch: refs/heads/trunk
Commit: 76f175028544fe20f30ae873f23cba559097cef1
Parents: d5f2d0f
Author: Carl Yeksigian 
Authored: Wed Oct 12 12:24:19 2016 -0400
Committer: Carl Yeksigian 
Committed: Wed Oct 12 12:24:19 2016 -0400

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/view/TableViews.java| 91 +---
 .../apache/cassandra/db/view/ViewBuilder.java   | 14 +--
 .../cassandra/db/view/ViewUpdateGenerator.java  |  8 ++
 4 files changed, 95 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76f17502/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d797288..13800da 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
  * mx4j does not work in 3.0.8 (CASSANDRA-12274)
  * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
  * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76f17502/src/java/org/apache/cassandra/db/view/TableViews.java
--
diff --git a/src/java/org/apache/cassandra/db/view/TableViews.java 
b/src/java/org/apache/cassandra/db/view/TableViews.java
index 7feb67c..1a3cbb1 100644
--- a/src/java/org/apache/cassandra/db/view/TableViews.java
+++ b/src/java/org/apache/cassandra/db/view/TableViews.java
@@ -46,7 +46,8 @@ public class TableViews extends AbstractCollection
 private final CFMetaData baseTableMetadata;
 
 // We need this to be thread-safe, but the number of times this is changed 
(when a view is created in the keyspace)
-// massively exceeds the number of time it's read (for every mutation on 
the keyspace), so a copy-on-write list is the best option.
+// is massively exceeded by the number of times it's read (for every 
mutation on the keyspace), so a copy-on-write
+// list is the best option.
 private final List views = new CopyOnWriteArrayList();
 
 public TableViews(CFMetaData baseTableMetadata)
@@ -137,7 +138,7 @@ public class TableViews extends AbstractCollection
  UnfilteredRowIterator existings = 
UnfilteredPartitionIterators.getOnlyElement(command.executeLocally(orderGroup), 
command);
  UnfilteredRowIterator updates = update.unfilteredIterator())
 {
-mutations = generateViewUpdates(views, updates, existings, 
nowInSec);
+mutations = Iterators.getOnlyElement(generateViewUpdates(views, 
updates, existings, nowInSec, false));
 }
 
Keyspace.openAndGetStore(update.metadata()).metric.viewReadTime.update(System.nanoTime()
 - start, TimeUnit.NANOSECONDS);
 
@@ -145,6 +146,7 @@ public class TableViews extends AbstractCollection
 StorageProxy.mutateMV(update.partitionKey().getKey(), mutations, 
writeCommitLog, baseComplete);
 }
 
+
 /**
  * Given some updates on the base table of this object and the existing 
values for the rows affected by that update, generates the
  * mutation to be applied to the provided views.
@@ -159,7 +161,11 @@ public class TableViews extends AbstractCollection
  * @param nowInSec the current time in seconds.
  * @return the mutations to apply to the {@code views}. This can be empty.
  */
-public Collection generateViewUpdates(Collection views, 
UnfilteredRowIterator updates, UnfilteredRowIterator existings, int nowInSec)
+public Iterator generateViewUpdates(Collection 
views,
+  
UnfilteredRowIterator updates,
+  
UnfilteredRowIterator existings,
+  int nowInSec,
+  boolean 
separateUpdates)
 {
 assert updates.metadata().cfId.equals(baseTableMetadata.cfId);
 
@@ -251,18 +257,75 @@ public class TableViews extends AbstractCollection
 addToViewUpdateGenerators(existingRow, 
emptyRow(existingRow.clustering(), updatesDeletion.currentDeletion()), 
generators, nowInSec);
 }
 }
-while (updatesIter.hasNext())
+

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-12 Thread carl
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/541d8370
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/541d8370
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/541d8370

Branch: refs/heads/cassandra-3.X
Commit: 541d837070630cd39a8c57e414a8e777f6791ae1
Parents: f1b742e 76f1750
Author: Carl Yeksigian 
Authored: Wed Oct 12 12:30:11 2016 -0400
Committer: Carl Yeksigian 
Committed: Wed Oct 12 12:30:11 2016 -0400

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/view/TableViews.java| 91 +---
 .../apache/cassandra/db/view/ViewBuilder.java   | 11 ++-
 .../cassandra/db/view/ViewUpdateGenerator.java  |  8 ++
 4 files changed, 93 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/541d8370/CHANGES.txt
--
diff --cc CHANGES.txt
index e733214,13800da..1ade69f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,84 -1,5 +1,85 @@@
 -3.0.10
 +3.10
 + * Check for hash conflicts in prepared statements (CASSANDRA-12733)
 + * Exit query parsing upon first error (CASSANDRA-12598)
 + * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
 + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)
 + * Config class uses boxed types but DD exposes primitive types 
(CASSANDRA-12199)
 + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461)
 + * Add hint delivery metrics (CASSANDRA-12693)
 + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731)
 + * ColumnIndex does not reuse buffer (CASSANDRA-12502)
 + * cdc column addition still breaks schema migration tasks (CASSANDRA-12697)
 + * Upgrade metrics-reporter dependencies (CASSANDRA-12089)
 + * Tune compaction thread count via nodetool (CASSANDRA-12248)
 + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232)
 + * Include repair session IDs in repair start message (CASSANDRA-12532)
 + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039)
 + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667)
 + * Support optional backpressure strategies at the coordinator 
(CASSANDRA-9318)
 + * Make randompartitioner work with new vnode allocation (CASSANDRA-12647)
 + * Fix cassandra-stress graphing (CASSANDRA-12237)
 + * Allow filtering on partition key columns for queries without secondary 
indexes (CASSANDRA-11031)
 + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
 + * Add JMH benchmarks.jar (CASSANDRA-12586)
 + * Add row offset support to SASI (CASSANDRA-11990)
 + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567)
 + * Add keep-alive to streaming (CASSANDRA-11841)
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages 
(CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 
12550)
 + * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366, CASSANDRA-12717)
 + * Delay releasing Memtable memory on flush until PostFlush has finished 
running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to 

[1/6] cassandra git commit: Split materialized view mutations on build to prevent OOM

2016-10-12 Thread carl
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 d5f2d0f07 -> 76f175028
  refs/heads/cassandra-3.X f1b742e9d -> 541d83707
  refs/heads/trunk cd728d2e7 -> 9f75e7068


Split materialized view mutations on build to prevent OOM

Patch by Carl Yeksigian; reviewed by Jake Luciani for CASSANDRA-12268


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76f17502
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76f17502
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76f17502

Branch: refs/heads/cassandra-3.0
Commit: 76f175028544fe20f30ae873f23cba559097cef1
Parents: d5f2d0f
Author: Carl Yeksigian 
Authored: Wed Oct 12 12:24:19 2016 -0400
Committer: Carl Yeksigian 
Committed: Wed Oct 12 12:24:19 2016 -0400

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/view/TableViews.java| 91 +---
 .../apache/cassandra/db/view/ViewBuilder.java   | 14 +--
 .../cassandra/db/view/ViewUpdateGenerator.java  |  8 ++
 4 files changed, 95 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76f17502/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d797288..13800da 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
  * mx4j does not work in 3.0.8 (CASSANDRA-12274)
  * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
  * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76f17502/src/java/org/apache/cassandra/db/view/TableViews.java
--
diff --git a/src/java/org/apache/cassandra/db/view/TableViews.java 
b/src/java/org/apache/cassandra/db/view/TableViews.java
index 7feb67c..1a3cbb1 100644
--- a/src/java/org/apache/cassandra/db/view/TableViews.java
+++ b/src/java/org/apache/cassandra/db/view/TableViews.java
@@ -46,7 +46,8 @@ public class TableViews extends AbstractCollection
 private final CFMetaData baseTableMetadata;
 
 // We need this to be thread-safe, but the number of times this is changed 
(when a view is created in the keyspace)
-// massively exceeds the number of time it's read (for every mutation on 
the keyspace), so a copy-on-write list is the best option.
+// is massively exceeded by the number of times it's read (for every 
mutation on the keyspace), so a copy-on-write
+// list is the best option.
 private final List views = new CopyOnWriteArrayList();
 
 public TableViews(CFMetaData baseTableMetadata)
@@ -137,7 +138,7 @@ public class TableViews extends AbstractCollection
  UnfilteredRowIterator existings = 
UnfilteredPartitionIterators.getOnlyElement(command.executeLocally(orderGroup), 
command);
  UnfilteredRowIterator updates = update.unfilteredIterator())
 {
-mutations = generateViewUpdates(views, updates, existings, 
nowInSec);
+mutations = Iterators.getOnlyElement(generateViewUpdates(views, 
updates, existings, nowInSec, false));
 }
 
Keyspace.openAndGetStore(update.metadata()).metric.viewReadTime.update(System.nanoTime()
 - start, TimeUnit.NANOSECONDS);
 
@@ -145,6 +146,7 @@ public class TableViews extends AbstractCollection
 StorageProxy.mutateMV(update.partitionKey().getKey(), mutations, 
writeCommitLog, baseComplete);
 }
 
+
 /**
  * Given some updates on the base table of this object and the existing 
values for the rows affected by that update, generates the
  * mutation to be applied to the provided views.
@@ -159,7 +161,11 @@ public class TableViews extends AbstractCollection
  * @param nowInSec the current time in seconds.
  * @return the mutations to apply to the {@code views}. This can be empty.
  */
-public Collection generateViewUpdates(Collection views, 
UnfilteredRowIterator updates, UnfilteredRowIterator existings, int nowInSec)
+public Iterator generateViewUpdates(Collection 
views,
+  
UnfilteredRowIterator updates,
+  
UnfilteredRowIterator existings,
+  int nowInSec,
+  boolean 
separateUpdates)
 {
 assert updates.metadata().cfId.equals(baseTableMetadata.cfId);
 
@@ -251,18 +257,75 @@ public class TableViews extends AbstractCollection
 

[jira] [Resolved] (CASSANDRA-12782) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-10-12 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy resolved CASSANDRA-12782.
---
Resolution: Fixed

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-12782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12782
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest/6/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> {code}
> Error Message
> Subprocess sstablemetadata on keyspace: keyspace1, column_family: None exited 
> with non-zero status; exit status: 1; 
> stdout: 
> usage: Usage: sstablemetadata [--gc_grace_seconds n] 
> Dump contents of given SSTable to standard output in JSON format.
> --gc_grace_secondsThe gc_grace_seconds to use when
>calculating droppable tombstones
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in sstable_marking_test_not_intersecting_all_ranges
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in 
> for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
> node in cluster.nodelist()):
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1021, in 
> run_sstablemetadata
> return handle_external_tool_process(p, "sstablemetadata on keyspace: {}, 
> column_family: {}".format(keyspace, column_families))
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1983, in 
> handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12779) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test

2016-10-12 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy resolved CASSANDRA-12779.
---
Resolution: Fixed

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test
> --
>
> Key: CASSANDRA-12779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12779
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/13/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x/limit_multiget_test
> {code}
> Error Message
> Expected [[48, 'http://foo.com', 42]] from SELECT * FROM clicks WHERE userid 
> IN (48, 2) LIMIT 1, but got [[2, u'http://foo.com', 42]]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 362, in limit_multiget_test
> assert_one(cursor, "SELECT * FROM clicks WHERE userid IN (48, 2) LIMIT 
> 1", [48, 'http://foo.com', 42])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-12 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568953#comment-15568953
 ] 

Aleksey Yeschenko commented on CASSANDRA-12373:
---

What we want/need to do re: schema is change the python and java drivers, but 
that's about it.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column famillies show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibilty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-12 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568968#comment-15568968
 ] 

Jeremiah Jordan commented on CASSANDRA-12373:
-

Don't forget the "snapshot" schema changing code.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column famillies show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibilty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12782) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-10-12 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12782:
-

 Summary: dtest failure in 
repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
 Key: CASSANDRA-12782
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12782
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
node4.log, node4_debug.log, node4_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-3.X_dtest/6/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges

{code}
Error Message

Subprocess sstablemetadata on keyspace: keyspace1, column_family: None exited 
with non-zero status; exit status: 1; 
stdout: 
usage: Usage: sstablemetadata [--gc_grace_seconds n] 
Dump contents of given SSTable to standard output in JSON format.
--gc_grace_secondsThe gc_grace_seconds to use when
   calculating droppable tombstones
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File 
"/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", line 
369, in sstable_marking_test_not_intersecting_all_ranges
for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for node 
in cluster.nodelist()):
  File 
"/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", line 
369, in 
for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for node 
in cluster.nodelist()):
  File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1021, in 
run_sstablemetadata
return handle_external_tool_process(p, "sstablemetadata on keyspace: {}, 
column_family: {}".format(keyspace, column_families))
  File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1983, in 
handle_external_tool_process
raise ToolError(cmd_args, rc, out, err)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12781) dtest failure in pushed_notifications_test.TestPushedNotifications.restart_node_test

2016-10-12 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12781:
-

 Summary: dtest failure in 
pushed_notifications_test.TestPushedNotifications.restart_node_test
 Key: CASSANDRA-12781
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12781
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-3.X_dtest/4/testReport/pushed_notifications_test/TestPushedNotifications/restart_node_test

{code}
Error Message

[{'change_type': u'DOWN', 'address': ('127.0.0.2', 9042)}, {'change_type': 
u'UP', 'address': ('127.0.0.2', 9042)}, {'change_type': u'DOWN', 'address': 
('127.0.0.2', 9042)}]
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/pushed_notifications_test.py", line 
181, in restart_node_test
self.assertEquals(expected_notifications, len(notifications), notifications)
  File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12780) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_1_x.in_order_by_without_selecting_test

2016-10-12 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12780:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_1_x.in_order_by_without_selecting_test
 Key: CASSANDRA-12780
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12780
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/13/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_1_x/in_order_by_without_selecting_test

{code}
Error Message

Expected [[3], [4], [5], [0], [1], [2]] from SELECT v FROM test WHERE k IN (1, 
0), but got [[0], [1], [2], [3], [4], [5]]
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in 
wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 4200, 
in in_order_by_without_selecting_test
assert_all(cursor, "SELECT v FROM test WHERE k IN (1, 0)", [[3], [4], [5], 
[0], [1], [2]])
  File "/home/automaton/cassandra-dtest/tools/assertions.py", line 169, in 
assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-12 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568894#comment-15568894
 ] 

Jeremiah Jordan commented on CASSANDRA-12701:
-

For the upgrade scenario, users will need to update the settings themselves, 
which they can already do. Because users could have already made similar 
changes to these tables, we can't modify them ourselves during an upgrade.
We probably need a NEWS.txt entry recommending that people do this, and also 
noting that they may want to delete the data written before the TTL was 
applied, though there isn't really a good way to do that :/.
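
For illustration, a minimal sketch of the kind of statement operators could run 
themselves - assuming a 30-day TTL and 1-day TWCS windows on 
{{system_distributed.repair_history}} (and similarly for 
{{parent_repair_history}}); the exact values are an operator judgment call:
{code}
ALTER TABLE system_distributed.repair_history
    WITH default_time_to_live = 2592000
    AND compaction = {'class': 'TimeWindowCompactionStrategy',
                      'compaction_window_unit': 'DAYS',
                      'compaction_window_size': '1'};
{code}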

> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I dont think much reads from them which might help but its still 
> kinda wasted disk space. I think a month TTL (longer than gc grace) and maybe 
> a 1 day twcs window makes sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12779) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test

2016-10-12 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12779:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x.limit_multiget_test
 Key: CASSANDRA-12779
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12779
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/13/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_0_x_To_indev_2_1_x/limit_multiget_test

{code}
Error Message

Expected [[48, 'http://foo.com', 42]] from SELECT * FROM clicks WHERE userid IN 
(48, 2) LIMIT 1, but got [[2, u'http://foo.com', 42]]
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 362, 
in limit_multiget_test
assert_one(cursor, "SELECT * FROM clicks WHERE userid IN (48, 2) LIMIT 1", 
[48, 'http://foo.com', 42])
  File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
assert_one
assert list_res == [expected], "Expected {} from {}, but got 
{}".format([expected], query, list_res)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-12 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568868#comment-15568868
 ] 

Alex Petrov commented on CASSANDRA-12373:
-

bq. Actually, we don't want to touch the schema.

Right. I've tried to fix my wording (you might have seen the edits), but it was 
still imprecise. 

Thank you for confirming the result format. I'm mostly done with the {{SELECT}} 
special-casing; I just need to run a few more tests to make sure that all the 
cases are covered. I will then move to adding {{2.x}} tests and after that to 
{{INSERT/UPDATE}}.

bq. It would be really nice if we could keep all that code reasonably 
encapsulated too.

Gladly, most of the time we just need a {{ResultSet}}, {{Partition}} and 
{{CFMetaData}}, so keeping this code aside should not be a big problem. We 
could do it similarly to the {{CompactTables}} class.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568738#comment-15568738
 ] 

Sylvain Lebresne commented on CASSANDRA-12373:
--

bq. We would like to change the way schema and the resultset are currently 
represented

Actually, we don't want to touch the schema. That is, to be precise, this 
ticket shouldn't change how anything is stored internally, and thus shouldn't 
change the schema tables. This does mean that fixing the output of {{DESCRIBE}} 
is actually not a direct part of this ticket, as I believe it's implemented by 
the python driver nowadays. We would however encourage drivers to special-case 
super column families too, so that they expose the {{tbl}} table of your 
example as:
{noformat}
CREATE TABLE tbl (
key ascii,
column1 ascii,
column2 int,
value ascii,
PRIMARY KEY (key, column1, column2)
) WITH COMPACT STORAGE;
{noformat}
and that's indeed how we want the table to behave.

bq. would return results in form of

Yes, that's what we want. But this goes beyond just result sets: we want the 
table to behave exactly as if it had the definition from above, meaning that 
we'll allow queries like
{noformat}
INSERT INTO tbl (key, column1, column2, value) VALUES (...);
SELECT value FROM tbl WHERE key = 'key1' AND column1 = 'val1' AND column2 = 2;
{noformat}
but we will *not* allow
{noformat}
INSERT INTO tbl (key, column1, "") VALUES ();
SELECT "" FROM tbl WHERE key = 'key1' AND column1 = 'val1';
{noformat}

In general though, the best description of what we want this ticket to do is 
that any CQL query on a super column table should behave in 3.0/3.x _exactly_ 
as it behaved in 2.x. Which highlights the fact that we have no CQL tests for 
super columns, so a first step could be to write decent coverage and test it 
on 2.x, and then get it to work on 3.0/3.x.

I'll note that as I said in CASSANDRA-12335, this means we'll probably need to 
intercept INSERT/UPDATE and SELECT (raw) statements on super column tables 
early and basically rewrite them to match the internal representation, plus 
post-process the result sets. It would be really nice if we could keep all 
that code reasonably encapsulated too. 
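
For illustration, a minimal sketch of the kind of 2.x-style statements such CQL 
coverage would need to exercise against a super column table like {{tbl}} above 
(hypothetical keys and values):
{code}
-- all of these should behave identically on 2.x and on 3.0/3.x
INSERT INTO tbl (key, column1, column2, value) VALUES ('key1', 'val1', 2, 'value2');
UPDATE tbl SET value = 'value2b' WHERE key = 'key1' AND column1 = 'val1' AND column2 = 2;
SELECT column2, value FROM tbl WHERE key = 'key1' AND column1 = 'val1' AND column2 >= 2;
DELETE FROM tbl WHERE key = 'key1' AND column1 = 'val1' AND column2 = 2;
{code}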

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-10-12 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-12700:

Fix Version/s: (was: 4.x)
   (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.10
   3.0.10
   2.2.9

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 2.2.9, 3.0.10, 3.10
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. When trying to insert 2 million or more rows into 
> the database, we sometimes get a "Null pointer Exception". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and on 
> the client it's Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> 

[jira] [Commented] (CASSANDRA-12775) CQLSH should be able to pin requests to a server

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568590#comment-15568590
 ] 

Robert Stupp commented on CASSANDRA-12775:
--

cqlsh uses a whitelist load-balancing policy - so you're only ever talking to 
the node that you specified on the command line.

> CQLSH should be able to pin requests to a server
> 
>
> Key: CASSANDRA-12775
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12775
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>
> If CASSANDRA-7296 is added, it would be very helpful to be able to ensure 
> requests are sent to a specific machine for debugging purposes when using 
> cqlsh.  something as simple as PIN & UNPIN to the host provided when starting 
> cqlsh would be enough, with PIN optionally taking a new host to pin requests 
> to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11968) More metrics on native protocol requests & responses

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568552#comment-15568552
 ] 

Robert Stupp commented on CASSANDRA-11968:
--

Not yet (haven't had much time).

> More metrics on native protocol requests & responses
> 
>
> Key: CASSANDRA-11968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11968
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> Proposal to add more metrics to the native protocol:
> - number of requests per request-type
> - number of responses by response-type
> - size of request messages in bytes
> - size of response messages in bytes
> - number of in-flight requests (from request arrival to response)
> (Will provide a patch soon)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12646) nodetool stopdaemon errors out on stopdaemon

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568538#comment-15568538
 ] 

Robert Stupp commented on CASSANDRA-12646:
--

Alright - changed. Also added a 3.X branch (it's a non-conflicting merge).

||cassandra-3.0|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...snazy:12646-nodetool-shutdown-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12646-nodetool-shutdown-3.0-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12646-nodetool-shutdown-3.0-dtest/lastSuccessfulBuild/]
||cassandra-3.X|[branch|https://github.com/apache/cassandra/compare/cassandra-3.X...snazy:12646-nodetool-shutdown-3.X]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12646-nodetool-shutdown-3.X-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12646-nodetool-shutdown-3.X-dtest/lastSuccessfulBuild/]
||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:12646-nodetool-shutdown-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12646-nodetool-shutdown-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12646-nodetool-shutdown-trunk-dtest/lastSuccessfulBuild/]

> nodetool stopdaemon errors out on stopdaemon
> 
>
> Key: CASSANDRA-12646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12646
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> {{nodetool stopdaemon}} works, but it prints a {{java.net.ConnectException: 
> Connection refused}} error message in {{NodeProbe.close()}} - which is 
> expected.
> Attached patch prevents that error message (i.e. it expects {{close()}} to 
> fail for {{stopdaemon}}).
> Additionally, on trunk a call to {{DD.clientInit()}} has been added, because 
> {{JVMStabilityInspector.inspectThrowable}} implicitly requires this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12274) mx4j does not work in 3.0.8

2016-10-12 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12274:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   3.10
   3.0.10
   Status: Resolved  (was: Ready to Commit)

Thanks!
Committed as 
[d5f2d0f07f852f0475386c3585bd2efb2c16249b|https://github.com/apache/cassandra/commit/d5f2d0f07f852f0475386c3585bd2efb2c16249b]
 to [cassandra-3.0|https://github.com/apache/cassandra/tree/cassandra-3.0] and 
merged to cassandra-3.X and trunk.

> mx4j does not work in 3.0.8
> ---
>
> Key: CASSANDRA-12274
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12274
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: suse 12
> java 1.8.0_60
> mx4j 3.0.2
>Reporter: Ilya
>Assignee: Robert Stupp
> Fix For: 3.0.10, 3.10
>
> Attachments: mx4j-error-log.txt
>
>
> After updating from 2.1 to a 3.x version, the mx4j page is empty
> {code}
> $ curl -i cassandra1:8081
> HTTP/1.0 200 OK
> expires: now
> Server: MX4J-HTTPD/1.0
> Cache-Control: no-cache
> pragma: no-cache
> Content-Type: text/html
> {code}
> There are no errors in the log.
> logs:
> {code}
> ~ $ grep -i mx4j /local/apache-cassandra/logs/system.log | tail -2
> INFO  [main] 2016-07-22 13:48:00,352 CassandraDaemon.java:432 - JVM 
> Arguments: [-Xloggc:/local/apache-cassandra//logs/gc.log, 
> -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, 
> -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/local/tmp, -Xss256k, 
> -XX:StringTableSize=103, -XX:+AlwaysPreTouch, -XX:+UseTLAB, 
> -XX:+ResizeTLAB, -XX:+UseNUMA, -Djava.net.preferIPv4Stack=true, -Xms512M, 
> -Xmx1G, -XX:+UseG1GC, -XX:G1RSetUpdatingPauseTimePercent=5, 
> -XX:MaxGCPauseMillis=500, -XX:InitiatingHeapOccupancyPercent=25, 
> -XX:G1HeapRegionSize=32m, -XX:ParallelGCThreads=16, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, 
> -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, 
> -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, 
> -XX:CompileCommandFile=/local/apache-cassandra//conf/hotspot_compiler, 
> -javaagent:/local/apache-cassandra//lib/jamm-0.3.0.jar, 
> -Djava.rmi.server.hostname=cassandra1.d3, 
> -Dcom.sun.management.jmxremote.port=7199, 
> -Dcom.sun.management.jmxremote.rmi.port=7199, 
> -Dcom.sun.management.jmxremote.ssl=false, 
> -Dcom.sun.management.jmxremote.authenticate=false, 
> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password,
>  -Djava.library.path=/local/apache-cassandra//lib/sigar-bin, -Dmx4jport=8081, 
> -Dlogback.configurationFile=logback.xml, 
> -Dcassandra.logdir=/local/apache-cassandra//logs, 
> -Dcassandra.storagedir=/local/apache-cassandra//data, 
> -Dcassandra-pidfile=/local/apache-cassandra/run/cassandra.pid]
> INFO  [main] 2016-07-22 13:48:04,045 Mx4jTool.java:63 - mx4j successfuly 
> loaded
> {code}
> {code}
> ~ $ sudo lsof -i:8081
> COMMAND   PID  USER   FD   TYPEDEVICE SIZE/OFF NODE NAME
> java14489 cassandra   86u  IPv4 381043582  0t0  TCP 
> cassandra1.d3:sunproxyadmin (LISTEN)
> {code}
> I checked versions 3.0.8 and 3.5; the result is the same - it does not work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-12 Thread snazy
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f1b742e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f1b742e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f1b742e9

Branch: refs/heads/trunk
Commit: f1b742e9df30e4331223eeb6ae9d536f8d09
Parents: b25d903 d5f2d0f
Author: Robert Stupp 
Authored: Wed Oct 12 13:45:21 2016 +0200
Committer: Robert Stupp 
Committed: Wed Oct 12 13:45:21 2016 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f1b742e9/CHANGES.txt
--
diff --cc CHANGES.txt
index c59459c,d797288..e733214
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,84 -1,5 +1,85 @@@
 -3.0.10
 +3.10
 + * Check for hash conflicts in prepared statements (CASSANDRA-12733)
 + * Exit query parsing upon first error (CASSANDRA-12598)
 + * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
 + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)
 + * Config class uses boxed types but DD exposes primitive types 
(CASSANDRA-12199)
 + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461)
 + * Add hint delivery metrics (CASSANDRA-12693)
 + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731)
 + * ColumnIndex does not reuse buffer (CASSANDRA-12502)
 + * cdc column addition still breaks schema migration tasks (CASSANDRA-12697)
 + * Upgrade metrics-reporter dependencies (CASSANDRA-12089)
 + * Tune compaction thread count via nodetool (CASSANDRA-12248)
 + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232)
 + * Include repair session IDs in repair start message (CASSANDRA-12532)
 + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039)
 + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667)
 + * Support optional backpressure strategies at the coordinator 
(CASSANDRA-9318)
 + * Make randompartitioner work with new vnode allocation (CASSANDRA-12647)
 + * Fix cassandra-stress graphing (CASSANDRA-12237)
 + * Allow filtering on partition key columns for queries without secondary 
indexes (CASSANDRA-11031)
 + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
 + * Add JMH benchmarks.jar (CASSANDRA-12586)
 + * Add row offset support to SASI (CASSANDRA-11990)
 + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567)
 + * Add keep-alive to streaming (CASSANDRA-11841)
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages 
(CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 
12550)
 + * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366, CASSANDRA-12717)
 + * Delay releasing Memtable memory on flush until PostFlush has finished 
running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)
 + * Deprecate memtable_cleanup_threshold and update default for 

[1/6] cassandra git commit: mx4j does not work in 3.0.8

2016-10-12 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 703b151b1 -> d5f2d0f07
  refs/heads/cassandra-3.X b25d9030a -> f1b742e9d
  refs/heads/trunk 0b82c4fc6 -> cd728d2e7


mx4j does not work in 3.0.8

patch by Robert Stupp; reviewed by T Jake Luciani for CASSANDRA-12274


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d5f2d0f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d5f2d0f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d5f2d0f0

Branch: refs/heads/cassandra-3.0
Commit: d5f2d0f07f852f0475386c3585bd2efb2c16249b
Parents: 703b151
Author: Robert Stupp 
Authored: Wed Oct 12 13:43:03 2016 +0200
Committer: Robert Stupp 
Committed: Wed Oct 12 13:43:03 2016 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 8 
 2 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5f2d0f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 81fb544..d797288 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * mx4j does not work in 3.0.8 (CASSANDRA-12274)
  * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
  * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
  * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5f2d0f0/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index e3cd8cf..d87e0bf 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -164,6 +164,10 @@ public class CassandraDaemon
 if (FBUtilities.isWindows())
 WindowsFailedSnapshotTracker.deleteOldSnapshots();
 
+maybeInitJmx();
+
+Mx4jTool.maybeLoad();
+
 ThreadAwareSecurityManager.install();
 
 logSystemInfo();
@@ -195,8 +199,6 @@ public class CassandraDaemon
 // This should be the first write to SystemKeyspace (CASSANDRA-11742)
 SystemKeyspace.persistLocalMetadata();
 
-maybeInitJmx();
-
 Thread.setDefaultUncaughtExceptionHandler(new 
Thread.UncaughtExceptionHandler()
 {
 public void uncaughtException(Thread t, Throwable e)
@@ -349,8 +351,6 @@ public class CassandraDaemon
 exitOrFail(1, "Fatal configuration error", e);
 }
 
-Mx4jTool.maybeLoad();
-
 // Metrics
 String metricsReporterConfigFile = 
System.getProperty("cassandra.metricsReporterConfigFile");
 if (metricsReporterConfigFile != null)



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-12 Thread snazy
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f1b742e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f1b742e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f1b742e9

Branch: refs/heads/cassandra-3.X
Commit: f1b742e9df30e4331223eeb6ae9d536f8d09
Parents: b25d903 d5f2d0f
Author: Robert Stupp 
Authored: Wed Oct 12 13:45:21 2016 +0200
Committer: Robert Stupp 
Committed: Wed Oct 12 13:45:21 2016 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f1b742e9/CHANGES.txt
--
diff --cc CHANGES.txt
index c59459c,d797288..e733214
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,84 -1,5 +1,85 @@@
 -3.0.10
 +3.10
 + * Check for hash conflicts in prepared statements (CASSANDRA-12733)
 + * Exit query parsing upon first error (CASSANDRA-12598)
 + * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
 + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)
 + * Config class uses boxed types but DD exposes primitive types 
(CASSANDRA-12199)
 + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461)
 + * Add hint delivery metrics (CASSANDRA-12693)
 + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731)
 + * ColumnIndex does not reuse buffer (CASSANDRA-12502)
 + * cdc column addition still breaks schema migration tasks (CASSANDRA-12697)
 + * Upgrade metrics-reporter dependencies (CASSANDRA-12089)
 + * Tune compaction thread count via nodetool (CASSANDRA-12248)
 + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232)
 + * Include repair session IDs in repair start message (CASSANDRA-12532)
 + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039)
 + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667)
 + * Support optional backpressure strategies at the coordinator 
(CASSANDRA-9318)
 + * Make randompartitioner work with new vnode allocation (CASSANDRA-12647)
 + * Fix cassandra-stress graphing (CASSANDRA-12237)
 + * Allow filtering on partition key columns for queries without secondary 
indexes (CASSANDRA-11031)
 + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
 + * Add JMH benchmarks.jar (CASSANDRA-12586)
 + * Add row offset support to SASI (CASSANDRA-11990)
 + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567)
 + * Add keep-alive to streaming (CASSANDRA-11841)
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages 
(CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 
12550)
 + * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366, CASSANDRA-12717)
 + * Delay releasing Memtable memory on flush until PostFlush has finished 
running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)
 + * Deprecate memtable_cleanup_threshold and update 

[6/6] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-12 Thread snazy
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cd728d2e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cd728d2e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cd728d2e

Branch: refs/heads/trunk
Commit: cd728d2e735643b266303ae3adc8aeccad1e080b
Parents: 0b82c4f f1b742e
Author: Robert Stupp 
Authored: Wed Oct 12 13:45:27 2016 +0200
Committer: Robert Stupp 
Committed: Wed Oct 12 13:45:27 2016 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cd728d2e/CHANGES.txt
--



[2/6] cassandra git commit: mx4j does not work in 3.0.8

2016-10-12 Thread snazy
mx4j does not work in 3.0.8

patch by Robert Stupp; reviewed by T Jake Luciani for CASSANDRA-12274


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d5f2d0f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d5f2d0f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d5f2d0f0

Branch: refs/heads/cassandra-3.X
Commit: d5f2d0f07f852f0475386c3585bd2efb2c16249b
Parents: 703b151
Author: Robert Stupp 
Authored: Wed Oct 12 13:43:03 2016 +0200
Committer: Robert Stupp 
Committed: Wed Oct 12 13:43:03 2016 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 8 
 2 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5f2d0f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 81fb544..d797288 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * mx4j does not work in 3.0.8 (CASSANDRA-12274)
  * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
  * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
  * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5f2d0f0/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index e3cd8cf..d87e0bf 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -164,6 +164,10 @@ public class CassandraDaemon
 if (FBUtilities.isWindows())
 WindowsFailedSnapshotTracker.deleteOldSnapshots();
 
+maybeInitJmx();
+
+Mx4jTool.maybeLoad();
+
 ThreadAwareSecurityManager.install();
 
 logSystemInfo();
@@ -195,8 +199,6 @@ public class CassandraDaemon
 // This should be the first write to SystemKeyspace (CASSANDRA-11742)
 SystemKeyspace.persistLocalMetadata();
 
-maybeInitJmx();
-
 Thread.setDefaultUncaughtExceptionHandler(new 
Thread.UncaughtExceptionHandler()
 {
 public void uncaughtException(Thread t, Throwable e)
@@ -349,8 +351,6 @@ public class CassandraDaemon
 exitOrFail(1, "Fatal configuration error", e);
 }
 
-Mx4jTool.maybeLoad();
-
 // Metrics
 String metricsReporterConfigFile = 
System.getProperty("cassandra.metricsReporterConfigFile");
 if (metricsReporterConfigFile != null)



[3/6] cassandra git commit: mx4j does not work in 3.0.8

2016-10-12 Thread snazy
mx4j does not work in 3.0.8

patch by Robert Stupp; reviewed by T Jake Luciani for CASSANDRA-12274


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d5f2d0f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d5f2d0f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d5f2d0f0

Branch: refs/heads/trunk
Commit: d5f2d0f07f852f0475386c3585bd2efb2c16249b
Parents: 703b151
Author: Robert Stupp 
Authored: Wed Oct 12 13:43:03 2016 +0200
Committer: Robert Stupp 
Committed: Wed Oct 12 13:43:03 2016 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 8 
 2 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5f2d0f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 81fb544..d797288 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.10
+ * mx4j does not work in 3.0.8 (CASSANDRA-12274)
  * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
  * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
  * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5f2d0f0/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index e3cd8cf..d87e0bf 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -164,6 +164,10 @@ public class CassandraDaemon
 if (FBUtilities.isWindows())
 WindowsFailedSnapshotTracker.deleteOldSnapshots();
 
+maybeInitJmx();
+
+Mx4jTool.maybeLoad();
+
 ThreadAwareSecurityManager.install();
 
 logSystemInfo();
@@ -195,8 +199,6 @@ public class CassandraDaemon
 // This should be the first write to SystemKeyspace (CASSANDRA-11742)
 SystemKeyspace.persistLocalMetadata();
 
-maybeInitJmx();
-
 Thread.setDefaultUncaughtExceptionHandler(new 
Thread.UncaughtExceptionHandler()
 {
 public void uncaughtException(Thread t, Throwable e)
@@ -349,8 +351,6 @@ public class CassandraDaemon
 exitOrFail(1, "Fatal configuration error", e);
 }
 
-Mx4jTool.maybeLoad();
-
 // Metrics
 String metricsReporterConfigFile = 
System.getProperty("cassandra.metricsReporterConfigFile");
 if (metricsReporterConfigFile != null)



[jira] [Updated] (CASSANDRA-12274) mx4j does not work in 3.0.8

2016-10-12 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12274:
-
Summary: mx4j does not work in 3.0.8  (was: mx4j not work in 3.0.8)

> mx4j does not work in 3.0.8
> ---
>
> Key: CASSANDRA-12274
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12274
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: suse 12
> java 1.8.0_60
> mx4j 3.0.2
>Reporter: Ilya
>Assignee: Robert Stupp
> Fix For: 3.0.x
>
> Attachments: mx4j-error-log.txt
>
>
> After updating from 2.1 to a 3.x version, the mx4j page is empty
> {code}
> $ curl -i cassandra1:8081
> HTTP/1.0 200 OK
> expires: now
> Server: MX4J-HTTPD/1.0
> Cache-Control: no-cache
> pragma: no-cache
> Content-Type: text/html
> {code}
> There are no errors in the log.
> logs:
> {code}
> ~ $ grep -i mx4j /local/apache-cassandra/logs/system.log | tail -2
> INFO  [main] 2016-07-22 13:48:00,352 CassandraDaemon.java:432 - JVM 
> Arguments: [-Xloggc:/local/apache-cassandra//logs/gc.log, 
> -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, 
> -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/local/tmp, -Xss256k, 
> -XX:StringTableSize=103, -XX:+AlwaysPreTouch, -XX:+UseTLAB, 
> -XX:+ResizeTLAB, -XX:+UseNUMA, -Djava.net.preferIPv4Stack=true, -Xms512M, 
> -Xmx1G, -XX:+UseG1GC, -XX:G1RSetUpdatingPauseTimePercent=5, 
> -XX:MaxGCPauseMillis=500, -XX:InitiatingHeapOccupancyPercent=25, 
> -XX:G1HeapRegionSize=32m, -XX:ParallelGCThreads=16, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, 
> -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, 
> -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, 
> -XX:CompileCommandFile=/local/apache-cassandra//conf/hotspot_compiler, 
> -javaagent:/local/apache-cassandra//lib/jamm-0.3.0.jar, 
> -Djava.rmi.server.hostname=cassandra1.d3, 
> -Dcom.sun.management.jmxremote.port=7199, 
> -Dcom.sun.management.jmxremote.rmi.port=7199, 
> -Dcom.sun.management.jmxremote.ssl=false, 
> -Dcom.sun.management.jmxremote.authenticate=false, 
> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password,
>  -Djava.library.path=/local/apache-cassandra//lib/sigar-bin, -Dmx4jport=8081, 
> -Dlogback.configurationFile=logback.xml, 
> -Dcassandra.logdir=/local/apache-cassandra//logs, 
> -Dcassandra.storagedir=/local/apache-cassandra//data, 
> -Dcassandra-pidfile=/local/apache-cassandra/run/cassandra.pid]
> INFO  [main] 2016-07-22 13:48:04,045 Mx4jTool.java:63 - mx4j successfuly 
> loaded
> {code}
> {code}
> ~ $ sudo lsof -i:8081
> COMMAND   PID  USER   FD   TYPEDEVICE SIZE/OFF NODE NAME
> java14489 cassandra   86u  IPv4 381043582  0t0  TCP 
> cassandra1.d3:sunproxyadmin (LISTEN)
> {code}
> I checked versions 3.0.8 and 3.5; the result is the same - it does not work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-12 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1025#comment-1025
 ] 

Alex Petrov edited comment on CASSANDRA-12373 at 10/12/16 10:56 AM:


I've started collecting information on what needs to be done. I just want to 
clarify the behaviour first:

We would like to change the way the schema and the result set are currently 
presented: instead of the {{"" map}}, two actual columns, {{column}} (depending 
on the current clustering key size) and {{value}}, just as it was presented in 
the example in [CASSANDRA-12335], while preserving their internal 
representation (internally, the map type will still be used for storage).

In CQL terms
{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
"" map,
PRIMARY KEY (key, column1))
AND COMPACT STORAGE
{code}

would return results in form of  

{code}
 key  | column1 | column2 | value  |
--+-+-++
 key1 | val1| 1   | value1 |
 key1 | val1| 2   | value2 |
 key1 | val1| 3   | value3 |
 key1 | val2| 1   | value1 |
 key1 | val2| 2   | value2 |
 key1 | val2| 3   | value3 |
{code}

(note that {{column2}} is not clustering as [~slebresne] described in comment).

And this kind of special-casing will be valid for both read and write paths.


was (Author: ifesdjeen):
I've started collecting information on what needs to be done. I just want to 
clarify the behaviour first:

We would like to change the way schema and the resultset are currently 
represented (instead of the {{"" map}} to two actual 
columns: {{column}} (depending on the current clustering key size) and 
{{value}}, just as it was presented in example in [CASSANDRA-12335], although 
preserve their internal representation (internally, map type will still be used 
for storage).

In CQL terms
{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
"" map,
PRIMARY KEY (key, column1))
AND COMPACT STORAGE
{code}

would become 

{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
column2 int,
value ascii,
PRIMARY KEY (key, column1))
AND COMPACT STORAGE
{code}

(note that {{column2}} is not clustering as [~slebresne] described in comment).

And this kind of special-casing will be valid for both read and write paths.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5988) Make hint TTL customizable

2016-10-12 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568350#comment-15568350
 ] 

Aleksey Yeschenko commented on CASSANDRA-5988:
--

[~kohlisankalp] We will need to either modify the {{HintsDispatcher}} logic to 
take 'maxhintttl' into account (comparing it against current time minus the 
hint's creationTime), or do the same even earlier, in {{HintsReader}}. The 
former is probably cleaner; the latter can be done a bit more efficiently - 
skipping the hint body entirely if the gcgs/creationTime/maxhintttl combination 
says the hint is basically dead.

Don't have time atm to do it, but [~bdeggleston] should be pretty familiar with 
that code, as he added compression logic - I can review.

> Make hint TTL customizable
> --
>
> Key: CASSANDRA-5988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5988
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Oleg Kibirev
>Assignee: Vishy Kasar
>  Labels: patch
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 5988.txt
>
>
> Currently the time to live for stored hints is hardcoded to gc_grace_seconds. 
> This causes problems for applications using backdated deletes as a form of 
> optimistic locking. Hints for updates made to the same data on which a delete 
> was attempted can persist for days, making it impossible to determine whether 
> the delete succeeded by doing a read(ALL) after a reasonable delay. We need a 
> way to explicitly configure the hint TTL, either through a schema parameter 
> or through a yaml file.
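
For illustration, a backdated delete of the kind described above might look 
like this - a sketch with a hypothetical table and timestamp, where the delete 
carries an explicitly older timestamp so that any later-timestamped write to 
the same row wins over it:
{code}
-- hypothetical table; the timestamp (microseconds) is deliberately "in the past"
DELETE FROM locks USING TIMESTAMP 1378000000000000 WHERE lock_id = 'job-42';
{code}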



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-10303) streaming for 'nodetool rebuild' fails after adding a datacenter

2016-10-12 Thread Abhinav Johri (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinav Johri updated CASSANDRA-10303:
--
Comment: was deleted

(was: Hey did you find a solution for this problem. Facing the same problem 
while rebuilding nodes.)

> streaming for 'nodetool rebuild' fails after adding a datacenter 
> -
>
> Key: CASSANDRA-10303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10303
> Project: Cassandra
>  Issue Type: Bug
> Environment: jdk1.7
> cassandra 2.1.8
>Reporter: zhaoyan
>
> We added another datacenter and used nodetool rebuild DC1.
> Streaming from some nodes of the old datacenter always hangs with this 
> exception:
> {code}
> ERROR [Thread-1472] 2015-09-10 19:24:53,091 CassandraDaemon.java:223 - 
> Exception in thread Thread[Thread-1472,5,RMI Runtime]
> java.lang.RuntimeException: java.io.IOException: Connection timed out
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
> Caused by: java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_60]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_60]
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:59) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_60]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:172)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> ... 1 common frames omitted
> {code}
> I must restart the node to stop the current rebuild, and rebuild again and 
> again until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10303) streaming for 'nodetool rebuild' fails after adding a datacenter

2016-10-12 Thread Abhinav Johri (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568282#comment-15568282
 ] 

Abhinav Johri commented on CASSANDRA-10303:
---

Hey, did you find a solution for this problem? I'm facing the same problem 
while rebuilding nodes.

> streaming for 'nodetool rebuild' fails after adding a datacenter 
> -
>
> Key: CASSANDRA-10303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10303
> Project: Cassandra
>  Issue Type: Bug
> Environment: jdk1.7
> cassandra 2.1.8
>Reporter: zhaoyan
>
> We added another datacenter and used nodetool rebuild DC1.
> Streaming from some nodes of the old datacenter always hangs with this 
> exception:
> {code}
> ERROR [Thread-1472] 2015-09-10 19:24:53,091 CassandraDaemon.java:223 - 
> Exception in thread Thread[Thread-1472,5,RMI Runtime]
> java.lang.RuntimeException: java.io.IOException: Connection timed out
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
> Caused by: java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_60]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_60]
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:59) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_60]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:172)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> ... 1 common frames omitted
> {code}
> I must restart the node to stop the current rebuild, and rebuild again and 
> again until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10303) streaming for 'nodetool rebuild' fails after adding a datacenter

2016-10-12 Thread Abhinav Johri (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568280#comment-15568280
 ] 

Abhinav Johri commented on CASSANDRA-10303:
---

Hey, did you find a solution for this problem? I'm facing the same problem 
while rebuilding nodes.

> streaming for 'nodetool rebuild' fails after adding a datacenter 
> -
>
> Key: CASSANDRA-10303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10303
> Project: Cassandra
>  Issue Type: Bug
> Environment: jdk1.7
> cassandra 2.1.8
>Reporter: zhaoyan
>
> We added another datacenter and used nodetool rebuild DC1.
> Streaming from some nodes of the old datacenter always hangs with this 
> exception:
> {code}
> ERROR [Thread-1472] 2015-09-10 19:24:53,091 CassandraDaemon.java:223 - 
> Exception in thread Thread[Thread-1472,5,RMI Runtime]
> java.lang.RuntimeException: java.io.IOException: Connection timed out
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
> Caused by: java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_60]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_60]
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_60]
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:59) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) 
> ~[na:1.7.0_60]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_60]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:172)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> ... 1 common frames omitted
> {code}
> I must restart the node to stop the current rebuild, and rebuild again and 
> again until it succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-12 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1025#comment-1025
 ] 

Alex Petrov edited comment on CASSANDRA-12373 at 10/12/16 9:31 AM:
---

I've started collecting information on what needs to be done. I just want to 
clarify the behaviour first:

We would like to change the way the schema and the result set are currently 
represented: instead of the {{""}} map column, expose two actual columns, 
{{column}} (whose exact name depends on the current clustering key size) and 
{{value}}, just as presented in the example in [CASSANDRA-12335], while 
preserving the internal representation (internally, the map type will still be 
used for storage).

In CQL terms
{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
"" map,
PRIMARY KEY (key, column1))
WITH COMPACT STORAGE
{code}

would become 

{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
column2 int,
value ascii,
PRIMARY KEY (key, column1))
WITH COMPACT STORAGE
{code}

(note that {{column2}} is not a clustering column, as [~slebresne] described in his comment).

And this kind of special-casing will be valid for both read and write paths.
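
To make the intent concrete, here is a small conceptual sketch (my own 
illustration, not the proposed patch; class and method names are invented): the 
externally visible {{column2}}/{{value}} pair is just a view over the internal 
map storage, so a read looks the value up by sub-column and a write puts it 
back into the map.

{code}
import java.util.Map;
import java.util.TreeMap;

// Conceptual illustration only; not Cassandra code.
public class SuperColumnView
{
    // internal storage: the hidden "" map column (sub-column -> value)
    private final Map<Integer, String> internalMap = new TreeMap<>();

    // externally, the (column2, value) pair is presented instead of the map
    public String read(int column2)
    {
        return internalMap.get(column2);
    }

    public void write(int column2, String value)
    {
        internalMap.put(column2, value);
    }
}
{code}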


was (Author: ifesdjeen):
I've started collecting information on what needs to be done. I just want to 
clarify the behaviour first:

We would like to change the way schema and the resultset are currently 
represented (instead of the {{"" map}} to two actual 
columns: {{column}} (depending on the current clustering key size) and 
{{value}}, just as it was presented in example in [CASSANDRA-12335].

In CQL terms
{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
"" map,
PRIMARY KEY (key, column1))
WITH COMPACT STORAGE
{code}

would become 

{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
column2 int,
value ascii,
PRIMARY KEY (key, column1))
WITH COMPACT STORAGE
{code}

(note that {{column2}} is not clustering as [~slebresne] described in comment).

And this kind of special-casing will be valid for both read and write paths.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column famillies show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibilty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568186#comment-15568186
 ] 

Robert Stupp commented on CASSANDRA-10855:
--

Just asking because the jar's been changed but not the version (looks like a 
change to a released version).

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
> Attachments: CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead to LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568181#comment-15568181
 ] 

Ben Manes commented on CASSANDRA-10855:
---

I rebased and updated the jar in the PR. It's the same as in our previous 
discussion. The upgrade only brings maintenance improvements over the 2.2.6 
version previously in use.
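
For anyone who wants to try the library in isolation, here is a minimal, 
self-contained sketch of building and using a Caffeine cache (a standalone 
example, not the cache wiring from the attached patch; sizes and keys are 
arbitrary):

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineExample
{
    public static void main(String[] args)
    {
        // Size-bounded cache; eviction uses the W-TinyLFU policy discussed above.
        Cache<String, byte[]> cache = Caffeine.newBuilder()
                                              .maximumSize(10_000)
                                              .build();

        cache.put("key", new byte[] { 1, 2, 3 });            // explicit insert
        byte[] hit = cache.getIfPresent("key");               // lookup, null on miss
        byte[] loaded = cache.get("other", k -> load(k));     // compute on miss
        System.out.println(hit.length + " / " + loaded.length);
    }

    private static byte[] load(String key)
    {
        return new byte[key.length()];
    }
}
{code}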

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
> Attachments: CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead to LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568161#comment-15568161
 ] 

Robert Stupp commented on CASSANDRA-10855:
--

Haven't looked thoroughly through the code yet. One question: 
{{lib/caffeine-2.3.3.jar}} has been changed in the patch. Is it a new version 
of caffeine?

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
> Attachments: CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead to LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568144#comment-15568144
 ] 

Robert Stupp commented on CASSANDRA-11534:
--

+1 (cqlsh-tests pending)


> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.x
>
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> +-
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> +---
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-10-12 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11534:
-
Status: Ready to Commit  (was: Patch Available)

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.x
>
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> +-
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> +---
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-10-12 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568060#comment-15568060
 ] 

Robert Stupp commented on CASSANDRA-11534:
--

Alright - will do the review later today (sorry for the delay).

> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.x
>
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> +-
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> +---
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-10-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568033#comment-15568033
 ] 

Stefania commented on CASSANDRA-11534:
--

The driver pull request has been merged. I've updated the cqlsh driver and 
relaunched the tests:

||3.X||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11534-cqlsh-3.X]|[patch|https://github.com/stef1927/cassandra/commits/11534-cqlsh]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11534-cqlsh-3.X-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11534-cqlsh-cqlsh-tests/]|


> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.x
>
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> +-
>   1 |{1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> +---
>   1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-10-12 Thread Rajesh Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568006#comment-15568006
 ] 

Rajesh Radhakrishnan commented on CASSANDRA-12700:
--

Thank you [~jjirsa] and [~beobal] for the help. I will get the patched release 
as soon as it is ready.
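
For context, the failure above is an unguarded boolean deserialization hitting a 
missing value in the roles table. A minimal sketch of the kind of defensive 
check involved (my own illustration, not the committed patch; the class name is 
invented):

{code}
import java.nio.ByteBuffer;

// Illustration only: a null or empty buffer reads as "false" instead of throwing.
public final class NullSafeBooleanDeserializer
{
    private NullSafeBooleanDeserializer() {}

    public static boolean deserialize(ByteBuffer bytes)
    {
        if (bytes == null || bytes.remaining() == 0)
            return false;                        // treat a missing value as false
        return bytes.get(bytes.position()) != 0; // any non-zero byte means true
    }
}
{code}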

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 2.2.x, 3.0.x, 3.x, 4.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. While inserting 2 million rows or more into the 
> database, we sometimes get a "NullPointerException". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and on 
> the client it is Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  

[jira] [Updated] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Ben Manes (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Manes updated CASSANDRA-10855:
--
Attachment: CASSANDRA-10855.patch

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
> Attachments: CASSANDRA-10855.patch
>
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead to LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches

2016-10-12 Thread Ben Manes (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Manes updated CASSANDRA-10855:
--
Status: Patch Available  (was: Open)

> Use Caffeine (W-TinyLFU) for on-heap caches
> ---
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ben Manes
>  Labels: performance
>
> Cassandra currently uses 
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap] 
> for performance critical caches (key, counter) and Guava's cache for 
> non-critical (auth, metrics, security). All of these usages have been 
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the 
> author of the previously mentioned libraries.
> The primary incentive is to switch from LRU policy to W-TinyLFU, which 
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency] 
> hit rates. It performs particularly well in database and search traces, is 
> scan resistant, and adds only a very small time/space overhead to LRU.
> Secondarily, Guava's caches never obtained similar 
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM 
> due to some optimizations not being ported over. This change results in 
> faster reads and not creating garbage as a side-effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12778) Tombstones not being deleted when only_purge_repaired_tombstones is enabled

2016-10-12 Thread Arvind Nithrakashyap (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvind Nithrakashyap updated CASSANDRA-12778:
-
Summary: Tombstones not being deleted when only_purge_repaired_tombstones 
is enabled  (was: Tombstones not being Deleted when 
only_purge_repaired_tombstones is enabled)

> Tombstones not being deleted when only_purge_repaired_tombstones is enabled
> ---
>
> Key: CASSANDRA-12778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12778
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Arvind Nithrakashyap
>Priority: Critical
>
> When we use only_purge_repaired_tombstones for compaction, we noticed that 
> tombstones are no longer being deleted.
> {noformat}compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
> 'only_purge_repaired_tombstones': 'true'}{noformat}
> The root cause seems to be that repair itself issues a flush, which in turn 
> leads to a new sstable being created (one that is not in the repair set). It 
> looks like we do have some old data in this sstable, because only tombstones 
> older than that timestamp are getting deleted even though many more keys 
> have been repaired. 
> Fundamentally, it looks like flush and repair can race with each other: with 
> leveled compaction, the flush creates a new sstable at level 0 and removes 
> the older sstable (the one that was picked for repair). Since repair itself 
> seems to issue multiple flushes, the level 0 sstable never gets repaired and 
> hence tombstones never get deleted. 
> We have already included the fix for CASSANDRA-12703 while testing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12778) Tombstones not being Deleted when only_purge_repaired_tombstones is enabled

2016-10-12 Thread Arvind Nithrakashyap (JIRA)
Arvind Nithrakashyap created CASSANDRA-12778:


 Summary: Tombstones not being Deleted when 
only_purge_repaired_tombstones is enabled
 Key: CASSANDRA-12778
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12778
 Project: Cassandra
  Issue Type: Bug
Reporter: Arvind Nithrakashyap
Priority: Critical


When we use only_purge_repaired_tombstones for compaction, we noticed that 
tombstones are no longer being deleted.

{noformat}compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
'only_purge_repaired_tombstones': 'true'}{noformat}

The root cause seems to be that repair itself issues a flush, which in turn 
leads to a new sstable being created (one that is not in the repair set). It 
looks like we do have some old data in this sstable, because only tombstones 
older than that timestamp are getting deleted even though many more keys have 
been repaired. 

Fundamentally, it looks like flush and repair can race with each other: with 
leveled compaction, the flush creates a new sstable at level 0 and removes the 
older sstable (the one that was picked for repair). Since repair itself seems to 
issue multiple flushes, the level 0 sstable never gets repaired and hence 
tombstones never get deleted. 

We have already included the fix for CASSANDRA-12703 while testing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12777) Optimize the vnode allocation for single replica per DC

2016-10-12 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-12777:
--
Description: 
The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
for the situation that there are multiple replicas per DC.

In our production environment, most clusters only have one replica; in this 
case, the algorithm does not work perfectly. It always tries to split token 
ranges in half, so the ownership of the "min" node can go as low as ~60% of the 
average.

So for the single-replica case, I'm working on a new algorithm, based on 
Branimir's previous commit, that splits token ranges by "some" percentage 
instead of always by half. In this way, we can get a very small variation in 
ownership among different nodes.

  was:
The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
for the situation that there are multiple replicas per DC.

In our production environment, most cluster only has one replica, in this case, 
the algorithm does work perfectly. It always tries to split token ranges by 
half, so that the ownership of "min" node could go as low as ~60% compared to 
avg.

So for single replica case, I'm working on a new algorithm, which is based on 
Branimir's previous commit, to split token ranges by "some" percentage, instead 
of always by half. In this way, we can get a very small variation of the 
ownership among different nodes.


> Optimize the vnode allocation for single replica per DC
> ---
>
> Key: CASSANDRA-12777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
> for the situation that there are multiple replicas per DC.
> In our production environment, most clusters only have one replica; in this 
> case, the algorithm does not work perfectly. It always tries to split token 
> ranges in half, so the ownership of the "min" node can go as low as ~60% of 
> the average.
> So for the single-replica case, I'm working on a new algorithm, based on 
> Branimir's previous commit, that splits token ranges by "some" percentage 
> instead of always by half. In this way, we can get a very small variation in 
> ownership among different nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12777) Optimize the vnode allocation for single replica per DC

2016-10-12 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567784#comment-15567784
 ] 

Dikang Gu commented on CASSANDRA-12777:
---

I have a draft patch; here are some sample results:
{code}
4 vnode, 250 nodes, max 1.11 min 0.89 stddev 0.0734
16 vnode, 250 nodes, max 1.04 min 0.97 stddev 0.0179
64 vnode, 250 nodes, max 1.01 min 0.99 stddev 0.0044
{code}

Will clean it a bit and send it out for review.
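
To illustrate the idea, here is a sketch of the splitting heuristic only (my own 
illustration, not the draft patch): instead of always taking half of a range, a 
new vnode takes roughly the ideal per-vnode share of the ring, which keeps 
per-node ownership close to the average.

{code}
// Sketch only: how much of an existing range a new vnode could take.
public final class SplitFractionSketch
{
    private SplitFractionSketch() {}

    /**
     * @param rangeSize     size of the range being split, as a fraction of the ring
     * @param idealPerVnode 1.0 / (numNodes * vnodesPerNode)
     * @return the fraction of the range the new token should take
     */
    public static double splitFraction(double rangeSize, double idealPerVnode)
    {
        if (rangeSize <= idealPerVnode)
            return 0.5;                       // range is already small: halving is fine
        return idealPerVnode / rangeSize;     // otherwise take exactly the ideal share
    }

    public static void main(String[] args)
    {
        double ideal = 1.0 / (250 * 4);                      // 250 nodes, 4 vnodes each
        System.out.println(splitFraction(0.01, ideal));      // large range: small fraction
        System.out.println(splitFraction(0.0005, ideal));    // small range: plain halving
    }
}
{code}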

> Optimize the vnode allocation for single replica per DC
> ---
>
> Key: CASSANDRA-12777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
> for the situation that there are multiple replicas per DC.
> In our production environment, most clusters only have one replica; in this 
> case, the algorithm does not work perfectly. It always tries to split token 
> ranges in half, so the ownership of the "min" node can go as low as ~60% of 
> the average.
> So for the single-replica case, I'm working on a new algorithm, based on 
> Branimir's previous commit, that splits token ranges by "some" percentage 
> instead of always by half. In this way, we can get a very small variation in 
> ownership among different nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Merge branch 'cassandra-3.X' into trunk [Forced Update!]

2016-10-12 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6e9c3db56 -> 0b82c4fc6 (forced update)


Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b82c4fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b82c4fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b82c4fc

Branch: refs/heads/trunk
Commit: 0b82c4fc6f6b1fc8a6cb8f9e5a6c00f739dd5e44
Parents: 8e6a58c b25d903
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:27:20 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 23:28:48 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b82c4fc/CHANGES.txt
--
diff --cc CHANGES.txt
index 57ff13c,c59459c..cfc46ad
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -125,6 -119,6 +125,7 @@@ Merged from 2.2
   * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
   * Fail repair on non-existing table (CASSANDRA-12279)
   * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
++ * Better handle invalid system roles table (CASSANDRA-12700)
  
  
  3.8, 3.9



[jira] [Created] (CASSANDRA-12777) Optimize the vnode allocation for single replica per DC

2016-10-12 Thread Dikang Gu (JIRA)
Dikang Gu created CASSANDRA-12777:
-

 Summary: Optimize the vnode allocation for single replica per DC
 Key: CASSANDRA-12777
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12777
 Project: Cassandra
  Issue Type: Improvement
Reporter: Dikang Gu
Assignee: Dikang Gu
 Fix For: 3.x


The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
for the situation that there are multiple replicas per DC.

In our production environment, most clusters only have one replica; in this 
case, the algorithm does not work perfectly. It always tries to split token 
ranges in half, so the ownership of the "min" node can go as low as ~60% of the 
average.

So for the single-replica case, I'm working on a new algorithm, based on 
Branimir's previous commit, that splits token ranges by "some" percentage 
instead of always by half. In this way, we can get a very small variation in 
ownership among different nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-12 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15567769#comment-15567769
 ] 

Jeff Jirsa commented on CASSANDRA-12296:


{quote}
is not true, at least from my testing with rebuilds... I couldn't force this 
error message to occur with repair, but maybe I'm missing something.
{quote}

It can't hit with repair because that code block requires 
{{strat.getReplicationFactor() == 1}} - in that case, there would be nothing to 
repair.

The case I was imagining was bootstrap related, which has a similar error 
message, but is actually in {{getAllRangesWithStrictSourcesFor}} rather than 
{{getRangeFetchMap}} - so I withdraw my comment, and insert foot firmly into 
mouth - I can't see any way to trigger this with NTS, so perhaps "switch to 
NTS" is the right fix. 

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)