[jira] [Assigned] (CASSANDRA-13622) Better config validation/documentation

2017-07-12 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-13622:


Assignee: (was: ZhaoYang)

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Priority: Minor
>  Labels: lhf
>
> A number of properties in the yaml are specified "in_mb" but resolve to bytes 
> when calculated in {{DatabaseDescriptor.java}}, and they are stored in ints. 
> This means their maximum value is 2047, since anything higher overflows the 
> int when converted to bytes.
> Where possible/reasonable we should convert these to longs and store them as 
> longs. If there is no reason for a value to ever be >2047 we should at least 
> document that as the maximum, or better yet make it an error if set higher 
> than that. Although it's bad practice to increase a lot of these settings to 
> such high values, there may be cases where it is necessary, and in those cases 
> we should handle it appropriately rather than overflowing and surprising the 
> user. That is, causing it to break, but not in the way the user expected it 
> to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.
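
As a minimal illustration of the overflow described above (a hypothetical sketch, not code from the ticket or from {{DatabaseDescriptor.java}}): multiplying a 2048 MB setting into bytes in a 32-bit {{int}} wraps to a negative value, while widening to {{long}} first does not.

{code:java|title=MbToBytesOverflow.java (hypothetical)}
public class MbToBytesOverflow
{
    public static void main(String[] args)
    {
        int configuredMb = 2048;                              // e.g. an "_in_mb" value read from cassandra.yaml
        int bytesAsInt = configuredMb * 1024 * 1024;          // 2048 * 2^20 == 2^31, which overflows int
        long bytesAsLong = (long) configuredMb * 1024 * 1024; // widened before multiplying, so no overflow

        System.out.println(bytesAsInt);   // -2147483648
        System.out.println(bytesAsLong);  // 2147483648
    }
}
{code}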






[jira] [Assigned] (CASSANDRA-13622) Better config validation/documentation

2017-07-12 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-13622:


Assignee: ZhaoYang

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
>
> A number of properties in the yaml are specified "in_mb" but resolve to bytes 
> when calculated in {{DatabaseDescriptor.java}}, and they are stored in ints. 
> This means their maximum value is 2047, since anything higher overflows the 
> int when converted to bytes.
> Where possible/reasonable we should convert these to longs and store them as 
> longs. If there is no reason for a value to ever be >2047 we should at least 
> document that as the maximum, or better yet make it an error if set higher 
> than that. Although it's bad practice to increase a lot of these settings to 
> such high values, there may be cases where it is necessary, and in those cases 
> we should handle it appropriately rather than overflowing and surprising the 
> user. That is, causing it to break, but not in the way the user expected it 
> to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.






[jira] [Commented] (CASSANDRA-13657) Materialized Views: Index MV on TTL'ed column produces orphanized view entry if another column keeps entry live

2017-07-12 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085063#comment-16085063
 ] 

Kurt Greaves commented on CASSANDRA-13657:
--

[~fsander] 
bq. From my understanding that is not possible, since all PK parts must be NOT 
NULL
That is correct. Going back to the original implementation, there was a lot of 
discussion about this, and it was decided to enforce that all PK parts be NOT 
NULL in order to simplify the implementation for 3.0. However, at the time it 
was never confirmed whether that would be a strict requirement for the future; 
it was hinted at as a hard problem to solve, but not ruled out entirely. See the 
[original|https://issues.apache.org/jira/browse/CASSANDRA-6477?focusedCommentId=14627670=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14627670]
 ticket and the comments surrounding it.

My concern is that tying MV "components" to base "components" makes it more 
difficult to lift the {{IS NOT NULL}} requirement. That requirement might be 
perfectly fine, but I think a decision needs to be made beforehand on whether 
that is the path we'll go down with MVs.

This really applies to MV bugs/existing tickets in general, however (and this is 
probably not the only such issue), as it likely has an impact on how we fix all 
of them (i.e. do we cram more "features" in to fix the bugs, or do we aim to 
rewrite this stuff to make Cassandra more view-friendly?).

bq. The way I see it, a view-row's liveness is actually different from an 
ordinary row's liveness. So, it makes sense to have code representing that 
difference, instead of squeezing the new definition into the old mechanism and 
constantly having to patch stuff left and right.
To clarify: you're totally correct; however, I'm more asking the question of 
"do we want that to always be the case?". Personally I wouldn't mind, as it 
simplifies things (especially this case), but I'm certainly no authority on the 
matter.

Anyway, it would be good to get more people in on the discussion (and hopefully 
some of the original developers) so I'll ping the ML.

> Materialized Views: Index MV on TTL'ed column produces orphanized view entry 
> if another column keeps entry live
> ---
>
> Key: CASSANDRA-13657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13657
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Fridtjof Sander
>Assignee: Krishna Dattu Koneru
>  Labels: materializedviews, ttl
>
> {noformat}
> CREATE TABLE t (k int, a int, b int, PRIMARY KEY (k));
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (a, k);
> INSERT INTO t (k) VALUES (1);
> UPDATE t USING TTL 5 SET a = 10 WHERE k = 1;
> UPDATE t SET b = 100 WHERE k = 1;
> SELECT * from t; SELECT * from mv;
>  k | a  | b
> ---++-
>  1 | 10 | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- 5 seconds later
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+-
>  1 | null | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- that view entry's liveness-info is (probably) dead, but the entry is kept 
> alive by b=100
> DELETE b FROM t WHERE k=1;
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+--
>  1 | null | null
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> DELETE FROM t WHERE k=1;
> cqlsh:test> SELECT * from t; SELECT * from mv;
>  k | a | b
> ---+---+---
> (0 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- deleting the base-entry doesn't help, because the view-key can not be 
> constructed anymore (a=10 already expired)
> {noformat}
> The problem here is that although the view-entry's liveness-info (probably) 
> expired correctly, a regular column (`b`) keeps the view-entry live. It should 
> have disappeared, since its indexed column (`a`) expired in the corresponding 
> base-row. This is pretty severe, since that view-entry is now orphanized.






[jira] [Updated] (CASSANDRA-13689) Update development docs with correct GH URL for new cassandra-dtest location

2017-07-12 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-13689:

Labels: docs-impacting lhf  (was: )

> Update development docs with correct GH URL for new cassandra-dtest location
> 
>
> Key: CASSANDRA-13689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13689
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Nate McCall
>  Labels: docs-impacting, lhf
>
> Specifically the URL on this page:
> http://cassandra.apache.org/doc/latest/development/testing.html#dtests






[jira] [Created] (CASSANDRA-13689) Update development docs with correct GH URL for new cassandra-dtest location

2017-07-12 Thread Nate McCall (JIRA)
Nate McCall created CASSANDRA-13689:
---

 Summary: Update development docs with correct GH URL for new 
cassandra-dtest location
 Key: CASSANDRA-13689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13689
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Nate McCall


Specifically the URL on this page:

http://cassandra.apache.org/doc/latest/development/testing.html#dtests






[jira] [Resolved] (CASSANDRA-13634) Create repository and do initial import for cassandra-dtest

2017-07-12 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall resolved CASSANDRA-13634.
-
Resolution: Fixed

Repository contents and history imported to ASF. 

> Create repository and do initial import for cassandra-dtest
> ---
>
> Key: CASSANDRA-13634
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13634
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Nate McCall
>Assignee: Nate McCall
>
> Given the completion of CASSANDRA-13584, it's time to create the repository 
> and do the initial import of code. 
> Note that this repo will be configured similarly to our main one, in that pull 
> requests will go to the {{pr@c.a.o}} address. 






[25/50] cassandra-dtest git commit: New test for CASSANDRA-11720; Changing `max_hint_window_in_ms` at runtime

2017-07-12 Thread zznate
New test for CASSANDRA-11720; Changing `max_hint_window_in_ms` at runtime


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/6540ba4b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/6540ba4b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/6540ba4b

Branch: refs/heads/master
Commit: 6540ba4be1623e330376895e263030f4811e2048
Parents: dc8cb3f
Author: mck 
Authored: Wed May 3 12:02:08 2017 +1000
Committer: Philip Thompson 
Committed: Wed May 10 19:58:51 2017 -0400

--
 hintedhandoff_test.py | 25 ++---
 1 file changed, 22 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/6540ba4b/hintedhandoff_test.py
--
diff --git a/hintedhandoff_test.py b/hintedhandoff_test.py
index 7fb8e20..1ed3305 100644
--- a/hintedhandoff_test.py
+++ b/hintedhandoff_test.py
@@ -42,13 +42,13 @@ class TestHintedHandoffConfig(Tester):
 self.assertEqual('', err)
 return out
 
-def _do_hinted_handoff(self, node1, node2, enabled):
+def _do_hinted_handoff(self, node1, node2, enabled, keyspace='ks'):
 """
 Test that if we stop one node the other one
 will store hints only when hinted handoff is enabled
 """
 session = self.patient_exclusive_cql_connection(node1)
-create_ks(session, 'ks', 2)
+create_ks(session, keyspace, 2)
 create_c1c2_table(self, session)
 
 node2.stop(wait_other_notice=True)
@@ -64,7 +64,7 @@ class TestHintedHandoffConfig(Tester):
 node1.stop(wait_other_notice=True)
 
  # Check node2 for all the keys that should have been delivered via HH if enabled or not if not enabled
-session = self.patient_exclusive_cql_connection(node2, keyspace='ks')
+session = self.patient_exclusive_cql_connection(node2, keyspace=keyspace)
 for n in xrange(0, 100):
 if enabled:
 query_c1c2(session, n, ConsistencyLevel.ONE)
@@ -121,6 +121,25 @@ class TestHintedHandoffConfig(Tester):
 
 self._do_hinted_handoff(node1, node2, True)
 
+def hintedhandoff_setmaxwindow_test(self):
+"""
+Test global hinted handoff against max_hint_window_in_ms update via nodetool
+"""
+node1, node2 = self._start_two_node_cluster({'hinted_handoff_enabled': True, "max_hint_window_in_ms": 30})
+
+for node in node1, node2:
+res = self._launch_nodetool_cmd(node, 'statushandoff')
+self.assertEqual('Hinted handoff is running', res.rstrip())
+
+res = self._launch_nodetool_cmd(node, 'getmaxhintwindow')
+self.assertEqual('Current max hint window: 30 ms', res.rstrip())
+self._do_hinted_handoff(node1, node2, True)
+node1.start(wait_other_notice=True)
+self._launch_nodetool_cmd(node, 'setmaxhintwindow 1')
+res = self._launch_nodetool_cmd(node, 'getmaxhintwindow')
+self.assertEqual('Current max hint window: 1 ms', res.rstrip())
+self._do_hinted_handoff(node1, node2, False, keyspace='ks2')
+
 def hintedhandoff_dc_disabled_test(self):
 """
 Test global hinted handoff enabled with the dc disabled





[30/50] cassandra-dtest git commit: Add a sleep after compaction to give it time before checking SSTable directory for files (CASSANDRA-13182)

2017-07-12 Thread zznate
Add a sleep after compaction to give it time before checking SSTable directory 
for files (CASSANDRA-13182)


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/538d658e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/538d658e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/538d658e

Branch: refs/heads/master
Commit: 538d658e0fb6b067ffeedd250c5997e2e77ad735
Parents: f148942
Author: Lerh Chuan Low 
Authored: Tue May 16 16:09:32 2017 +1000
Committer: Philip Thompson 
Committed: Tue May 16 09:55:20 2017 -0400

--
 sstableutil_test.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/538d658e/sstableutil_test.py
--
diff --git a/sstableutil_test.py b/sstableutil_test.py
index a8f4487..0886a26 100644
--- a/sstableutil_test.py
+++ b/sstableutil_test.py
@@ -1,6 +1,7 @@
 import glob
 import os
 import subprocess
+import time
 
 from ccmlib import common
 from ccmlib.node import ToolError
@@ -40,6 +41,7 @@ class SSTableUtilTest(Tester):
 self.assertEqual(0, len(tmpfiles))
 
 node.compact()
+time.sleep(5)
 finalfiles, tmpfiles = self._check_files(node, KeyspaceName, TableName)
 self.assertEqual(0, len(tmpfiles))
 





[41/50] cassandra-dtest git commit: Removed cluster reuse from codebase

2017-07-12 Thread zznate
Removed cluster reuse from codebase


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/1cc49419
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/1cc49419
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/1cc49419

Branch: refs/heads/master
Commit: 1cc4941916a3df199821f974e47acd667f65c2b8
Parents: 93aa314
Author: MichaelHamm 
Authored: Mon Jun 19 11:06:13 2017 -0700
Committer: Philip Thompson 
Committed: Tue Jun 20 12:09:35 2017 +0200

--
 INSTALL.md  |  4 
 README.md   |  3 +--
 cqlsh_tests/cqlsh_copy_tests.py | 24 +---
 dtest.py| 25 -
 upgrade_tests/cql_tests.py  | 14 +-
 5 files changed, 3 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/1cc49419/INSTALL.md
--
diff --git a/INSTALL.md b/INSTALL.md
index 0e9e9e1..69985c3 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -129,10 +129,6 @@ will often need to modify them in some fashion at some 
later point:
  cd ~/git/cstar/cassandra-dtest
  PRINT_DEBUG=true nosetests -x -s -v putget_test.py
 
-* To reuse cassandra clusters when possible, set the environment variable 
REUSE_CLUSTER
-
-REUSE_CLUSTER=true nosetests -s -v cql_tests.py
-
 * Some tests will not run with vnodes enabled (you'll see a "SKIP: Test 
disabled for vnodes" message in that case). Use the provided runner script 
instead:
 
 ./run_dtests.py --vnodes false --nose-options "-x -s -v" 
topology_test.py:TestTopology.movement_test

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/1cc49419/README.md
--
diff --git a/README.md b/README.md
index 79a65e0..ba32c3c 100644
--- a/README.md
+++ b/README.md
@@ -43,8 +43,7 @@ environment variable (that still will have precedence if 
given though).
 Existing tests are probably the best place to start to look at how to write
 tests.
 
-Each test spawns a new fresh cluster and tears it down after the test, unless
-`REUSE_CLUSTER` is set to true. Then some tests will share cassandra 
instances. If a
+Each test spawns a new fresh cluster and tears it down after the test. If a
 test fails, the logs for the node are saved in a `logs/` directory
 for analysis (it's not perfect but has been good enough so far, I'm open to
 better suggestions).

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/1cc49419/cqlsh_tests/cqlsh_copy_tests.py
--
diff --git a/cqlsh_tests/cqlsh_copy_tests.py b/cqlsh_tests/cqlsh_copy_tests.py
index 43d33db..8501497 100644
--- a/cqlsh_tests/cqlsh_copy_tests.py
+++ b/cqlsh_tests/cqlsh_copy_tests.py
@@ -25,8 +25,7 @@ from ccmlib.common import is_win
 from cqlsh_tools import (DummyColorMap, assert_csvs_items_equal, csv_rows,
  monkeypatch_driver, random_list, unmonkeypatch_driver,
  write_rows_to_csv)
-from dtest import (DISABLE_VNODES, Tester, canReuseCluster, debug,
-   freshCluster, warning, create_ks)
+from dtest import (DISABLE_VNODES, Tester, debug, warning, create_ks)
 from tools.data import rows_to_list
 from tools.decorators import since
 from tools.metadata_wrapper import (UpdatingClusterMetadataWrapper,
@@ -55,7 +54,6 @@ class UTC(datetime.tzinfo):
 return datetime.timedelta(0)
 
 
-@canReuseCluster
 class CqlshCopyTest(Tester):
 """
 Tests the COPY TO and COPY FROM features in cqlsh.
@@ -2359,23 +2357,18 @@ class CqlshCopyTest(Tester):
 new_results = list(self.session.execute("SELECT * FROM testcopyto"))
 self.assertEqual(results, new_results)
 
-@freshCluster()
 def test_round_trip_murmur3(self):
 self._test_round_trip(nodes=3, partitioner="murmur3")
 
-@freshCluster()
 def test_round_trip_random(self):
 self._test_round_trip(nodes=3, partitioner="random")
 
-@freshCluster()
 def test_round_trip_order_preserving(self):
 self._test_round_trip(nodes=3, partitioner="order")
 
-@freshCluster()
 def test_round_trip_byte_ordered(self):
 self._test_round_trip(nodes=3, partitioner="byte")
 
-@freshCluster()
 def test_source_copy_round_trip(self):
 """
 Like test_round_trip, but uses the SOURCE command to execute the
@@ -2523,7 +2516,6 @@ class CqlshCopyTest(Tester):
 
 return ret
 
-@freshCluster()
 def test_bulk_round_trip_default(self):
 """
 Test bulk import with default stress import (one row per operation)
@@ 

[39/50] cassandra-dtest git commit: Restrict size estimates multi-dc test to run on 3.0.11+

2017-07-12 Thread zznate
Restrict size estimates multi-dc test to run on 3.0.11+


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/3cf276e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/3cf276e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/3cf276e9

Branch: refs/heads/master
Commit: 3cf276e966f253a49df91293a1a0b46620192c59
Parents: f69ced0
Author: Joel Knighton 
Authored: Mon Jun 19 16:40:14 2017 -0500
Committer: Joel Knighton 
Committed: Mon Jun 19 16:40:14 2017 -0500

--
 topology_test.py | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/3cf276e9/topology_test.py
--
diff --git a/topology_test.py b/topology_test.py
index 7604ebe..15827f3 100644
--- a/topology_test.py
+++ b/topology_test.py
@@ -31,6 +31,7 @@ class TestTopology(Tester):
 
 node1.stop(gently=False)
 
+@since('3.0.11')
 def size_estimates_multidc_test(self):
 """
 Test that primary ranges are correctly generated on





[12/50] cassandra-dtest git commit: Make simple_bootstrap_test_small_keepalive_period faster/predictable with byteman

2017-07-12 Thread zznate
Make simple_bootstrap_test_small_keepalive_period faster/predictable with 
byteman


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/737aab2f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/737aab2f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/737aab2f

Branch: refs/heads/master
Commit: 737aab2fac109a2f8d3c840bc01f8bf22ab5fbe1
Parents: 795d91c
Author: Paulo Motta 
Authored: Thu Mar 30 19:43:58 2017 -0300
Committer: Paulo Motta 
Committed: Fri Mar 31 13:50:31 2017 -0300

--
 bootstrap_test.py   | 23 ++-
 byteman/stream_5s_sleep.btm | 13 +
 2 files changed, 27 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/737aab2f/bootstrap_test.py
--
diff --git a/bootstrap_test.py b/bootstrap_test.py
index 048ab18..1d149e6 100644
--- a/bootstrap_test.py
+++ b/bootstrap_test.py
@@ -148,23 +148,28 @@ class TestBootstrap(BaseBootstrapTest):
 2*streaming_keep_alive_period_in_secs to receive a single sstable
 """
 cluster = self.cluster
-
cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec':
 1,
-  
'streaming_socket_timeout_in_ms': 1000,
-  
'streaming_keep_alive_period_in_secs': 1})
+
cluster.set_configuration_options(values={'streaming_socket_timeout_in_ms': 
1000,
+  
'streaming_keep_alive_period_in_secs': 2})
 
 # Create a single node cluster
 cluster.populate(1)
 node1 = cluster.nodelist()[0]
+
+debug("Setting up byteman on {}".format(node1.name))
+# set up byteman
+node1.byteman_port = '8100'
+node1.import_config_files()
+
 cluster.start(wait_other_notice=True)
 
 # Create more than one sstable larger than 1MB
-node1.stress(['write', 'n=50K', '-rate', 'threads=8', '-schema',
-  'compaction(strategy=SizeTieredCompactionStrategy, 
enabled=false)'])
-cluster.flush()
-node1.stress(['write', 'n=50K', '-rate', 'threads=8', '-schema',
+node1.stress(['write', 'n=1K', '-rate', 'threads=8', '-schema',
   'compaction(strategy=SizeTieredCompactionStrategy, 
enabled=false)'])
 cluster.flush()
-self.assertGreater(node1.get_sstables("keyspace1", "standard1"), 1)
+
+debug("Submitting byteman script to {} to".format(node1.name))
+# Sleep longer than streaming_socket_timeout_in_ms to make sure the node will not be killed
+node1.byteman_submit(['./byteman/stream_5s_sleep.btm'])
 
 # Bootstraping a new node with very small 
streaming_socket_timeout_in_ms
 node2 = new_node(cluster)
@@ -174,7 +179,7 @@ class TestBootstrap(BaseBootstrapTest):
 assert_bootstrap_state(self, node2, 'COMPLETED')
 
 for node in cluster.nodelist():
-self.assertTrue(node.grep_log('Scheduling keep-alive task with 1s 
period.', filename='debug.log'))
+self.assertTrue(node.grep_log('Scheduling keep-alive task with 2s 
period.', filename='debug.log'))
 self.assertTrue(node.grep_log('Sending keep-alive', 
filename='debug.log'))
 self.assertTrue(node.grep_log('Received keep-alive', 
filename='debug.log'))
 

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/737aab2f/byteman/stream_5s_sleep.btm
--
diff --git a/byteman/stream_5s_sleep.btm b/byteman/stream_5s_sleep.btm
new file mode 100644
index 000..0a4919a
--- /dev/null
+++ b/byteman/stream_5s_sleep.btm
@@ -0,0 +1,13 @@
+#
+# Sleep 5s during streaming session
+#
+RULE sleep 5s on stream session
+CLASS org.apache.cassandra.streaming.StreamSession
+METHOD messageReceived
+AT ENTRY
+# set flag to only run this rule once.
+IF NOT flagged("done")
+DO
+   flag("done");
+   Thread.sleep(5000)
+ENDRULE





[44/50] cassandra-dtest git commit: CASSANDRA-10130 (#1486)

2017-07-12 Thread zznate
CASSANDRA-10130 (#1486)

* Add test case for CASSANDRA-10130

* Address comments by @sbtourist

* Add more tests for index status management

* Add missed `@staticmethod` annotation

* Add @since annotations for 4.0

* Update failing index build failures

* Fix code style removing trailing whitespaces and blank lines with whitespaces


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/50e1e7b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/50e1e7b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/50e1e7b1

Branch: refs/heads/master
Commit: 50e1e7b13a1eef3e9347aee7806dc40569ab17ad
Parents: 6847bc1
Author: Andrés de la Peña 
Authored: Mon Jun 26 13:18:55 2017 +0100
Committer: Philip Thompson 
Committed: Mon Jun 26 14:18:55 2017 +0200

--
 byteman/index_build_failure.btm|  13 +++
 secondary_indexes_test.py  | 174 +---
 sstable_generation_loading_test.py | 122 +-
 3 files changed, 271 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/50e1e7b1/byteman/index_build_failure.btm
--
diff --git a/byteman/index_build_failure.btm b/byteman/index_build_failure.btm
new file mode 100644
index 000..8f5183d
--- /dev/null
+++ b/byteman/index_build_failure.btm
@@ -0,0 +1,13 @@
+#
+# Fail the index build with a runtime exception
+#
+RULE fail during index building
+CLASS org.apache.cassandra.db.compaction.CompactionManager
+METHOD submitIndexBuild
+AT ENTRY
+# set flag to only run this rule once.
+IF NOT flagged("done")
+DO
+   flag("done");
+   throw new java.lang.RuntimeException("Index building failure")
+ENDRULE

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/50e1e7b1/secondary_indexes_test.py
--
diff --git a/secondary_indexes_test.py b/secondary_indexes_test.py
index b73e94d..1edd30e 100644
--- a/secondary_indexes_test.py
+++ b/secondary_indexes_test.py
@@ -13,7 +13,7 @@ from cassandra.query import BatchStatement, SimpleStatement
 
 from dtest import (DISABLE_VNODES, OFFHEAP_MEMTABLES, DtestTimeoutError,
Tester, debug, CASSANDRA_VERSION_FROM_BUILD, create_ks, 
create_cf)
-from tools.assertions import assert_bootstrap_state, assert_invalid, 
assert_one, assert_row_count
+from tools.assertions import assert_bootstrap_state, assert_invalid, 
assert_none, assert_one, assert_row_count
 from tools.data import index_is_built, rows_to_list
 from tools.decorators import since
 from tools.misc import new_node
@@ -21,6 +21,16 @@ from tools.misc import new_node
 
 class TestSecondaryIndexes(Tester):
 
+@staticmethod
+def _index_sstables_files(node, keyspace, table, index):
+files = []
+for data_dir in node.data_directories():
+data_dir = os.path.join(data_dir, keyspace)
+base_tbl_dir = os.path.join(data_dir, [s for s in 
os.listdir(data_dir) if s.startswith(table)][0])
+index_sstables_dir = os.path.join(base_tbl_dir, '.' + index)
+files.extend(os.listdir(index_sstables_dir))
+return set(files)
+
 def data_created_before_index_not_returned_in_where_query_test(self):
 """
 @jira_ticket CASSANDRA-3367
@@ -307,14 +317,7 @@ class TestSecondaryIndexes(Tester):
 
 stmt = session.prepare('select * from standard1 where "C0" = ?')
 self.assertEqual(1, len(list(session.execute(stmt, [lookup_value]
-before_files = []
-index_sstables_dirs = []
-for data_dir in node1.data_directories():
-data_dir = os.path.join(data_dir, 'keyspace1')
-base_tbl_dir = os.path.join(data_dir, [s for s in 
os.listdir(data_dir) if s.startswith("standard1")][0])
-index_sstables_dir = os.path.join(base_tbl_dir, '.ix_c0')
-before_files.extend(os.listdir(index_sstables_dir))
-index_sstables_dirs.append(index_sstables_dir)
+before_files = self._index_sstables_files(node1, 'keyspace1', 
'standard1', 'ix_c0')
 
 node1.nodetool("rebuild_index keyspace1 standard1 ix_c0")
 start = time.time()
@@ -326,15 +329,160 @@ class TestSecondaryIndexes(Tester):
 else:
 raise DtestTimeoutError()
 
-after_files = []
-for index_sstables_dir in index_sstables_dirs:
-after_files.extend(os.listdir(index_sstables_dir))
-self.assertNotEqual(set(before_files), set(after_files))
+after_files = self._index_sstables_files(node1, 'keyspace1', 
'standard1', 'ix_c0')
+self.assertNotEqual(before_files, after_files)
 

[18/50] cassandra-dtest git commit: Fix timeouts at TestAuthUpgrade.upgrade_to_22_test and TestAuthUpgrade.upgrade_to_30_test

2017-07-12 Thread zznate
Fix timeouts at TestAuthUpgrade.upgrade_to_22_test and 
TestAuthUpgrade.upgrade_to_30_test


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/704c7b06
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/704c7b06
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/704c7b06

Branch: refs/heads/master
Commit: 704c7b062d263c3b646bbb7d7cbd967279d8a31c
Parents: 864f956
Author: adelapena 
Authored: Mon Apr 3 13:20:23 2017 +0100
Committer: Philip Thompson 
Committed: Thu Apr 6 11:54:53 2017 -0400

--
 tools/assertions.py   |  5 +++--
 upgrade_internal_auth_test.py | 12 +++-
 2 files changed, 10 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/704c7b06/tools/assertions.py
--
diff --git a/tools/assertions.py b/tools/assertions.py
index b7e5d09..2e88067 100644
--- a/tools/assertions.py
+++ b/tools/assertions.py
@@ -147,7 +147,7 @@ def assert_none(session, query, cl=None):
 assert list_res == [], "Expected nothing from {}, but got 
{}".format(query, list_res)
 
 
-def assert_all(session, query, expected, cl=None, ignore_order=False):
+def assert_all(session, query, expected, cl=None, ignore_order=False, timeout=None):
 """
 Assert query returns all expected items optionally in the correct order
 @param session Session in use
@@ -155,13 +155,14 @@ def assert_all(session, query, expected, cl=None, 
ignore_order=False):
 @param expected Expected results from query
 @param cl Optional Consistency Level setting. Default ONE
 @param ignore_order Optional boolean flag determining whether response is 
ordered
+@param timeout Optional query timeout, in seconds
 
 Examples:
 assert_all(session, "LIST USERS", [['aleksey', False], ['cassandra', 
True]])
 assert_all(self.session1, "SELECT * FROM ttl_table;", [[1, 42, 1, 1]])
 """
 simple_query = SimpleStatement(query, consistency_level=cl)
-res = session.execute(simple_query)
+res = session.execute(simple_query) if timeout is None else session.execute(simple_query, timeout=timeout)
 list_res = _rows_to_list(res)
 if ignore_order:
 expected = sorted(expected)

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/704c7b06/upgrade_internal_auth_test.py
--
diff --git a/upgrade_internal_auth_test.py b/upgrade_internal_auth_test.py
index ed34452..2c5681d 100644
--- a/upgrade_internal_auth_test.py
+++ b/upgrade_internal_auth_test.py
@@ -145,9 +145,9 @@ class TestAuthUpgrade(Tester):
 
 # we should now be able to drop the old auth tables
 session = self.patient_cql_connection(node1, user='cassandra', 
password='cassandra')
-session.execute('DROP TABLE system_auth.users')
-session.execute('DROP TABLE system_auth.credentials')
-session.execute('DROP TABLE system_auth.permissions')
+session.execute('DROP TABLE system_auth.users', timeout=60)
+session.execute('DROP TABLE system_auth.credentials', timeout=60)
+session.execute('DROP TABLE system_auth.permissions', timeout=60)
 # and we should still be able to authenticate and check authorization
 self.check_permissions(node1, True)
 debug('Test completed successfully')
@@ -163,12 +163,14 @@ class TestAuthUpgrade(Tester):
 assert_all(klaus,
'LIST ALL PERMISSIONS',
[['michael', '', 'MODIFY'],
-['michael', '', 'SELECT']])
+['michael', '', 'SELECT']],
+   timeout=60)
 else:
 assert_all(klaus,
'LIST ALL PERMISSIONS',
[['michael', 'michael', '', 'MODIFY'],
-['michael', 'michael', '', 'SELECT']])
+['michael', 'michael', '', 'SELECT']],
+   timeout=60)
 
 klaus.cluster.shutdown()
 





[21/50] cassandra-dtest git commit: Create test for restoring a snapshot with dropped columns (CASSANDRA-13276)

2017-07-12 Thread zznate
Create test for restoring a snapshot with dropped columns (CASSANDRA-13276)




Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/e6b47064
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/e6b47064
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/e6b47064

Branch: refs/heads/master
Commit: e6b47064237ce4d9dc10313995fba34cb9cdefb7
Parents: 0692e2b
Author: Andrés de la Peña 
Authored: Tue Apr 25 20:38:53 2017 +0100
Committer: GitHub 
Committed: Tue Apr 25 20:38:53 2017 +0100

--
 snapshot_test.py | 43 +++
 1 file changed, 43 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/e6b47064/snapshot_test.py
--
diff --git a/snapshot_test.py b/snapshot_test.py
index ba184ee..7169a7c 100644
--- a/snapshot_test.py
+++ b/snapshot_test.py
@@ -9,6 +9,7 @@ from cassandra.concurrent import execute_concurrent_with_args
 
 from dtest import (Tester, cleanup_cluster, create_ccm_cluster, create_ks,
debug, get_test_path)
+from tools.assertions import assert_one
 from tools.files import replace_in_file, safe_mkdtemp
 from tools.hacks import advance_to_next_cl_segment
 from tools.misc import ImmutableMapping
@@ -70,6 +71,13 @@ class SnapshotTester(Tester):
 raise Exception("sstableloader command '%s' failed; exit 
status: %d'; stdout: %s; stderr: %s" %
 (" ".join(args), exit_status, stdout, 
stderr))
 
+def restore_snapshot_schema(self, snapshot_dir, node, ks, cf):
+debug("Restoring snapshot schema")
+for x in xrange(0, self.cluster.data_dir_count):
+schema_path = os.path.join(snapshot_dir, str(x), ks, cf, 
'schema.cql')
+if os.path.exists(schema_path):
+node.run_cqlsh(cmds="SOURCE '%s'" % schema_path)
+
 
 class TestSnapshot(SnapshotTester):
 
@@ -106,6 +114,41 @@ class TestSnapshot(SnapshotTester):
 
 self.assertEqual(rows[0][0], 100)
 
+def test_snapshot_and_restore_dropping_a_column(self):
+"""
+@jira_ticket CASSANDRA-13276
+
+Can't load snapshots of tables with dropped columns.
+"""
+cluster = self.cluster
+cluster.populate(1).start()
+node1, = cluster.nodelist()
+session = self.patient_cql_connection(node1)
+
+# Create schema and insert some data
+create_ks(session, 'ks', 1)
+session.execute("CREATE TABLE ks.cf (k int PRIMARY KEY, a text, b 
text)")
+session.execute("INSERT INTO ks.cf (k, a, b) VALUES (1, 'a', 'b')")
+assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
+
+# Drop a column
+session.execute("ALTER TABLE ks.cf DROP b")
+assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
+
+# Take a snapshot and drop the table
+snapshot_dir = self.make_snapshot(node1, 'ks', 'cf', 'basic')
+session.execute("DROP TABLE ks.cf")
+
+# Restore schema and data from snapshot
+self.restore_snapshot_schema(snapshot_dir, node1, 'ks', 'cf')
+self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf')
+node1.nodetool('refresh ks cf')
+assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
+
+# Clean up
+debug("removing snapshot_dir: " + snapshot_dir)
+shutil.rmtree(snapshot_dir)
+
 
 class TestArchiveCommitlog(SnapshotTester):
 cluster_options = ImmutableMapping({'commitlog_segment_size_in_mb': 1})





[38/50] cassandra-dtest git commit: CASSANDRA-9143 change needs dtest change

2017-07-12 Thread zznate
CASSANDRA-9143 change needs dtest change

CASSANDRA-9143 introduced a strict check that disallows incremental subrange
repair, so the dtest needs to pass `-full` to the repair command to invoke
subrange repair.


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/f69ced02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/f69ced02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/f69ced02

Branch: refs/heads/master
Commit: f69ced02b99ecc641b4b8eb149d12afe97e6f100
Parents: dfeb5df
Author: Yuki Morishita 
Authored: Tue Mar 28 14:40:57 2017 +0900
Committer: Joel Knighton 
Committed: Mon Jun 19 15:34:24 2017 -0500

--
 repair_tests/repair_test.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/f69ced02/repair_tests/repair_test.py
--
diff --git a/repair_tests/repair_test.py b/repair_tests/repair_test.py
index 5276556..ad46d18 100644
--- a/repair_tests/repair_test.py
+++ b/repair_tests/repair_test.py
@@ -1047,9 +1047,9 @@ class TestRepair(BaseRepairTest):
 node2.stop(wait_other_notice=True)
 node1.stress(['write', 'n=1M', 'no-warmup', '-schema', 
'replication(factor=3)', '-rate', 'threads=30'])
 node2.start(wait_for_binary_proto=True)
-t1 = threading.Thread(target=node1.nodetool, args=('repair keyspace1 
standard1 -st {} -et {}'.format(str(node3.initial_token), 
str(node1.initial_token)),))
-t2 = threading.Thread(target=node2.nodetool, args=('repair keyspace1 
standard1 -st {} -et {}'.format(str(node1.initial_token), 
str(node2.initial_token)),))
-t3 = threading.Thread(target=node3.nodetool, args=('repair keyspace1 
standard1 -st {} -et {}'.format(str(node2.initial_token), 
str(node3.initial_token)),))
+t1 = threading.Thread(target=node1.nodetool, args=('repair keyspace1 
standard1 -full -st {} -et {}'.format(str(node3.initial_token), 
str(node1.initial_token)),))
+t2 = threading.Thread(target=node2.nodetool, args=('repair keyspace1 
standard1 -full -st {} -et {}'.format(str(node1.initial_token), 
str(node2.initial_token)),))
+t3 = threading.Thread(target=node3.nodetool, args=('repair keyspace1 
standard1 -full -st {} -et {}'.format(str(node2.initial_token), 
str(node3.initial_token)),))
 t1.start()
 t2.start()
 t3.start()





[33/50] cassandra-dtest git commit: Bump CCM version to 2.6.3

2017-07-12 Thread zznate
Bump CCM version to 2.6.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/6f7caba9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/6f7caba9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/6f7caba9

Branch: refs/heads/master
Commit: 6f7caba9c59daa949e67efc28f75e7de4c5b9fa7
Parents: 7f3566a
Author: Joel Knighton 
Authored: Thu Jun 1 14:00:50 2017 -0500
Committer: Philip Thompson 
Committed: Fri Jun 2 11:49:37 2017 +0200

--
 requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/6f7caba9/requirements.txt
--
diff --git a/requirements.txt b/requirements.txt
index 40fb0e1..9be7094 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,7 +4,7 @@
 futures
 six
 -e 
git+https://github.com/datastax/python-driver.git@cassandra-test#egg=cassandra-driver
-ccm==2.6.0
+ccm==2.6.3
 cql
 decorator
 docopt





[19/50] cassandra-dtest git commit: added tests for mutual auth (require_client_auth) on internode connections

2017-07-12 Thread zznate
added tests for mutual auth (require_client_auth) on internode connections


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/8513c478
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/8513c478
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/8513c478

Branch: refs/heads/master
Commit: 8513c4784fb9b7bcf54118f0f5b173c93b62978c
Parents: 704c7b0
Author: Jason Brown 
Authored: Thu Apr 6 06:25:34 2017 -0700
Committer: Philip Thompson 
Committed: Thu Apr 6 15:13:11 2017 -0400

--
 sslnodetonode_test.py | 40 ++--
 1 file changed, 38 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/8513c478/sslnodetonode_test.py
--
diff --git a/sslnodetonode_test.py b/sslnodetonode_test.py
index a2a3e41..c4a9184 100644
--- a/sslnodetonode_test.py
+++ b/sslnodetonode_test.py
@@ -10,6 +10,7 @@ from tools.decorators import since
 _LOG_ERR_SIG = "^javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: Certificate signature validation 
failed$"
 _LOG_ERR_IP = "^javax.net.ssl.SSLHandshakeException: 
java.security.cert.CertificateException: No subject alternative names matching 
IP address [0-9.]+ found$"
 _LOG_ERR_HOST = "^javax.net.ssl.SSLHandshakeException: 
java.security.cert.CertificateException: No name matching \S+ found$"
+_LOG_ERR_CERT = "^javax.net.ssl.SSLHandshakeException: Received fatal alert: 
certificate_unknown$"
 
 
 @since('3.6')
@@ -56,6 +57,40 @@ class TestNodeToNodeSSLEncryption(Tester):
 self.cluster.stop()
 self.assertTrue(found)
 
+def ssl_client_auth_required_fail_test(self):
+"""peers need to perform mutual auth (cient auth required), but do not 
supply the local cert"""
+
+credNode1 = sslkeygen.generate_credentials("127.0.0.1")
+credNode2 = sslkeygen.generate_credentials("127.0.0.2")
+
+self.setup_nodes(credNode1, credNode2, client_auth=True)
+
+self.allow_log_errors = True
+self.cluster.start(no_wait=True)
+time.sleep(2)
+
+found = self._grep_msg(self.node1, _LOG_ERR_CERT)
+self.assertTrue(found)
+
+found = self._grep_msg(self.node2, _LOG_ERR_CERT)
+self.assertTrue(found)
+
+self.cluster.stop()
+self.assertTrue(found)
+
+def ssl_client_auth_required_succeed_test(self):
+"""peers need to perform mutual auth (cient auth required), but do not 
supply the loca cert"""
+
+credNode1 = sslkeygen.generate_credentials("127.0.0.1")
+credNode2 = sslkeygen.generate_credentials("127.0.0.2", 
credNode1.cakeystore, credNode1.cacert)
+sslkeygen.import_cert(credNode1.basedir, 'ca127.0.0.2', 
credNode2.cacert, credNode1.cakeystore)
+sslkeygen.import_cert(credNode2.basedir, 'ca127.0.0.1', 
credNode1.cacert, credNode2.cakeystore)
+
+self.setup_nodes(credNode1, credNode2, client_auth=True)
+
+self.cluster.start()
+self.cql_connection(self.node1)
+
 def ca_mismatch_test(self):
 """CA mismatch should cause nodes to fail to connect"""
 
@@ -88,7 +123,7 @@ class TestNodeToNodeSSLEncryption(Tester):
 
 return False
 
-def setup_nodes(self, credentials1, credentials2, 
endpointVerification=False):
+def setup_nodes(self, credentials1, credentials2, 
endpointVerification=False, client_auth=False):
 
 cluster = self.cluster
 
@@ -107,7 +142,8 @@ class TestNodeToNodeSSLEncryption(Tester):
 'keystore_password': 'cassandra',
 'truststore': tspath,
 'truststore_password': 'cassandra',
-'require_endpoint_verification': endpointVerification
+'require_endpoint_verification': endpointVerification,
+'require_client_auth': client_auth
 }
 })
 





[36/50] cassandra-dtest git commit: Reformatted git install version strings from 'git:cassandra-2.2' to 'github:apache/cassandra-2.2'.

2017-07-12 Thread zznate
Reformatted git install version strings from 'git:cassandra-2.2' to 'github:apache/cassandra-2.2'.


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/c93bd487
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/c93bd487
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/c93bd487

Branch: refs/heads/master
Commit: c93bd48712f32aaff475bc3265968b36c6665229
Parents: ef84f76
Author: MichaelHamm 
Authored: Fri Jun 9 14:21:42 2017 -0700
Committer: Philip Thompson 
Committed: Mon Jun 12 13:02:54 2017 +0200

--
 mixed_version_test.py|  4 ++--
 offline_tools_test.py| 14 +++---
 upgrade_crc_check_chance_test.py |  2 +-
 upgrade_internal_auth_test.py|  6 +++---
 4 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c93bd487/mixed_version_test.py
--
diff --git a/mixed_version_test.py b/mixed_version_test.py
index f60584a..9da28b9 100644
--- a/mixed_version_test.py
+++ b/mixed_version_test.py
@@ -21,9 +21,9 @@ class TestSchemaChanges(Tester):
 node1, node2 = cluster.nodelist()
 original_version = node1.get_cassandra_version()
 if original_version.vstring.startswith('2.0'):
-upgraded_version = 'git:cassandra-2.1'
+upgraded_version = 'github:apache/cassandra-2.1'
 elif original_version.vstring.startswith('2.1'):
-upgraded_version = 'git:cassandra-2.2'
+upgraded_version = 'github:apache/cassandra-2.2'
 else:
 self.skip("This test is only designed to work with 2.0 and 2.1 
right now")
 

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c93bd487/offline_tools_test.py
--
diff --git a/offline_tools_test.py b/offline_tools_test.py
index 3f9c2b7..c0a0010 100644
--- a/offline_tools_test.py
+++ b/offline_tools_test.py
@@ -329,19 +329,19 @@ class TestOfflineTools(Tester):
 # CCM doesn't handle this upgrade correctly and results in an 
error when flushing 2.1:
 #   Error opening zip file or JAR manifest missing : 
/home/mshuler/git/cassandra/lib/jamm-0.2.5.jar
 # The 2.1 installed jamm version is 0.3.0, but bin/cassandra.in.sh 
used by nodetool still has 0.2.5
-# (when this is fixed in CCM issue #463, install 
version='git:cassandra-2.0' as below)
+# (when this is fixed in CCM issue #463, install 
version='github:apache/cassandra-2.0' as below)
 self.skipTest('Skipping 2.1 test due to jamm.jar version upgrade 
problem in CCM node configuration.')
 elif testversion < '3.0':
-debug('Test version: {} - installing 
git:cassandra-2.1'.format(testversion))
-cluster.set_install_dir(version='git:cassandra-2.1')
+debug('Test version: {} - installing 
github:apache/cassandra-2.1'.format(testversion))
+cluster.set_install_dir(version='github:apache/cassandra-2.1')
 # As of 3.5, sstable format 'ma' from 3.0 is still the latest - 
install 2.2 to upgrade from
 elif testversion < '4.0':
-debug('Test version: {} - installing 
git:cassandra-2.2'.format(testversion))
-cluster.set_install_dir(version='git:cassandra-2.2')
+debug('Test version: {} - installing 
github:apache/cassandra-2.2'.format(testversion))
+cluster.set_install_dir(version='github:apache/cassandra-2.2')
 # From 4.0, one can only upgrade from 3.0
 else:
-debug('Test version: {} - installing 
git:cassandra-3.0'.format(testversion))
-cluster.set_install_dir(version='git:cassandra-3.0')
+debug('Test version: {} - installing 
github:apache/cassandra-3.0'.format(testversion))
+cluster.set_install_dir(version='github:apache/cassandra-3.0')
 
 # Start up last major version, write out an sstable to upgrade, and 
stop node
 cluster.populate(1).start(wait_for_binary_proto=True)

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c93bd487/upgrade_crc_check_chance_test.py
--
diff --git a/upgrade_crc_check_chance_test.py b/upgrade_crc_check_chance_test.py
index 0367104..ec758c2 100644
--- a/upgrade_crc_check_chance_test.py
+++ b/upgrade_crc_check_chance_test.py
@@ -25,7 +25,7 @@ class TestCrcCheckChanceUpgrade(Tester):
 cluster = self.cluster
 
 # Forcing cluster version on purpose
-cluster.set_install_dir(version="git:cassandra-2.2")
+

[47/50] cassandra-dtest git commit: Add tests for 'nodetool getbatchlogreplaythrottle' and 'nodetool setbatchlogreplaythrottle' (#1491)

2017-07-12 Thread zznate
Add tests for 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (#1491)

* Add test for 'nodetool setbatchlogreplaythrottlekb'

* Check log messages about updates in batchlog replay throttle

* Add test for 'nodetool getbatchlogreplaythrottlekb'

* Adapt tests for the renaming of the nodetool accessors for batchlog replay 
throttle

* Remove unused imports

* Removed extra blank line at the end of the file


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/8cd52d67
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/8cd52d67
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/8cd52d67

Branch: refs/heads/master
Commit: 8cd52d67587ddb5efc80366ff6c6a044c30b41d3
Parents: 557ab7b
Author: Andrés de la Peña 
Authored: Thu Jul 6 12:26:10 2017 +0100
Committer: GitHub 
Committed: Thu Jul 6 12:26:10 2017 +0100

--
 jmx_test.py  | 21 +
 nodetool_test.py | 22 ++
 2 files changed, 43 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/8cd52d67/jmx_test.py
--
diff --git a/jmx_test.py b/jmx_test.py
index 7251b12..7df84ac 100644
--- a/jmx_test.py
+++ b/jmx_test.py
@@ -181,6 +181,27 @@ class TestJMX(Tester):
 self.assertGreater(endpoint2Phi, 0.0)
 self.assertLess(endpoint2Phi, max_phi)
 
+@since('4.0')
+def test_set_get_batchlog_replay_throttle(self):
+"""
+@jira_ticket CASSANDRA-13614
+
+Test that batchlog replay throttle can be set and get through JMX
+"""
+cluster = self.cluster
+cluster.populate(2)
+node = cluster.nodelist()[0]
+remove_perf_disable_shared_mem(node)
+cluster.start()
+
+# Set and get throttle with JMX, ensuring that the rate change is logged
+with JolokiaAgent(node) as jmx:
+mbean = make_mbean('db', 'StorageService')
+jmx.write_attribute(mbean, 'BatchlogReplayThrottleInKB', 4096)
+self.assertTrue(len(node.grep_log('Updating batchlog replay 
throttle to 4096 KB/s, 2048 KB/s per endpoint',
+  filename='debug.log')) > 0)
+self.assertEqual(4096, jmx.read_attribute(mbean, 
'BatchlogReplayThrottleInKB'))
+
 
 @since('3.9')
 class TestJMXSSL(Tester):

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/8cd52d67/nodetool_test.py
--
diff --git a/nodetool_test.py b/nodetool_test.py
index ff4622b..d7ce89a 100644
--- a/nodetool_test.py
+++ b/nodetool_test.py
@@ -136,3 +136,25 @@ class TestNodetool(Tester):
 out, err, _ = node.nodetool('status')
 self.assertEqual(0, len(err), err)
 self.assertRegexpMatches(out, notice_message)
+
+@since('4.0')
+def test_set_get_batchlog_replay_throttle(self):
+"""
+@jira_ticket CASSANDRA-13614
+
+Test that batchlog replay throttle can be set and get through nodetool
+"""
+cluster = self.cluster
+cluster.populate(2)
+node = cluster.nodelist()[0]
+cluster.start()
+
+# Test that nodetool help messages are displayed
+self.assertTrue('Set batchlog replay throttle' in node.nodetool('help 
setbatchlogreplaythrottle').stdout)
+self.assertTrue('Print batchlog replay throttle' in 
node.nodetool('help getbatchlogreplaythrottle').stdout)
+
+# Set and get throttle with nodetool, ensuring that the rate change is logged
+node.nodetool('setbatchlogreplaythrottle 2048')
+self.assertTrue(len(node.grep_log('Updating batchlog replay throttle 
to 2048 KB/s, 1024 KB/s per endpoint',
+  filename='debug.log')) > 0)
+self.assertTrue('Batchlog replay throttle: 2048 KB/s' in 
node.nodetool('getbatchlogreplaythrottle').stdout)
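
For reference, the MBean attribute exercised by the JMX test above can also be set from a plain Java JMX client. The following is a minimal, hypothetical sketch (not part of this dtest change), assuming a node exposing unauthenticated JMX on the default port 7199 and the org.apache.cassandra.db:type=StorageService MBean name that make_mbean('db', 'StorageService') resolves to.

{code:java|title=BatchlogThrottleJmx.java (hypothetical)}
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BatchlogThrottleJmx
{
    public static void main(String[] args) throws Exception
    {
        // Assumed local JMX endpoint; 7199 is Cassandra's default JMX port.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");

            // Set the throttle to 2048 KB/s, then read it back.
            mbs.setAttribute(ss, new Attribute("BatchlogReplayThrottleInKB", 2048));
            System.out.println(mbs.getAttribute(ss, "BatchlogReplayThrottleInKB"));
        }
    }
}
{code}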





[29/50] cassandra-dtest git commit: Add upgrade test for old format indexed sstables (CASSANDRA-13236)

2017-07-12 Thread zznate
Add upgrade test for old format indexed sstables (CASSANDRA-13236)


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/f1489423
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/f1489423
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/f1489423

Branch: refs/heads/master
Commit: f1489423113713d04a1ef1a2bd4e9160abaea4b1
Parents: 5c99d20
Author: Sam Tunnicliffe 
Authored: Thu May 4 18:04:56 2017 -0700
Committer: Philip Thompson 
Committed: Thu May 11 14:31:57 2017 -0400

--
 upgrade_tests/storage_engine_upgrade_test.py | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/f1489423/upgrade_tests/storage_engine_upgrade_test.py
--
diff --git a/upgrade_tests/storage_engine_upgrade_test.py 
b/upgrade_tests/storage_engine_upgrade_test.py
index ac578dc..aa1cc27 100644
--- a/upgrade_tests/storage_engine_upgrade_test.py
+++ b/upgrade_tests/storage_engine_upgrade_test.py
@@ -215,12 +215,18 @@ class TestStorageEngineUpgrade(Tester):
 assert_one(session, "SELECT * FROM t WHERE k = {}".format(n), [n, 
n + 1, n + 2, n + 3, n + 4])
 
 def upgrade_with_statics_test(self):
+self.upgrade_with_statics(rows=10)
+
+def upgrade_with_wide_partition_and_statics_test(self):
+""" Checks we read old indexed sstables with statics by creating 
partitions larger than a single index block"""
+self.upgrade_with_statics(rows=1000)
+
+def upgrade_with_statics(self, rows):
 """
 Validates we can read legacy sstables with static columns.
 """
 PARTITIONS = 1
-ROWS = 10
-
+ROWS = rows
 session = self._setup_cluster()
 
 session.execute('CREATE TABLE t (k int, s1 int static, s2 int static, 
t int, v1 int, v2 int, PRIMARY KEY (k, t))')


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[40/50] cassandra-dtest git commit: Merge pull request #1484 from jkni/since-size-estimates

2017-07-12 Thread zznate
Merge pull request #1484 from jkni/since-size-estimates

Restrict size estimates multi-dc test to run on 3.0.11+

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/93aa3147
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/93aa3147
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/93aa3147

Branch: refs/heads/master
Commit: 93aa3147a5feced7eb0cc4cfb852e8a67f9251e9
Parents: f69ced0 3cf276e
Author: Paulo Ricardo Motta Gomes 
Authored: Mon Jun 19 21:47:28 2017 -0500
Committer: GitHub 
Committed: Mon Jun 19 21:47:28 2017 -0500

--
 topology_test.py | 1 +
 1 file changed, 1 insertion(+)
--






[37/50] cassandra-dtest git commit: Repair preview tests should only run on 4.0+

2017-07-12 Thread zznate
Repair preview tests should only run on 4.0+


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/dfeb5dfb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/dfeb5dfb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/dfeb5dfb

Branch: refs/heads/master
Commit: dfeb5dfb2930b1b9d236d1fa4ac159db53c1f60a
Parents: c93bd48
Author: Joel Knighton 
Authored: Fri Jun 16 14:53:32 2017 -0500
Committer: Philip Thompson 
Committed: Sat Jun 17 17:09:34 2017 +0200

--
 repair_tests/preview_repair_test.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/dfeb5dfb/repair_tests/preview_repair_test.py
--
diff --git a/repair_tests/preview_repair_test.py 
b/repair_tests/preview_repair_test.py
index e888a9b..86627ab 100644
--- a/repair_tests/preview_repair_test.py
+++ b/repair_tests/preview_repair_test.py
@@ -4,9 +4,10 @@ from cassandra import ConsistencyLevel
 from cassandra.query import SimpleStatement
 
 from dtest import Tester
-from tools.decorators import no_vnodes
+from tools.decorators import no_vnodes, since
 
 
+@since('4.0')
 class PreviewRepairTest(Tester):
 
 def assert_no_repair_history(self, session):


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[10/50] cassandra-dtest git commit: Added test case for CASSANDRA-13364

2017-07-12 Thread zznate
Added test case for CASSANDRA-13364


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/ec6b9581
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/ec6b9581
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/ec6b9581

Branch: refs/heads/master
Commit: ec6b9581eb8c79edc0cde2f5ad045e6951acfa43
Parents: 795d91c
Author: Stefania Alborghetti 
Authored: Wed Mar 29 11:20:53 2017 +0800
Committer: Stefania Alborghetti 
Committed: Wed Mar 29 11:20:53 2017 +0800

--
 cqlsh_tests/cqlsh_copy_tests.py | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/ec6b9581/cqlsh_tests/cqlsh_copy_tests.py
--
diff --git a/cqlsh_tests/cqlsh_copy_tests.py b/cqlsh_tests/cqlsh_copy_tests.py
index d4a49fa..43d33db 100644
--- a/cqlsh_tests/cqlsh_copy_tests.py
+++ b/cqlsh_tests/cqlsh_copy_tests.py
@@ -243,6 +243,7 @@ class CqlshCopyTest(Tester):
 u frozen>,
 v frozen,set>>,
 w frozen>,
+x map>
 )''')
 
 default_time_format = self.default_time_format
@@ -337,7 +338,8 @@ class CqlshCopyTest(Tester):
  # first set is contained in the second set or else they 
will not sort consistently
  # and this will cause comparison problems when comparing 
with csv strings therefore failing
  # some tests
- ImmutableSet([ImmutableSet(['127.0.0.1']), 
ImmutableSet(['127.0.0.1', '127.0.0.2'])])
+ ImmutableSet([ImmutableSet(['127.0.0.1']), 
ImmutableSet(['127.0.0.1', '127.0.0.2'])]),
+ {'key1': ['value1', 'value2']}  # map>
  )
 
 @contextmanager
@@ -1727,8 +1729,8 @@ class CqlshCopyTest(Tester):
 self.all_datatypes_prepare()
 
 insert_statement = self.session.prepare(
-"""INSERT INTO testdatatype (a, b, c, d, e, f, g, h, i, j, k, l, 
m, n, o, p, q, r, s, t, u, v, w)
-VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?)""")
+"""INSERT INTO testdatatype (a, b, c, d, e, f, g, h, i, j, k, l, 
m, n, o, p, q, r, s, t, u, v, w, x)
+VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?, ?)""")
 self.session.execute(insert_statement, self.data)
 
 def _test(prepared_statements):
@@ -1796,8 +1798,8 @@ class CqlshCopyTest(Tester):
 self.all_datatypes_prepare()
 
 insert_statement = self.session.prepare(
-"""INSERT INTO testdatatype (a, b, c, d, e, f, g, h, i, j, k, l, 
m, n, o, p, q, r, s, t, u, v, w)
-VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?)""")
+"""INSERT INTO testdatatype (a, b, c, d, e, f, g, h, i, j, k, l, 
m, n, o, p, q, r, s, t, u, v, w, x)
+VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?, ?)""")
 self.session.execute(insert_statement, self.data)
 
 tempfile = self.get_temp_file()


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[24/50] cassandra-dtest git commit: adding cluster reconfiguration tests for 9143 (#1468)

2017-07-12 Thread zznate
adding cluster reconfiguration tests for 9143 (#1468)

* adding cluster reconfiguration tests for 9143

* fixing blank line

* fixing whitespace issue


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/dc8cb3fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/dc8cb3fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/dc8cb3fb

Branch: refs/heads/master
Commit: dc8cb3fb12cad229131b57eb789e41246a108924
Parents: d5c413c
Author: Blake Eggleston 
Authored: Tue May 9 16:33:08 2017 -0700
Committer: Philip Thompson 
Committed: Tue May 9 19:33:08 2017 -0400

--
 repair_tests/incremental_repair_test.py | 113 ++-
 1 file changed, 112 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/dc8cb3fb/repair_tests/incremental_repair_test.py
--
diff --git a/repair_tests/incremental_repair_test.py 
b/repair_tests/incremental_repair_test.py
index 270e1fa..a447d56 100644
--- a/repair_tests/incremental_repair_test.py
+++ b/repair_tests/incremental_repair_test.py
@@ -14,7 +14,8 @@ from nose.plugins.attrib import attr
 from dtest import Tester, debug, create_ks, create_cf
 from tools.assertions import assert_almost_equal, assert_one
 from tools.data import insert_c1c2
-from tools.decorators import since
+from tools.decorators import since, no_vnodes
+from tools.misc import new_node
 
 
 class ConsistentState(object):
@@ -647,3 +648,113 @@ class TestIncRepair(Tester):
 
 for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
node in cluster.nodelist() if len(node.get_sstables('keyspace1', 'standard1')) 
> 0):
 self.assertNotIn('Repaired at: 0', out)
+
+@no_vnodes()
+@since('4.0')
+def move_test(self):
+""" Test repaired data remains in sync after a move """
+cluster = self.cluster
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False, 'commitlog_sync_period_in_ms': 500})
+cluster.populate(4, tokens=[0, 2**32, 2**48, -(2**32)]).start()
+node1, node2, node3, node4 = cluster.nodelist()
+
+session = self.patient_exclusive_cql_connection(node3)
+session.execute("CREATE KEYSPACE ks WITH 
REPLICATION={'class':'SimpleStrategy', 'replication_factor': 2}")
+session.execute("CREATE TABLE ks.tbl (k INT PRIMARY KEY, v INT)")
+
+# insert some data
+stmt = SimpleStatement("INSERT INTO ks.tbl (k,v) VALUES (%s, %s)")
+for i in range(1000):
+session.execute(stmt, (i, i))
+
+node1.repair(options=['ks'])
+
+for i in range(1000):
+v = i + 1000
+session.execute(stmt, (v, v))
+
+# everything should be in sync
+for node in cluster.nodelist():
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+node2.nodetool('move {}'.format(2**16))
+
+# everything should still be in sync
+for node in cluster.nodelist():
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+@no_vnodes()
+@since('4.0')
+def decommission_test(self):
+""" Test repaired data remains in sync after a decommission """
+cluster = self.cluster
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False, 'commitlog_sync_period_in_ms': 500})
+cluster.populate(4).start()
+node1, node2, node3, node4 = cluster.nodelist()
+
+session = self.patient_exclusive_cql_connection(node3)
+session.execute("CREATE KEYSPACE ks WITH 
REPLICATION={'class':'SimpleStrategy', 'replication_factor': 2}")
+session.execute("CREATE TABLE ks.tbl (k INT PRIMARY KEY, v INT)")
+
+# insert some data
+stmt = SimpleStatement("INSERT INTO ks.tbl (k,v) VALUES (%s, %s)")
+for i in range(1000):
+session.execute(stmt, (i, i))
+
+node1.repair(options=['ks'])
+
+for i in range(1000):
+v = i + 1000
+session.execute(stmt, (v, v))
+
+# everything should be in sync
+for node in cluster.nodelist():
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+node2.nodetool('decommission')
+
+# everything should still be in sync
+for node in [node1, node3, node4]:
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+@no_vnodes()
+

[46/50] cassandra-dtest git commit: Fix do_upgrade in batch_test.py to upgrade to the current version

2017-07-12 Thread zznate
Fix do_upgrade in batch_test.py to upgrade to the current version


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/557ab7b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/557ab7b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/557ab7b6

Branch: refs/heads/master
Commit: 557ab7b6b7c62e341b3ec9c8e7041f7731a1c0bd
Parents: d2d9e6d
Author: Philip Thompson 
Authored: Wed Jul 5 11:40:30 2017 +0200
Committer: Philip Thompson 
Committed: Wed Jul 5 11:51:36 2017 +0200

--
 batch_test.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/557ab7b6/batch_test.py
--
diff --git a/batch_test.py b/batch_test.py
index e67d185..4194f10 100644
--- a/batch_test.py
+++ b/batch_test.py
@@ -5,7 +5,7 @@ from unittest import skipIf
 from cassandra import ConsistencyLevel, Timeout, Unavailable
 from cassandra.query import SimpleStatement
 
-from dtest import CASSANDRA_DIR, Tester, debug, create_ks
+from dtest import Tester, create_ks, debug
 from tools.assertions import (assert_all, assert_invalid, assert_one,
   assert_unavailable)
 from tools.decorators import since
@@ -433,7 +433,7 @@ class TestBatch(Tester):
 node.watch_log_for("DRAINED")
 node.stop(wait_other_notice=False)
 
-node.set_install_dir(install_dir=CASSANDRA_DIR)
+self.set_node_to_current_version(node)
 debug("Set new cassandra dir for {}: {}".format(node.name, 
node.get_install_dir()))
 
 # Restart nodes on new version


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[26/50] cassandra-dtest git commit: Preserve DESCRIBE behaviour with quoted index names for older versions

2017-07-12 Thread zznate
Preserve DESCRIBE behaviour with quoted index names for older versions


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/afda2d45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/afda2d45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/afda2d45

Branch: refs/heads/master
Commit: afda2d45fe578359b2db51233c1f12833d8a196b
Parents: f292548
Author: Sam Tunnicliffe 
Authored: Fri Jan 20 12:40:59 2017 -0800
Committer: Philip Thompson 
Committed: Thu May 11 14:24:05 2017 -0400

--
 cqlsh_tests/cqlsh_tests.py | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/afda2d45/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index 7734848..4feadc1 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -682,7 +682,6 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 self.cluster.populate(1)
 self.cluster.start(wait_for_binary_proto=True)
 node1, = self.cluster.nodelist()
-
 self.execute(
 cql="""
 CREATE KEYSPACE test WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
@@ -980,10 +979,20 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
-""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
+""" + self.get_index_output('QuotedNameIndex', 'test', 'users', 
'firstname') \
+ "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
 
 def get_index_output(self, index, ks, table, col):
+# a quoted index name (e.g. "FooIndex") is only correctly echoed by 
DESCRIBE
+# from 3.0.11 & 3.10
+if index[0] == '"' and index[-1] == '"':
+version = self.cluster.version()
+if version >= LooseVersion('3.10'):
+pass
+elif LooseVersion('3.1') > version >= LooseVersion('3.0.11'):
+pass
+else:
+index = index[1:-1]
 return "CREATE INDEX {} ON {}.{} ({});".format(index, ks, table, col)
 
 def get_users_by_state_mv_output(self):


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[48/50] cassandra-dtest git commit: Adds the ability to use uncompressed chunks in compressed files

2017-07-12 Thread zznate
Adds the ability to use uncompressed chunks in compressed files

Triggered when size of compressed data surpasses a configurable
threshold.

Patch by Branimir Lambov; reviewed by Robert Stupp for CASSANDRA-10520
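
The new threshold surfaces as the 'min_compress_ratio' compression option in the
expected DESCRIBE output further down. As a rough sketch only (the 'session' object
and the ks.t table are assumptions, e.g. a connection obtained via
self.patient_cql_connection(node1) in a dtest), enabling it on a table looks like:

# Sketch, not part of this patch; the option names are taken from the DESCRIBE
# output expected by the test below.
session.execute("""
    CREATE TABLE ks.t (id int PRIMARY KEY, val text)
    WITH compression = {'class': 'LZ4Compressor',
                        'chunk_length_in_kb': '64',
                        'min_compress_ratio': '1.1'}
""")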


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/058b9528
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/058b9528
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/058b9528

Branch: refs/heads/master
Commit: 058b95289bf815495fced0ac55a78bcceceea9fa
Parents: 8cd52d6
Author: Branimir Lambov 
Authored: Tue Jan 17 16:25:07 2017 +0200
Committer: Alex Petrov 
Committed: Thu Jul 6 15:18:19 2017 +0200

--
 cqlsh_tests/cqlsh_tests.py | 44 +++--
 1 file changed, 42 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/058b9528/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index e7bc11c..dee1891 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -847,7 +847,25 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 PRIMARY KEY (id, col)
 """
 
-if self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('4.0'):
+ret += """
+) WITH CLUSTERING ORDER BY (col ASC)
+AND bloom_filter_fp_chance = 0.01
+AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
+AND comment = ''
+AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
+AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
+AND crc_check_chance = 1.0
+AND dclocal_read_repair_chance = 0.1
+AND default_time_to_live = 0
+AND gc_grace_seconds = 864000
+AND max_index_interval = 2048
+AND memtable_flush_period_in_ms = 0
+AND min_index_interval = 128
+AND read_repair_chance = 0.0
+AND speculative_retry = '99PERCENTILE';
+"""
+elif self.cluster.version() >= LooseVersion('3.9'):
 ret += """
 ) WITH CLUSTERING ORDER BY (col ASC)
 AND bloom_filter_fp_chance = 0.01
@@ -913,7 +931,29 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 return ret + "\n" + col_idx_def
 
 def get_users_table_output(self):
-if self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('4.0'):
+return """
+CREATE TABLE test.users (
+userid text PRIMARY KEY,
+age int,
+firstname text,
+lastname text
+) WITH bloom_filter_fp_chance = 0.01
+AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
+AND comment = ''
+AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
+AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
+AND crc_check_chance = 1.0
+AND dclocal_read_repair_chance = 0.1
+AND default_time_to_live = 0
+AND gc_grace_seconds = 864000
+AND max_index_interval = 2048
+AND memtable_flush_period_in_ms = 0
+AND min_index_interval = 128
+AND read_repair_chance = 0.0
+AND speculative_retry = '99PERCENTILE';
+""" + self.get_index_output('myindex', 'test', 'users', 'age')
+elif self.cluster.version() >= LooseVersion('3.9'):
 return """
 CREATE TABLE test.users (
 userid text PRIMARY KEY,


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[31/50] cassandra-dtest git commit: add test to confirm that hostname validation is working

2017-07-12 Thread zznate
add test to confirm that hostname validation is working


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/bea71d8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/bea71d8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/bea71d8f

Branch: refs/heads/master
Commit: bea71d8fd2e02777bd5c03234489ae9e0efe177e
Parents: 538d658
Author: Jason Brown 
Authored: Thu May 25 14:59:37 2017 -0700
Committer: Philip Thompson 
Committed: Tue May 30 14:18:13 2017 +0200

--
 sslnodetonode_test.py | 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/bea71d8f/sslnodetonode_test.py
--
diff --git a/sslnodetonode_test.py b/sslnodetonode_test.py
index c4a9184..a11a3f4 100644
--- a/sslnodetonode_test.py
+++ b/sslnodetonode_test.py
@@ -26,6 +26,18 @@ class TestNodeToNodeSSLEncryption(Tester):
 self.cluster.start()
 self.cql_connection(self.node1)
 
+def ssl_correct_hostname_with_validation_test(self):
+"""Should be able to start with valid ssl options"""
+
+credNode1 = sslkeygen.generate_credentials("127.0.0.1")
+credNode2 = sslkeygen.generate_credentials("127.0.0.2", 
credNode1.cakeystore, credNode1.cacert)
+
+self.setup_nodes(credNode1, credNode2, endpointVerification=True)
+self.allow_log_errors = False
+self.cluster.start()
+time.sleep(2)
+self.cql_connection(self.node1)
+
 def ssl_wrong_hostname_no_validation_test(self):
 """Should be able to start with valid ssl options"""
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[03/50] cassandra-dtest git commit: Add dtest for CASSANDRA-13053

2017-07-12 Thread zznate
Add dtest for CASSANDRA-13053


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/13a8ec4b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/13a8ec4b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/13a8ec4b

Branch: refs/heads/master
Commit: 13a8ec4ba3c9e4edfee50bac2385bd769c62c25f
Parents: eb3b574
Author: Aleksey Yeschenko 
Authored: Tue Mar 7 15:17:55 2017 +
Committer: Aleksey Yeschenko 
Committed: Wed Mar 8 00:25:14 2017 +

--
 auth_test.py | 33 +
 1 file changed, 33 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/13a8ec4b/auth_test.py
--
diff --git a/auth_test.py b/auth_test.py
index c321828..2749209 100644
--- a/auth_test.py
+++ b/auth_test.py
@@ -654,6 +654,39 @@ class TestAuth(Tester):
 rows = list(cathy.execute("TRUNCATE ks.cf"))
 self.assertItemsEqual(rows, [])
 
+@since('2.2')
+def grant_revoke_without_ks_specified_test(self):
+"""
+* Launch a one node cluster
+* Connect as the default superuser
+* Create table ks.cf
+* Create two new users, 'cathy' and 'bob', with no permissions
+* Grant ALL on ks.cf to cathy
+* As cathy, try granting SELECT on cf to bob, without specifying the 
ks; verify it fails
+* As cathy, USE ks, try again, verify it works this time
+"""
+self.prepare()
+
+cassandra = self.get_session(user='cassandra', password='cassandra')
+
+cassandra.execute("CREATE KEYSPACE ks WITH replication = 
{'class':'SimpleStrategy', 'replication_factor':1}")
+cassandra.execute("CREATE TABLE ks.cf (id int primary key, val int)")
+
+cassandra.execute("CREATE USER cathy WITH PASSWORD '12345'")
+cassandra.execute("CREATE USER bob WITH PASSWORD '12345'")
+
+cassandra.execute("GRANT ALL ON ks.cf TO cathy")
+
+cathy = self.get_session(user='cathy', password='12345')
+bob = self.get_session(user='bob', password='12345')
+
+assert_invalid(cathy, "GRANT SELECT ON cf TO bob", "No keyspace has 
been specified. USE a keyspace, or explicitly specify keyspace.tablename")
+assert_unauthorized(bob, "SELECT * FROM ks.cf", "User bob has no 
SELECT permission on  or any of its parents")
+
+cathy.execute("USE ks")
+cathy.execute("GRANT SELECT ON cf TO bob")
+bob.execute("SELECT * FROM ks.cf")
+
 def grant_revoke_auth_test(self):
 """
 * Launch a one node cluster


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[17/50] cassandra-dtest git commit: Merge pull request #1462 from pauloricardomg/fix_anticompaction_of_already_repaired_test

2017-07-12 Thread zznate
Merge pull request #1462 from 
pauloricardomg/fix_anticompaction_of_already_repaired_test

Restrict data dir for no_anticompaction_of_already_repaired_test

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/864f9562
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/864f9562
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/864f9562

Branch: refs/heads/master
Commit: 864f9562e7da70a749eb012c184f1787de5be01e
Parents: b93fd83 2e24587
Author: Paulo Ricardo Motta Gomes 
Authored: Tue Apr 4 18:15:50 2017 -0300
Committer: GitHub 
Committed: Tue Apr 4 18:15:50 2017 -0300

--
 repair_tests/repair_test.py | 2 ++
 1 file changed, 2 insertions(+)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[23/50] cassandra-dtest git commit: adding test for repair preview introduced in CASSANDRA-13257 (#1465)

2017-07-12 Thread zznate
adding test for repair preview introduced in CASSANDRA-13257 (#1465)

* adding test for repair preview

* review fixes


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/d5c413c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/d5c413c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/d5c413c4

Branch: refs/heads/master
Commit: d5c413c41ba174276196a8b7c5f590632c5e20be
Parents: 0667de0
Author: Blake Eggleston 
Authored: Tue May 9 12:45:10 2017 -0700
Committer: Philip Thompson 
Committed: Tue May 9 15:45:10 2017 -0400

--
 repair_tests/preview_repair_test.py | 85 
 1 file changed, 85 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/d5c413c4/repair_tests/preview_repair_test.py
--
diff --git a/repair_tests/preview_repair_test.py 
b/repair_tests/preview_repair_test.py
new file mode 100644
index 000..e888a9b
--- /dev/null
+++ b/repair_tests/preview_repair_test.py
@@ -0,0 +1,85 @@
+import time
+
+from cassandra import ConsistencyLevel
+from cassandra.query import SimpleStatement
+
+from dtest import Tester
+from tools.decorators import no_vnodes
+
+
+class PreviewRepairTest(Tester):
+
+def assert_no_repair_history(self, session):
+rows = session.execute("select * from 
system_distributed.repair_history")
+self.assertEqual(rows.current_rows, [])
+rows = session.execute("select * from 
system_distributed.parent_repair_history")
+self.assertEqual(rows.current_rows, [])
+
+@no_vnodes()
+def preview_test(self):
+""" Test that preview correctly detects out of sync data """
+cluster = self.cluster
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False, 'commitlog_sync_period_in_ms': 500})
+cluster.populate(3).start()
+node1, node2, node3 = cluster.nodelist()
+
+session = self.patient_exclusive_cql_connection(node3)
+session.execute("CREATE KEYSPACE ks WITH 
REPLICATION={'class':'SimpleStrategy', 'replication_factor': 3}")
+session.execute("CREATE TABLE ks.tbl (k INT PRIMARY KEY, v INT)")
+
+# everything should be in sync
+result = node1.repair(options=['ks', '--preview'])
+self.assertIn("Previewed data was in sync", result.stdout)
+self.assert_no_repair_history(session)
+
+# make data inconsistent between nodes
+stmt = SimpleStatement("INSERT INTO ks.tbl (k,v) VALUES (%s, %s)")
+stmt.consistency_level = ConsistencyLevel.ALL
+for i in range(10):
+session.execute(stmt, (i, i))
+node3.flush()
+time.sleep(1)
+node3.stop(gently=False)
+stmt.consistency_level = ConsistencyLevel.QUORUM
+
+session = self.exclusive_cql_connection(node1)
+for i in range(10):
+session.execute(stmt, (i + 10, i + 10))
+node1.flush()
+time.sleep(1)
+node1.stop(gently=False)
+node3.start(wait_other_notice=True, wait_for_binary_proto=True)
+session = self.exclusive_cql_connection(node2)
+for i in range(10):
+session.execute(stmt, (i + 20, i + 20))
+node1.start(wait_other_notice=True, wait_for_binary_proto=True)
+
+# data should not be in sync for full and unrepaired previews
+result = node1.repair(options=['ks', '--preview'])
+self.assertIn("Total estimated streaming", result.stdout)
+self.assertNotIn("Previewed data was in sync", result.stdout)
+
+result = node1.repair(options=['ks', '--preview', '--full'])
+self.assertIn("Total estimated streaming", result.stdout)
+self.assertNotIn("Previewed data was in sync", result.stdout)
+
+# repaired data should be in sync anyway
+result = node1.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+self.assert_no_repair_history(session)
+
+# repair the data...
+node1.repair(options=['ks'])
+for node in cluster.nodelist():
+node.nodetool('compact ks tbl')
+
+# ...and everything should be in sync
+result = node1.repair(options=['ks', '--preview'])
+self.assertIn("Previewed data was in sync", result.stdout)
+
+result = node1.repair(options=['ks', '--preview', '--full'])
+self.assertIn("Previewed data was in sync", result.stdout)
+
+result = node1.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)


-
To unsubscribe, e-mail: 

[02/50] cassandra-dtest git commit: CASSANDRA-13294

2017-07-12 Thread zznate
CASSANDRA-13294


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/eb3b5749
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/eb3b5749
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/eb3b5749

Branch: refs/heads/master
Commit: eb3b5749ef4f22460365ad8369ff668d68c5d85b
Parents: 6bb074f
Author: Marcus Eriksson 
Authored: Mon Mar 6 14:38:14 2017 +0100
Committer: Philip Thompson 
Committed: Mon Mar 6 09:56:46 2017 -0500

--
 upgrade_tests/regression_test.py | 70 +++
 upgrade_tests/upgrade_base.py| 10 +
 2 files changed, 80 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/eb3b5749/upgrade_tests/regression_test.py
--
diff --git a/upgrade_tests/regression_test.py b/upgrade_tests/regression_test.py
index 14a17f7..cff3c50 100644
--- a/upgrade_tests/regression_test.py
+++ b/upgrade_tests/regression_test.py
@@ -7,9 +7,15 @@ from cassandra import ConsistencyLevel as CL
 from nose.tools import assert_not_in
 
 from dtest import RUN_STATIC_UPGRADE_MATRIX
+from tools.jmxutils import (JolokiaAgent, make_mbean)
 from upgrade_base import UpgradeTester
 from upgrade_manifest import build_upgrade_pairs
 
+import glob
+import os
+import re
+import time
+
 
 class TestForRegressions(UpgradeTester):
 """
@@ -60,6 +66,70 @@ class TestForRegressions(UpgradeTester):
 count = s[1].execute("select count(*) from 
financial.symbol_history where symbol='{}' and year={};".format(symbol, 
year))[0][0]
 self.assertEqual(count, expected_rows, "actual {} did not 
match expected {}".format(count, expected_rows))
 
+def test13294(self):
+"""
+Tests upgrades where a bunch of sstable files share the same filename prefix as another sstable
+
+this sstable is then compacted and we verify that no other sstables are removed
+
+@jira_ticket CASSANDRA-13294
+"""
+cluster = self.cluster
+cluster.set_datadir_count(1)  # we want the same prefix for all 
sstables
+session = self.prepare(jolokia=True)
+session.execute("CREATE KEYSPACE test13294 WITH 
replication={'class':'SimpleStrategy', 'replication_factor': 2};")
+session.execute("CREATE TABLE test13294.t (id int PRIMARY KEY, d int) 
WITH compaction = {'class': 'SizeTieredCompactionStrategy','enabled':'false'}")
+for x in xrange(0, 5):
+session.execute("INSERT INTO test13294.t (id, d) VALUES (%d, %d)" 
% (x, x))
+cluster.flush()
+
+node1 = cluster.nodelist()[0]
+
+sstables = node1.get_sstables('test13294', 't')
+node1.stop(wait_other_notice=True)
+generation_re = re.compile(r'(.*-)(\d+)(-.*)')
+mul = 1
+first_sstable = ''
+for sstable in sstables:
+res = generation_re.search(sstable)
+if res:
+glob_for = "%s%s-*" % (res.group(1), res.group(2))
+for f in glob.glob(glob_for):
+res2 = generation_re.search(f)
+new_filename = "%s%s%s" % (res2.group(1), mul, 
res2.group(3))
+os.rename(f, new_filename)
+if first_sstable == '' and '-Data' in new_filename:
+first_sstable = new_filename  # we should compact this
+mul = mul * 10
+node1.start(wait_other_notice=True)
+sessions = self.do_upgrade(session)
+checked = False
+for is_upgraded, cursor in sessions:
+if is_upgraded:
+sstables_before = self.get_all_sstables(node1)
+self.compact_sstable(node1, first_sstable)
+time.sleep(2)  # wait for sstables to get physically removed
+sstables_after = self.get_all_sstables(node1)
+# since autocompaction is disabled and we compact a single 
sstable above
+# the number of sstables after should be the same as before.
+self.assertEquals(len(sstables_before), len(sstables_after))
+checked = True
+self.assertTrue(checked)
+
+def compact_sstable(self, node, sstable):
+mbean = make_mbean('db', type='CompactionManager')
+with JolokiaAgent(node) as jmx:
+jmx.execute_method(mbean, 'forceUserDefinedCompaction', [sstable])
+
+def get_all_sstables(self, node):
+# note that node.get_sstables(...) only returns current version 
sstables
+keyspace_dirs = [os.path.join(node.get_path(), "data{0}".format(x), 
"test13294") for x in xrange(0, node.cluster.data_dir_count)]
+files = []
+for d in 

[15/50] cassandra-dtest git commit: set max_version since deprecated repair methods are removed in 4.0

2017-07-12 Thread zznate
set max_version since deprecated repair methods are removed in 4.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/b93fd831
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/b93fd831
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/b93fd831

Branch: refs/heads/master
Commit: b93fd83170cb880155208be4a5b10a1e0ba041dc
Parents: c17e011
Author: Yuki Morishita 
Authored: Fri Mar 17 17:45:10 2017 +0900
Committer: Joel Knighton 
Committed: Mon Apr 3 23:58:50 2017 -0500

--
 repair_tests/deprecated_repair_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/b93fd831/repair_tests/deprecated_repair_test.py
--
diff --git a/repair_tests/deprecated_repair_test.py 
b/repair_tests/deprecated_repair_test.py
index 6ae0d09..4b33842 100644
--- a/repair_tests/deprecated_repair_test.py
+++ b/repair_tests/deprecated_repair_test.py
@@ -11,7 +11,7 @@ from tools.jmxutils import (JolokiaAgent, make_mbean,
 remove_perf_disable_shared_mem)
 
 
-@since("2.2")
+@since("2.2", max_version="4")
 class TestDeprecatedRepairAPI(Tester):
 """
 @jira_ticket CASSANDRA-9570
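
For readers new to the decorator: a min/max version gate of this shape can be
sketched as below. This is an illustration only, not the actual
tools.decorators.since implementation, and the Tester-style self.cluster.version()
call is an assumption.

from distutils.version import LooseVersion
from functools import wraps
import unittest

def version_gate(min_version, max_version=None):
    # Skip the wrapped test unless min_version <= cluster version < max_version.
    def decorate(f):
        @wraps(f)
        def wrapper(self, *args, **kwargs):
            v = LooseVersion(str(self.cluster.version()))
            if v < LooseVersion(min_version):
                raise unittest.SkipTest('requires {}+'.format(min_version))
            if max_version is not None and v >= LooseVersion(max_version):
                raise unittest.SkipTest('removed in {}'.format(max_version))
            return f(self, *args, **kwargs)
        return wrapper
    return decorate

With @since("2.2", max_version="4"), the deprecated repair API tests are skipped on
4.0+ clusters, where the JMX methods they exercise no longer exist.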


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[42/50] cassandra-dtest git commit: Force decommission on topology test where required on 4.0+, run with vnodes

2017-07-12 Thread zznate
Force decommission on topology test where required on 4.0+, run with vnodes


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/c368a909
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/c368a909
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/c368a909

Branch: refs/heads/master
Commit: c368a9098a4f5c8bd476257019154bf700963294
Parents: 1cc4941
Author: Joel Knighton 
Authored: Mon Jun 19 14:53:06 2017 -0500
Committer: Philip Thompson 
Committed: Tue Jun 20 12:10:58 2017 +0200

--
 topology_test.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c368a909/topology_test.py
--
diff --git a/topology_test.py b/topology_test.py
index 15827f3..45c1c73 100644
--- a/topology_test.py
+++ b/topology_test.py
@@ -351,7 +351,6 @@ class TestTopology(Tester):
 query_c1c2(session, n, ConsistencyLevel.ONE)
 
 @since('3.0')
-@no_vnodes()
 def decommissioned_node_cant_rejoin_test(self):
 '''
 @jira_ticket CASSANDRA-8801
@@ -375,7 +374,7 @@ class TestTopology(Tester):
 node1, node2, node3 = self.cluster.nodelist()
 
 debug('decommissioning...')
-node3.decommission()
+node3.decommission(force=self.cluster.version() >= '4.0')
 debug('stopping...')
 node3.stop()
 debug('attempting restart...')


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[14/50] cassandra-dtest git commit: Restrict data dir for sstabledump test

2017-07-12 Thread zznate
Restrict data dir for sstabledump test


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/c17e011b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/c17e011b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/c17e011b

Branch: refs/heads/master
Commit: c17e011b944ed29b57c29b6f6b2258de0d35f90b
Parents: 8b58b70
Author: Yuki Morishita 
Authored: Mon Apr 3 10:01:57 2017 +0900
Committer: Philip Thompson 
Committed: Mon Apr 3 10:05:47 2017 -0400

--
 offline_tools_test.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c17e011b/offline_tools_test.py
--
diff --git a/offline_tools_test.py b/offline_tools_test.py
index 89d7ca9..3f9c2b7 100644
--- a/offline_tools_test.py
+++ b/offline_tools_test.py
@@ -389,6 +389,8 @@ class TestOfflineTools(Tester):
 Test that sstabledump functions properly offline to output the 
contents of a table.
 """
 cluster = self.cluster
+# disable JBOD conf since the test expects exactly one SSTable to be 
written.
+cluster.set_datadir_count(1)
 cluster.populate(1).start(wait_for_binary_proto=True)
 [node1] = cluster.nodelist()
 session = self.patient_cql_connection(node1)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[13/50] cassandra-dtest git commit: Merge pull request #1458 from pauloricardomg/12929

2017-07-12 Thread zznate
Merge pull request #1458 from pauloricardomg/12929

Changed streaming keep alive test to use byteman (CASSANDRA-12929)

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/8b58b700
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/8b58b700
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/8b58b700

Branch: refs/heads/master
Commit: 8b58b7004658350af3083dab30de395353c52d41
Parents: ee6aa78 737aab2
Author: Paulo Ricardo Motta Gomes 
Authored: Mon Apr 3 10:51:40 2017 -0300
Committer: GitHub 
Committed: Mon Apr 3 10:51:40 2017 -0300

--
 bootstrap_test.py   | 23 ++-
 byteman/stream_5s_sleep.btm | 13 +
 2 files changed, 27 insertions(+), 9 deletions(-)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[01/50] cassandra-dtest git commit: Merge pull request #1448 from stef1927/13071

2017-07-12 Thread zznate
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master [created] f1b0ba8a1


Merge pull request #1448 from stef1927/13071

New test for CASSANDRA-13071

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/6bb074f7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/6bb074f7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/6bb074f7

Branch: refs/heads/master
Commit: 6bb074f7c00a0067128c19845175da2868483f9f
Parents: 495165a 87872e5
Author: Stefania Alborghetti 
Authored: Thu Mar 2 10:03:59 2017 +0800
Committer: GitHub 
Committed: Thu Mar 2 10:03:59 2017 +0800

--
 cqlsh_tests/cqlsh_copy_tests.py | 87 
 1 file changed, 87 insertions(+)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[11/50] cassandra-dtest git commit: Update current branch versions in upgrade manifest

2017-07-12 Thread zznate
Update current branch versions in upgrade manifest


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/ee6aa78f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/ee6aa78f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/ee6aa78f

Branch: refs/heads/master
Commit: ee6aa78f34390070b62c0d991d14fb018565f57e
Parents: 795d91c
Author: Joel Knighton 
Authored: Thu Mar 30 16:57:30 2017 -0500
Committer: Philip Thompson 
Committed: Fri Mar 31 10:58:10 2017 -0400

--
 upgrade_tests/upgrade_manifest.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/ee6aa78f/upgrade_tests/upgrade_manifest.py
--
diff --git a/upgrade_tests/upgrade_manifest.py 
b/upgrade_tests/upgrade_manifest.py
index 6873127..d5ed776 100644
--- a/upgrade_tests/upgrade_manifest.py
+++ b/upgrade_tests/upgrade_manifest.py
@@ -67,13 +67,13 @@ indev_2_0_x = None  # None if release not likely
 current_2_0_x = VersionMeta(name='current_2_0_x', family='2.0.x', 
variant='current', version='2.0.17', min_proto_v=1, max_proto_v=2, 
java_versions=(7,))
 
 indev_2_1_x = VersionMeta(name='indev_2_1_x', family='2.1.x', variant='indev', 
version='github:apache/cassandra-2.1', min_proto_v=1, max_proto_v=3, 
java_versions=(7, 8))
-current_2_1_x = VersionMeta(name='current_2_1_x', family='2.1.x', 
variant='current', version='2.1.15', min_proto_v=1, max_proto_v=3, 
java_versions=(7, 8))
+current_2_1_x = VersionMeta(name='current_2_1_x', family='2.1.x', 
variant='current', version='2.1.17', min_proto_v=1, max_proto_v=3, 
java_versions=(7, 8))
 
 indev_2_2_x = VersionMeta(name='indev_2_2_x', family='2.2.x', variant='indev', 
version='github:apache/cassandra-2.2', min_proto_v=1, max_proto_v=4, 
java_versions=(7, 8))
-current_2_2_x = VersionMeta(name='current_2_2_x', family='2.2.x', 
variant='current', version='2.2.8', min_proto_v=1, max_proto_v=4, 
java_versions=(7, 8))
+current_2_2_x = VersionMeta(name='current_2_2_x', family='2.2.x', 
variant='current', version='2.2.9', min_proto_v=1, max_proto_v=4, 
java_versions=(7, 8))
 
 indev_3_0_x = VersionMeta(name='indev_3_0_x', family='3.0.x', variant='indev', 
version='github:apache/cassandra-3.0', min_proto_v=3, max_proto_v=4, 
java_versions=(8,))
-current_3_0_x = VersionMeta(name='current_3_0_x', family='3.0.x', 
variant='current', version='3.0.9', min_proto_v=3, max_proto_v=4, 
java_versions=(8,))
+current_3_0_x = VersionMeta(name='current_3_0_x', family='3.0.x', 
variant='current', version='3.0.12', min_proto_v=3, max_proto_v=4, 
java_versions=(8,))
 
 indev_3_x = VersionMeta(name='indev_3_x', family='3.x', variant='indev', 
version='github:apache/cassandra-3.11', min_proto_v=3, max_proto_v=4, 
java_versions=(8,))
 current_3_x = VersionMeta(name='current_3_x', family='3.x', variant='current', 
version='3.10', min_proto_v=3, max_proto_v=4, java_versions=(8,))
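
Every entry in this manifest carries the same fields; as a sketch of that record
shape only (the real VersionMeta in upgrade_tests/upgrade_manifest.py may include
additional behaviour beyond a plain record):

from collections import namedtuple

# Field names mirror the manifest entries above; the values shown are the
# 3.0.x 'current' entry from this commit.
VersionMeta = namedtuple('VersionMeta',
                         ['name', 'family', 'variant', 'version',
                          'min_proto_v', 'max_proto_v', 'java_versions'])

current_3_0_x = VersionMeta(name='current_3_0_x', family='3.0.x', variant='current',
                            version='3.0.12', min_proto_v=3, max_proto_v=4,
                            java_versions=(8,))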


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[49/50] cassandra-dtest git commit: Add a test for CASSANDRA-13346 (#1467)

2017-07-12 Thread zznate
Add a test for CASSANDRA-13346 (#1467)

* Add a test for CASSANDRA-13346; Optionally make reading JMX attributes 
verbose or not

* Compliance with Pep8


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/6f4e41e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/6f4e41e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/6f4e41e0

Branch: refs/heads/master
Commit: 6f4e41e04c3d48f1dbbcd0fc636e39e8d114a6be
Parents: 058b952
Author: juiceblender 
Authored: Fri Jul 7 18:39:00 2017 +1000
Committer: Philip Thompson 
Committed: Fri Jul 7 10:39:00 2017 +0200

--
 jmx_test.py   | 93 +-
 tools/jmxutils.py | 12 +++
 2 files changed, 90 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/6f4e41e0/jmx_test.py
--
diff --git a/jmx_test.py b/jmx_test.py
index 7df84ac..16c1ece 100644
--- a/jmx_test.py
+++ b/jmx_test.py
@@ -13,7 +13,6 @@ from tools.misc import generate_ssl_stores
 
 
 class TestJMX(Tester):
-
 def netstats_test(self):
 """
 Check functioning of nodetool netstats, especially with restarts.
@@ -48,7 +47,8 @@ class TestJMX(Tester):
 if not isinstance(e, ToolError):
 raise
 else:
-self.assertRegexpMatches(str(e), "ConnectException: 
'Connection refused( \(Connection refused\))?'.")
+self.assertRegexpMatches(str(e),
+ "ConnectException: 'Connection 
refused( \(Connection refused\))?'.")
 
 self.assertTrue(running, msg='node1 never started')
 
@@ -69,9 +69,12 @@ class TestJMX(Tester):
 debug('Version {} typeName {}'.format(version, typeName))
 
 # TODO the keyspace and table name are capitalized in 2.0
-memtable_size = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1', name='AllMemtablesHeapSize')
-disk_size = make_mbean('metrics', type=typeName, keyspace='keyspace1', 
scope='standard1', name='LiveDiskSpaceUsed')
-sstable_count = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1', name='LiveSSTableCount')
+memtable_size = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1',
+   name='AllMemtablesHeapSize')
+disk_size = make_mbean('metrics', type=typeName, keyspace='keyspace1', 
scope='standard1',
+   name='LiveDiskSpaceUsed')
+sstable_count = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1',
+   name='LiveSSTableCount')
 
 with JolokiaAgent(node1) as jmx:
 mem_size = jmx.read_attribute(memtable_size, "Value")
@@ -88,6 +91,76 @@ class TestJMX(Tester):
 sstables = jmx.read_attribute(sstable_count, "Value")
 self.assertGreaterEqual(int(sstables), 1)
 
+@since('3.0')
+def mv_metric_mbeans_release_test(self):
+"""
+Test that the right mbeans are created and released when creating mvs
+"""
+cluster = self.cluster
+cluster.populate(1)
+node = cluster.nodelist()[0]
+remove_perf_disable_shared_mem(node)
+cluster.start(wait_for_binary_proto=True)
+
+node.run_cqlsh(cmds="""
+CREATE KEYSPACE mvtest WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor': 1 };
+CREATE TABLE mvtest.testtable (
+foo int,
+bar text,
+baz text,
+PRIMARY KEY (foo, bar)
+);
+
+CREATE MATERIALIZED VIEW mvtest.testmv AS
+SELECT foo, bar, baz FROM mvtest.testtable WHERE
+foo IS NOT NULL AND bar IS NOT NULL AND baz IS NOT NULL
+PRIMARY KEY (foo, bar, baz);""")
+
+table_memtable_size = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testtable',
+ name='AllMemtablesHeapSize')
+table_view_read_time = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testtable',
+  name='ViewReadTime')
+table_view_lock_time = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testtable',
+  name='ViewLockAcquireTime')
+mv_memtable_size = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testmv',
+  name='AllMemtablesHeapSize')
+mv_view_read_time = 

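Each of the new assertions follows the same mbean-read pattern; a minimal sketch of
it, assuming the tools.jmxutils helpers behave as in the hunk above (the wrapper
function itself is made up):

# Assumes a started ccm node prepared for Jolokia
# (remove_perf_disable_shared_mem before cluster.start, as in the test above).
from tools.jmxutils import JolokiaAgent, make_mbean

def read_table_metric(node, keyspace, table, metric):
    mbean = make_mbean('metrics', type='Table', keyspace=keyspace,
                       scope=table, name=metric)
    with JolokiaAgent(node) as jmx:
        return jmx.read_attribute(mbean, 'Value')

# e.g. read_table_metric(node, 'mvtest', 'testtable', 'AllMemtablesHeapSize')
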
[32/50] cassandra-dtest git commit: Hinted handoff setmaxwindow test should only run on versions >= 4.0

2017-07-12 Thread zznate
Hinted handoff setmaxwindow test should only run on versions >= 4.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/7f3566ad
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/7f3566ad
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/7f3566ad

Branch: refs/heads/master
Commit: 7f3566ad7b27b9caa8ceccb361b09e42113aa41b
Parents: bea71d8
Author: Joel Knighton 
Authored: Wed May 24 14:20:28 2017 -0500
Committer: Philip Thompson 
Committed: Tue May 30 14:18:24 2017 +0200

--
 hintedhandoff_test.py | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/7f3566ad/hintedhandoff_test.py
--
diff --git a/hintedhandoff_test.py b/hintedhandoff_test.py
index 1ed3305..6345e3c 100644
--- a/hintedhandoff_test.py
+++ b/hintedhandoff_test.py
@@ -121,6 +121,7 @@ class TestHintedHandoffConfig(Tester):
 
 self._do_hinted_handoff(node1, node2, True)
 
+@since('4.0')
 def hintedhandoff_setmaxwindow_test(self):
 """
 Test global hinted handoff against max_hint_window_in_ms update via 
nodetool


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[43/50] cassandra-dtest git commit: Add dtests for compatibility flag introduced in CASSANDRA-13004 (#1485)

2017-07-12 Thread zznate
Add dtests for compatibility flag introduced in CASSANDRA-13004 (#1485)

Add dtests for compatibility flag introduced in CASSANDRA-13004

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/6847bc10
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/6847bc10
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/6847bc10

Branch: refs/heads/master
Commit: 6847bc10c2a3fa3ee911b0cf3826920bc4dbad18
Parents: c368a90
Author: Alex Petrov 
Authored: Tue Jun 20 20:25:51 2017 +0200
Committer: GitHub 
Committed: Tue Jun 20 20:25:51 2017 +0200

--
 upgrade_tests/compatibility_flag_test.py | 132 ++
 1 file changed, 132 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/6847bc10/upgrade_tests/compatibility_flag_test.py
--
diff --git a/upgrade_tests/compatibility_flag_test.py 
b/upgrade_tests/compatibility_flag_test.py
new file mode 100644
index 000..1abeaef
--- /dev/null
+++ b/upgrade_tests/compatibility_flag_test.py
@@ -0,0 +1,132 @@
+from dtest import Tester, debug
+from tools.assertions import assert_all
+from tools.decorators import since
+
+
+class CompatibilityFlagTest(Tester):
+"""
+Test 30 protocol compatibility flag
+
+@jira CASSANDRA-13004
+"""
+
+def _compatibility_flag_off_with_30_node_test(self, from_version):
+"""
+Test compatibility with 30 protocol version: if the flag is unset, 
schema agreement can not be reached
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+cluster.set_install_dir(version=from_version)
+cluster.start(wait_for_binary_proto=True)
+
+node1.drain()
+node1.watch_log_for("DRAINED")
+node1.stop(wait_other_notice=False)
+debug("Upgrading to current version")
+self.set_node_to_current_version(node1)
+node1.start(wait_for_binary_proto=True)
+
+node1.watch_log_for("Not pulling schema because versions match or 
shouldPullSchemaFrom returned false", filename='debug.log')
+node2.watch_log_for("Not pulling schema because versions match or 
shouldPullSchemaFrom returned false", filename='debug.log')
+
+def _compatibility_flag_on_with_30_test(self, from_version):
+"""
+Test compatibility with 30 protocol version: if the flag is set, 
schema agreement can be reached
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+cluster.set_install_dir(version=from_version)
+cluster.start(wait_for_binary_proto=True)
+
+node1.drain()
+node1.watch_log_for("DRAINED")
+node1.stop(wait_other_notice=False)
+debug("Upgrading to current version")
+self.set_node_to_current_version(node1)
+node1.start(jvm_args=["-Dcassandra.force_3_0_protocol_version=true"], 
wait_for_binary_proto=True)
+
+session = self.patient_cql_connection(node1)
+self._run_test(session)
+
+def _compatibility_flag_on_3014_test(self):
+"""
+Test compatibility between post-13004 nodes, one of which is in 
compatibility mode
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+
+node1.start(wait_for_binary_proto=True)
+node2.start(jvm_args=["-Dcassandra.force_3_0_protocol_version=true"], 
wait_for_binary_proto=True)
+
+session = self.patient_cql_connection(node1)
+self._run_test(session)
+
+def _compatibility_flag_off_3014_test(self):
+"""
+Test compatibility between post-13004 nodes
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+
+node1.start(wait_for_binary_proto=True)
+node2.start(wait_for_binary_proto=True)
+
+session = self.patient_cql_connection(node1)
+self._run_test(session)
+
+def _run_test(self, session):
+# Make sure the system_auth table will get replicated to the node that 
we're going to replace
+
+session.execute("CREATE KEYSPACE test WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': '2'} ;")
+session.execute("CREATE TABLE test.test (a text PRIMARY KEY, b text, c 
text);")
+
+for i in range(1, 6):
+session.execute("INSERT INTO test.test (a, b, c) VALUES ('{}', 
'{}', '{}');".format(i, i + 1, i + 2))
+
+assert_all(session,
+   "SELECT * FROM test.test",
+   [[str(i), str(i + 1), str(i + 2)] for i in range(1, 6)], 
ignore_order=True)
+
+

[28/50] cassandra-dtest git commit: Fix version check after C* ticket was committed

2017-07-12 Thread zznate
Fix version check after C* ticket was committed


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/5c99d202
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/5c99d202
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/5c99d202

Branch: refs/heads/master
Commit: 5c99d2028d1b03c2543dd81b90700922aa9ec93b
Parents: afda2d4
Author: Sam Tunnicliffe 
Authored: Thu May 11 18:33:47 2017 +0100
Committer: Philip Thompson 
Committed: Thu May 11 14:24:05 2017 -0400

--
 cqlsh_tests/cqlsh_tests.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/5c99d202/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index 4feadc1..e7bc11c 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -984,12 +984,12 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 
 def get_index_output(self, index, ks, table, col):
 # a quoted index name (e.g. "FooIndex") is only correctly echoed by 
DESCRIBE
-# from 3.0.11 & 3.10
+# from 3.0.14 & 3.11
 if index[0] == '"' and index[-1] == '"':
 version = self.cluster.version()
-if version >= LooseVersion('3.10'):
+if version >= LooseVersion('3.11'):
 pass
-elif LooseVersion('3.1') > version >= LooseVersion('3.0.11'):
+elif LooseVersion('3.1') > version >= LooseVersion('3.0.14'):
 pass
 else:
 index = index[1:-1]
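
To make the new boundary concrete, the branch can be read as the following sketch
(the helper name is made up; the LooseVersion comparisons mirror the hunk above):

from distutils.version import LooseVersion

def echoes_quoted_name(version_string):
    # True when DESCRIBE echoes a quoted index name verbatim, per the branch above.
    v = LooseVersion(version_string)
    return (v >= LooseVersion('3.11')
            or LooseVersion('3.1') > v >= LooseVersion('3.0.14'))

# echoes_quoted_name('3.0.14') -> True    echoes_quoted_name('3.0.13') -> False
# echoes_quoted_name('3.11')   -> True    echoes_quoted_name('3.10')   -> False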


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[09/50] cassandra-dtest git commit: Merge pull request #1454 from pcmanus/test-13382

2017-07-12 Thread zznate
Merge pull request #1454 from pcmanus/test-13382

Add test to reproduce CASSANDRA-13382

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/795d91c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/795d91c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/795d91c0

Branch: refs/heads/master
Commit: 795d91c0ced1e9ef76368a53dd49de02ea2bc629
Parents: 7f29ad6 aa0c412
Author: Sylvain Lebresne 
Authored: Tue Mar 28 14:42:31 2017 +0200
Committer: GitHub 
Committed: Tue Mar 28 14:42:31 2017 +0200

--
 upgrade_tests/cql_tests.py | 25 +
 1 file changed, 25 insertions(+)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[27/50] cassandra-dtest git commit: Include quoted index names in describe test

2017-07-12 Thread zznate
Include quoted index names in describe test


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/f2925484
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/f2925484
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/f2925484

Branch: refs/heads/master
Commit: f2925484f8e3375a3373b689b425a80f7ec54f36
Parents: 6540ba4
Author: Sam Tunnicliffe 
Authored: Thu Oct 27 09:25:26 2016 +0100
Committer: Philip Thompson 
Committed: Thu May 11 14:24:05 2017 -0400

--
 cqlsh_tests/cqlsh_tests.py | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/f2925484/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index caacaa5..7734848 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -688,6 +688,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 CREATE KEYSPACE test WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
 CREATE TABLE test.users ( userid text PRIMARY KEY, firstname 
text, lastname text, age int);
 CREATE INDEX myindex ON test.users (age);
+CREATE INDEX "QuotedNameIndex" on test.users (firstName);
 CREATE TABLE test.test (id int, col int, val text, PRIMARY 
KEY(id, col));
 CREATE INDEX ON test.test (col);
 CREATE INDEX ON test.test (val)
@@ -738,7 +739,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 self.execute(cql='DESCRIBE test.myindex', expected_err="'myindex' not 
found in keyspace 'test'")
 self.execute(cql="""
 CREATE TABLE test.users ( userid text PRIMARY KEY, firstname 
text, lastname text, age int);
-CREATE INDEX myindex ON test.users (age)
+CREATE INDEX myindex ON test.users (age);
+CREATE INDEX "QuotedNameIndex" on test.users (firstname)
 """)
 self.execute(cql="DESCRIBE test.users", 
expected_output=self.get_users_table_output())
 self.execute(cql='DESCRIBE test.myindex', 
expected_output=self.get_index_output('myindex', 'test', 'users', 'age'))
@@ -748,6 +750,10 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 self.execute(cql='DESCRIBE test.myindex', expected_err="'myindex' not 
found in keyspace 'test'")
 self.execute(cql='CREATE INDEX myindex ON test.users (age)')
 self.execute(cql='DESCRIBE INDEX test.myindex', 
expected_output=self.get_index_output('myindex', 'test', 'users', 'age'))
+self.execute(cql='DROP INDEX test."QuotedNameIndex"')
+self.execute(cql='DESCRIBE test."QuotedNameIndex"', 
expected_err="'QuotedNameIndex' not found in keyspace 'test'")
+self.execute(cql='CREATE INDEX "QuotedNameIndex" ON test.users 
(firstname)')
+self.execute(cql='DESCRIBE INDEX test."QuotedNameIndex"', 
expected_output=self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname'))
 
 # Alter table. Renaming indexed columns is not allowed, and since 3.0 
neither is dropping them
 # Prior to 3.0 the index would have been automatically dropped, but 
now we need to explicitly do that.
@@ -929,7 +935,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('myindex', 'test', 'users', 'age')
+""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
+   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
 elif self.cluster.version() >= LooseVersion('3.0'):
 return """
 CREATE TABLE test.users (
@@ -951,7 +958,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('myindex', 'test', 'users', 'age')
+""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
+   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
 else:
 return """
 CREATE TABLE test.users (
@@ -972,7 +980,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND 

[05/50] cassandra-dtest git commit: Merge pull request #1451 from tjake/mv-build-wait

2017-07-12 Thread zznate
Merge pull request #1451 from tjake/mv-build-wait

Fix for materialized_views_tests to wait for the build to finish befo…

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/c6e6a4ed
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/c6e6a4ed
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/c6e6a4ed

Branch: refs/heads/master
Commit: c6e6a4edc6b2e1e1c346696c29afd89150e49b06
Parents: 13a8ec4 21d4736
Author: Jake Luciani 
Authored: Thu Mar 9 11:34:04 2017 -0500
Committer: GitHub 
Committed: Thu Mar 9 11:34:04 2017 -0500

--
 materialized_views_test.py | 21 +
 1 file changed, 21 insertions(+)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[35/50] cassandra-dtest git commit: Merge pull request #1477 from stef1927/13559

2017-07-12 Thread zznate
Merge pull request #1477 from stef1927/13559

Test for CASSANDRA-13559

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/ef84f767
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/ef84f767
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/ef84f767

Branch: refs/heads/master
Commit: ef84f7679ad64b708cb19c5294e2e670fb69df25
Parents: 6f7caba bbe136c
Author: Stefania Alborghetti 
Authored: Fri Jun 9 07:36:35 2017 +0800
Committer: GitHub 
Committed: Fri Jun 9 07:36:35 2017 +0800

--
 upgrade_tests/regression_test.py | 42 ++-
 1 file changed, 41 insertions(+), 1 deletion(-)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[20/50] cassandra-dtest git commit: Merge pull request #1456 from stef1927/13364

2017-07-12 Thread zznate
Merge pull request #1456 from stef1927/13364

Added test case for CASSANDRA-13364

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/0692e2b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/0692e2b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/0692e2b6

Branch: refs/heads/master
Commit: 0692e2b63b3efe507b4c87be3dd3afb90042b8f7
Parents: 8513c47 ec6b958
Author: Stefania Alborghetti 
Authored: Fri Apr 7 09:11:07 2017 +0800
Committer: GitHub 
Committed: Fri Apr 7 09:11:07 2017 +0800

--
 cqlsh_tests/cqlsh_copy_tests.py | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[50/50] cassandra-dtest git commit: Revert "Adds the ability to use uncompressed chunks in compressed files"

2017-07-12 Thread zznate
Revert "Adds the ability to use uncompressed chunks in compressed files"

This reverts commit 058b95289bf815495fced0ac55a78bcceceea9fa.


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/f1b0ba8a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/f1b0ba8a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/f1b0ba8a

Branch: refs/heads/master
Commit: f1b0ba8a1d60937b79ccac43b23c887da8ced32a
Parents: 6f4e41e
Author: Joel Knighton 
Authored: Wed Jul 12 12:11:02 2017 -0500
Committer: Joel Knighton 
Committed: Wed Jul 12 12:11:02 2017 -0500

--
 cqlsh_tests/cqlsh_tests.py | 44 ++---
 1 file changed, 2 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/f1b0ba8a/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index dee1891..e7bc11c 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -847,25 +847,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 PRIMARY KEY (id, col)
 """
 
-if self.cluster.version() >= LooseVersion('4.0'):
-ret += """
-) WITH CLUSTERING ORDER BY (col ASC)
-AND bloom_filter_fp_chance = 0.01
-AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
-AND comment = ''
-AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
-AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
-AND crc_check_chance = 1.0
-AND dclocal_read_repair_chance = 0.1
-AND default_time_to_live = 0
-AND gc_grace_seconds = 864000
-AND max_index_interval = 2048
-AND memtable_flush_period_in_ms = 0
-AND min_index_interval = 128
-AND read_repair_chance = 0.0
-AND speculative_retry = '99PERCENTILE';
-"""
-elif self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('3.9'):
 ret += """
 ) WITH CLUSTERING ORDER BY (col ASC)
 AND bloom_filter_fp_chance = 0.01
@@ -931,29 +913,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 return ret + "\n" + col_idx_def
 
 def get_users_table_output(self):
-if self.cluster.version() >= LooseVersion('4.0'):
-return """
-CREATE TABLE test.users (
-userid text PRIMARY KEY,
-age int,
-firstname text,
-lastname text
-) WITH bloom_filter_fp_chance = 0.01
-AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
-AND comment = ''
-AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
-AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
-AND crc_check_chance = 1.0
-AND dclocal_read_repair_chance = 0.1
-AND default_time_to_live = 0
-AND gc_grace_seconds = 864000
-AND max_index_interval = 2048
-AND memtable_flush_period_in_ms = 0
-AND min_index_interval = 128
-AND read_repair_chance = 0.0
-AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('myindex', 'test', 'users', 'age')
-elif self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('3.9'):
 return """
 CREATE TABLE test.users (
 userid text PRIMARY KEY,


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[04/50] cassandra-dtest git commit: Fix for materialized_views_tests to wait for the build to finish before any checks

2017-07-12 Thread zznate
Fix for materialized_views_tests to wait for the build to finish before any 
checks


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/21d47364
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/21d47364
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/21d47364

Branch: refs/heads/master
Commit: 21d473644ddd1a892b76db419ae454fb72f83465
Parents: 13a8ec4
Author: T Jake Luciani 
Authored: Thu Feb 23 17:12:48 2017 -0500
Committer: T Jake Luciani 
Committed: Thu Mar 9 11:31:32 2017 -0500

--
 materialized_views_test.py | 21 +
 1 file changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/21d47364/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index e8c86c0..0c9cdcb 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -90,6 +90,21 @@ class TestMaterializedViews(Tester):
 time.sleep(0.1)
 attempts -= 1
 
+def _wait_for_view(self, ks, view):
+debug("waiting for view")
+
+def _view_build_finished(node):
+s = self.patient_exclusive_cql_connection(node)
+result = list(s.execute("SELECT * FROM 
system.views_builds_in_progress WHERE keyspace_name='%s' AND view_name='%s'" % 
(ks, view)))
+return len(result) == 0
+
+for node in self.cluster.nodelist():
+if node.is_running():
+attempts = 50  # 1 sec per attempt, so 50 seconds total
+while attempts > 0 and not _view_build_finished(node):
+time.sleep(1)
+attempts -= 1
+
 def _insert_data(self, session):
 # insert data
 insert_stmt = "INSERT INTO users (username, password, gender, state, 
birth_year) VALUES "
@@ -184,6 +199,9 @@ class TestMaterializedViews(Tester):
 session.execute(("CREATE MATERIALIZED VIEW t_by_v AS SELECT * FROM t 
WHERE v IS NOT NULL "
  "AND id IS NOT NULL PRIMARY KEY (v, id)"))
 
+debug("wait for view to build")
+self._wait_for_view("ks", "t_by_v")
+
 debug("wait that all batchlogs are replayed")
 self._replay_batchlogs()
 
@@ -204,6 +222,9 @@ class TestMaterializedViews(Tester):
 session.execute(("CREATE MATERIALIZED VIEW t_by_v AS SELECT * FROM t 
WHERE v IS NOT NULL "
  "AND id IS NOT NULL PRIMARY KEY (v, id)"))
 
+debug("wait for view to build")
+self._wait_for_view("ks", "t_by_v")
+
 debug("wait that all batchlogs are replayed")
 self._replay_batchlogs()
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[07/50] cassandra-dtest git commit: Add repair test for CASSANDRA-13153

2017-07-12 Thread zznate
Add repair test for CASSANDRA-13153


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/7f29ad68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/7f29ad68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/7f29ad68

Branch: refs/heads/master
Commit: 7f29ad6885137d016fafa76ffabe14b699578668
Parents: cf21631
Author: Stefan Podkowinski 
Authored: Wed Feb 22 13:24:32 2017 +0100
Committer: Philip Thompson 
Committed: Tue Mar 21 17:03:30 2017 -0400

--
 repair_tests/repair_test.py | 56 
 1 file changed, 56 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/7f29ad68/repair_tests/repair_test.py
--
diff --git a/repair_tests/repair_test.py b/repair_tests/repair_test.py
index ff0ccf1..5148b00 100644
--- a/repair_tests/repair_test.py
+++ b/repair_tests/repair_test.py
@@ -1,5 +1,6 @@
 import threading
 import time
+import re
 from collections import namedtuple
 from threading import Thread
 from unittest import skip, skipIf
@@ -262,6 +263,61 @@ class TestRepair(BaseRepairTest):
 for node in cluster.nodelist():
 self.assertFalse(node.grep_log("Starting anticompaction"))
 
+def _get_repaired_data(self, node, keyspace):
+"""
+Based on incremental_repair_test.py:TestIncRepair implementation.
+"""
+_sstable_name = re.compile('SSTable: (.+)')
+_repaired_at = re.compile('Repaired at: (\d+)')
+_sstable_data = namedtuple('_sstabledata', ('name', 'repaired'))
+
+out = node.run_sstablemetadata(keyspace=keyspace).stdout
+
+def matches(pattern):
+return filter(None, [pattern.match(l) for l in out.split('\n')])
+
+names = [m.group(1) for m in matches(_sstable_name)]
+repaired_times = [int(m.group(1)) for m in matches(_repaired_at)]
+
+self.assertTrue(names)
+self.assertTrue(repaired_times)
+return [_sstable_data(*a) for a in zip(names, repaired_times)]
+
+@since('2.2.10', '4')
+def no_anticompaction_of_already_repaired_test(self):
+"""
+* Launch three node cluster and stress with RF2
+* Do incremental repair to have all sstables flagged as repaired
+* Stop node2, stress, start again and run full -pr repair
+* Verify that none of the already repaired sstables have been 
anti-compacted again
+@jira_ticket CASSANDRA-13153
+"""
+
+cluster = self.cluster
+debug("Starting cluster..")
+cluster.populate(3).start(wait_for_binary_proto=True)
+node1, node2, node3 = cluster.nodelist()
+# we use RF to make sure to cover only a set of sub-ranges when doing 
-full -pr
+node1.stress(stress_options=['write', 'n=50K', 'no-warmup', 'cl=ONE', 
'-schema', 'replication(factor=2)', '-rate', 'threads=50'])
+# disable compaction to make sure that we won't create any new 
sstables with repairedAt 0
+node1.nodetool('disableautocompaction keyspace1 standard1')
+# Do incremental repair of all ranges. All sstables are expected to 
have repairedAt set afterwards.
+node1.nodetool("repair keyspace1 standard1")
+meta = self._get_repaired_data(node1, 'keyspace1')
+repaired = set([m for m in meta if m.repaired > 0])
+self.assertEquals(len(repaired), len(meta))
+
+# stop node2, stress and start full repair to find out how synced 
ranges affect repairedAt values
+node2.stop(wait_other_notice=True)
+node1.stress(stress_options=['write', 'n=40K', 'no-warmup', 'cl=ONE', 
'-rate', 'threads=50'])
+node2.start(wait_for_binary_proto=True, wait_other_notice=True)
+node1.nodetool("repair -full -pr keyspace1 standard1")
+
+meta = self._get_repaired_data(node1, 'keyspace1')
+repairedAfterFull = set([m for m in meta if m.repaired > 0])
+# already repaired sstables must remain untouched
+self.assertEquals(repaired.intersection(repairedAfterFull), repaired)
+
 @since('2.2.1', '4')
 def anticompaction_after_normal_repair_test(self):
 """


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[08/50] cassandra-dtest git commit: Add test to reproduce CASSANDRA-13382

2017-07-12 Thread zznate
Add test to reproduce CASSANDRA-13382


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/aa0c412b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/aa0c412b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/aa0c412b

Branch: refs/heads/master
Commit: aa0c412b69036a2980a2c5f228cd31cf589c684b
Parents: 7f29ad6
Author: Sylvain Lebresne 
Authored: Mon Mar 27 14:55:26 2017 +0200
Committer: Sylvain Lebresne 
Committed: Tue Mar 28 14:39:37 2017 +0200

--
 upgrade_tests/cql_tests.py | 25 +
 1 file changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/aa0c412b/upgrade_tests/cql_tests.py
--
diff --git a/upgrade_tests/cql_tests.py b/upgrade_tests/cql_tests.py
index bd34269..d9245d2 100644
--- a/upgrade_tests/cql_tests.py
+++ b/upgrade_tests/cql_tests.py
@@ -5384,6 +5384,31 @@ class TestCQL(UpgradeTester):
 
 assert_one(cursor, "SELECT * FROM foo.bar", [0, 0])
 
+@since('3.0')
+def materialized_view_simple_test(self):
+"""
+Test that creates and populate a simple materialized view.
+@jira_ticket CASSANDRA-13382
+"""
+cursor = self.prepare()
+
+cursor.execute("CREATE KEYSPACE foo WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': 1}")
+cursor.execute("CREATE TABLE foo.test1 (k int, t int, v int, PRIMARY 
KEY(k, t))")
+
+cursor.execute("""
+CREATE MATERIALIZED VIEW foo.view1
+AS SELECT * FROM foo.test1
+WHERE v IS NOT NULL AND t IS NOT NULL
+PRIMARY KEY (k, v, t)
+""")
+
+for i in range(0, 10):
+cursor.execute("INSERT INTO foo.test1(k, t, v) VALUES (0, %d, %d)" 
% (i, 10 - i - 1))
+
+for is_upgraded, cursor in self.do_upgrade(cursor):
+debug("Querying {} node".format("upgraded" if is_upgraded else 
"old"))
+assert_all(cursor, "SELECT v, t FROM foo.view1 WHERE k = 0", [[i, 
10 - i - 1] for i in range(0, 10)])
+
 
 topology_specs = [
 {'NODES': 3,


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[06/50] cassandra-dtest git commit: Fix netstats_test to accept "Connection refused" messages pre- and post-Java 1.8u111

2017-07-12 Thread zznate
Fix netstats_test to accept "Connection refused" messages pre- and post-Java 
1.8u111


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/cf216319
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/cf216319
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/cf216319

Branch: refs/heads/master
Commit: cf2163198d1dee1339ee2d771d8a426e7dc3eb9d
Parents: c6e6a4e
Author: Joel Knighton 
Authored: Mon Mar 13 11:51:15 2017 -0500
Committer: Joel Knighton 
Committed: Wed Mar 15 11:50:33 2017 -0500

--
 jmx_test.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/cf216319/jmx_test.py
--
diff --git a/jmx_test.py b/jmx_test.py
index ec817b9..7251b12 100644
--- a/jmx_test.py
+++ b/jmx_test.py
@@ -28,7 +28,7 @@ class TestJMX(Tester):
 node1.flush()
 node1.stop(gently=False)
 
-with self.assertRaisesRegexp(ToolError, "ConnectException: 'Connection 
refused'."):
+with self.assertRaisesRegexp(ToolError, "ConnectException: 'Connection 
refused( \(Connection refused\))?'."):
 node1.nodetool('netstats')
 
 # don't wait; we're testing for when nodetool is called on a node 
mid-startup
@@ -48,7 +48,7 @@ class TestJMX(Tester):
 if not isinstance(e, ToolError):
 raise
 else:
-self.assertIn("ConnectException: 'Connection refused'.", 
str(e))
+self.assertRegexpMatches(str(e), "ConnectException: 
'Connection refused( \(Connection refused\))?'.")
 
 self.assertTrue(running, msg='node1 never started')
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[16/50] cassandra-dtest git commit: Restrict data dir for no_anticompaction_of_already_repaired_test

2017-07-12 Thread zznate
Restrict data dir for no_anticompaction_of_already_repaired_test


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/2e24587b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/2e24587b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/2e24587b

Branch: refs/heads/master
Commit: 2e24587b8c206eccca71ba7d93e7e0c75e718644
Parents: c17e011
Author: Paulo Motta 
Authored: Tue Apr 4 17:51:27 2017 -0300
Committer: Paulo Motta 
Committed: Tue Apr 4 17:51:27 2017 -0300

--
 repair_tests/repair_test.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/2e24587b/repair_tests/repair_test.py
--
diff --git a/repair_tests/repair_test.py b/repair_tests/repair_test.py
index 5148b00..5276556 100644
--- a/repair_tests/repair_test.py
+++ b/repair_tests/repair_test.py
@@ -295,6 +295,8 @@ class TestRepair(BaseRepairTest):
 
 cluster = self.cluster
 debug("Starting cluster..")
+# disable JBOD conf since the test expects sstables to be on the same 
disk
+cluster.set_datadir_count(1)
 cluster.populate(3).start(wait_for_binary_proto=True)
 node1, node2, node3 = cluster.nodelist()
 # we use RF to make sure to cover only a set of sub-ranges when doing 
-full -pr


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[22/50] cassandra-dtest git commit: CASSANDRA-13483: fixed test failure in snapshot_test.TestSnapshot.test_snapshot_and_restore_dropping_a_column

2017-07-12 Thread zznate
CASSANDRA-13483: fixed test failure in 
snapshot_test.TestSnapshot.test_snapshot_and_restore_dropping_a_column


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/0667de02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/0667de02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/0667de02

Branch: refs/heads/master
Commit: 0667de025dd4e85dbae1b30db4a2e189c46ff47f
Parents: e6b4706
Author: Zhao Yang 
Authored: Mon May 1 00:34:58 2017 +0800
Committer: Philip Thompson 
Committed: Mon May 1 20:02:40 2017 -0400

--
 snapshot_test.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/0667de02/snapshot_test.py
--
diff --git a/snapshot_test.py b/snapshot_test.py
index 7169a7c..563af81 100644
--- a/snapshot_test.py
+++ b/snapshot_test.py
@@ -13,6 +13,7 @@ from tools.assertions import assert_one
 from tools.files import replace_in_file, safe_mkdtemp
 from tools.hacks import advance_to_next_cl_segment
 from tools.misc import ImmutableMapping
+from tools.decorators import since
 
 
 class SnapshotTester(Tester):
@@ -114,6 +115,7 @@ class TestSnapshot(SnapshotTester):
 
 self.assertEqual(rows[0][0], 100)
 
+@since('3.11')
 def test_snapshot_and_restore_dropping_a_column(self):
 """
 @jira_ticket CASSANDRA-13276


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[45/50] cassandra-dtest git commit: Expand 9673 tests to also run on 3.x

2017-07-12 Thread zznate
Expand 9673 tests to also run on 3.x


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/d2d9e6d4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/d2d9e6d4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/d2d9e6d4

Branch: refs/heads/master
Commit: d2d9e6d4ef638233b8dc403c25c2265cc40df9be
Parents: 50e1e7b
Author: Philip Thompson 
Authored: Tue Jul 4 15:27:28 2017 +0200
Committer: Philip Thompson 
Committed: Wed Jul 5 11:51:36 2017 +0200

--
 batch_test.py | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/d2d9e6d4/batch_test.py
--
diff --git a/batch_test.py b/batch_test.py
index 6dcf786..e67d185 100644
--- a/batch_test.py
+++ b/batch_test.py
@@ -285,51 +285,51 @@ class TestBatch(Tester):
 assert_one(session, "SELECT * FROM users", [0, 'Jack', 'Sparrow'])
 assert_one(session, "SELECT * FROM dogs", [0, 'Pluto'])
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 def logged_batch_compatibility_1_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have one 3.0 node and two 2.2 nodes and we send the batch 
request to the 3.0 node.
+Here we have one 3.0/3.x node and two 2.2 nodes and we send the batch 
request to the 3.0 node.
 """
 self._logged_batch_compatibility_test(0, 1, 
'github:apache/cassandra-2.2', 2, 4)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_2_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have one 3.0 node and two 2.1 nodes and we send the batch 
request to the 3.0 node.
+Here we have one 3.0/3.x node and two 2.1 nodes and we send the batch 
request to the 3.0 node.
 """
 self._logged_batch_compatibility_test(0, 1, 
'github:apache/cassandra-2.1', 2, 3)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_3_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have two 3.0 nodes and one 2.1 node and we send the batch 
request to the 3.0 node.
+Here we have two 3.0/3.x nodes and one 2.1 node and we send the batch 
request to the 3.0 node.
 """
 self._logged_batch_compatibility_test(0, 2, 
'github:apache/cassandra-2.1', 1, 3)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 def logged_batch_compatibility_4_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have two 3.0 nodes and one 2.2 node and we send the batch 
request to the 2.2 node.
+Here we have two 3.0/3.x nodes and one 2.2 node and we send the batch 
request to the 2.2 node.
 """
 self._logged_batch_compatibility_test(2, 2, 
'github:apache/cassandra-2.2', 1, 4)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_5_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have two 3.0 nodes and one 2.1 node and we send the batch 
request to the 2.1 node.
+Here we have two 3.0/3.x nodes and one 2.1 node and we send the batch 
request to the 2.1 node.
 """
 self._logged_batch_compatibility_test(2, 2, 
'github:apache/cassandra-2.1', 1, 3)
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[34/50] cassandra-dtest git commit: Test for CASSANDRA-13559

2017-07-12 Thread zznate
Test for CASSANDRA-13559


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/bbe136cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/bbe136cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/bbe136cd

Branch: refs/heads/master
Commit: bbe136cde81a1752d8922dee24391a24400a5b68
Parents: 7f3566a
Author: Stefania Alborghetti 
Authored: Thu Jun 1 11:03:54 2017 +0800
Committer: Stefania Alborghetti 
Committed: Mon Jun 5 09:25:19 2017 +0800

--
 upgrade_tests/regression_test.py | 42 ++-
 1 file changed, 41 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/bbe136cd/upgrade_tests/regression_test.py
--
diff --git a/upgrade_tests/regression_test.py b/upgrade_tests/regression_test.py
index cff3c50..613d195 100644
--- a/upgrade_tests/regression_test.py
+++ b/upgrade_tests/regression_test.py
@@ -6,7 +6,8 @@ from unittest import skipUnless
 from cassandra import ConsistencyLevel as CL
 from nose.tools import assert_not_in
 
-from dtest import RUN_STATIC_UPGRADE_MATRIX
+from dtest import RUN_STATIC_UPGRADE_MATRIX, debug
+from tools.decorators import since
 from tools.jmxutils import (JolokiaAgent, make_mbean)
 from upgrade_base import UpgradeTester
 from upgrade_manifest import build_upgrade_pairs
@@ -116,6 +117,45 @@ class TestForRegressions(UpgradeTester):
 checked = True
 self.assertTrue(checked)
 
+@since('3.0.14', max_version='3.0.99')
+def test_schema_agreement(self):
+"""
+Test that nodes agree on the schema during an upgrade in the 3.0.x 
series.
+
+Create a table before upgrading the cluster and wait for schema 
agreement.
+Upgrade one node and create one more table, wait for schema agreement 
and check
+the schema versions with nodetool describecluster.
+
+We know that schemas will not necessarily agree from 2.1/2.2 to 3.0.x 
or from 3.0.x to 3.x
+and upwards, so we only test the 3.0.x series for now. We start with 
3.0.13 because
+there is a problem in 3.0.13, see CASSANDRA-12213 and 13559.
+
+@jira_ticket CASSANDRA-13559
+"""
+session = self.prepare(nodes=5)
+session.execute("CREATE TABLE schema_agreement_test_1 ( id int PRIMARY 
KEY, value text )")
+
session.cluster.control_connection.wait_for_schema_agreement(wait_time=30)
+
+def validate_schema_agreement(n, is_upgr):
+debug("querying node {} for schema information, upgraded: 
{}".format(n.name, is_upgr))
+
+response = n.nodetool('describecluster').stdout
+debug(response)
+schemas = response.split('Schema versions:')[1].strip()
+num_schemas = len(re.findall('\[.*?\]', schemas))
+self.assertEqual(num_schemas, 1, "There were multiple schema 
versions during an upgrade: {}"
+ .format(schemas))
+
+for node in self.cluster.nodelist():
+validate_schema_agreement(node, False)
+
+for is_upgraded, session, node in self.do_upgrade(session, 
return_nodes=True):
+validate_schema_agreement(node, is_upgraded)
+if is_upgraded:
+session.execute("CREATE TABLE schema_agreement_test_2 ( id int 
PRIMARY KEY, value text )")
+
session.cluster.control_connection.wait_for_schema_agreement(wait_time=30)
+validate_schema_agreement(node, is_upgraded)
+
 def compact_sstable(self, node, sstable):
 mbean = make_mbean('db', type='CompactionManager')
 with JolokiaAgent(node) as jmx:


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-12 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084854#comment-16084854
 ] 

Ariel Weisberg commented on CASSANDRA-12617:


Thanks [~philipthompson]. I don't have commit access to the dtests.

I was thinking of doing this instead; both make the test pass, and not 
increasing the number of rows might shave a few seconds off.
{code}
diff --git a/offline_tools_test.py b/offline_tools_test.py
index c0a0010..028027d 100644
--- a/offline_tools_test.py
+++ b/offline_tools_test.py
@@ -158,7 +158,7 @@ class TestOfflineTools(Tester):
 keys = 8 * cluster.data_dir_count
 node1.stress(['write', 'n={0}K'.format(keys), 'no-warmup',
   '-schema', 'replication(factor=1)',
-  '-col', 'n=FIXED(10)', 'SIZE=FIXED(1024)',
+  '-col', 'n=FIXED(10)', 'SIZE=FIXED(1200)',
   '-rate', 'threads=8'])

 node1.flush()
{code}

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13688) Anticompaction race can leak sstables/txn

2017-07-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13688:

Status: Patch Available  (was: Open)

Branch below. This also fixes 2 edge cases I found while diagnosing this and 
tests them

1. If a promotion/demotion compaction fails, none of the sstables that have had 
their repairedAt/pendingRepair value changed prior to the failure were being 
moved to the correct strategy.
2. Removal of the compaction strategy for a given repair session doesn't check 
that the strategy is empty before removing it. If there are sstables still in 
the strategy, they will be left in compaction limbo until the node is bounced 
(or the compaction strategy is reloaded)
3. CompactionTask wasn't validating that repaired/unrepaired/pending repair 
sstables weren't being compacted together.

[trunk|https://github.com/bdeggleston/cassandra/tree/13688]
[utests|https://circleci.com/gh/bdeggleston/cassandra/71]

> Anticompaction race can leak sstables/txn
> -
>
> Key: CASSANDRA-13688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13688
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> At the top of {{CompactionManager#performAntiCompaction}}, the parent repair 
> session is loaded; if the session can't be found, a RuntimeException is 
> thrown. This can happen if a participant is evicted after the IR prepare 
> message is received, but before the anticompaction starts. This exception is 
> thrown outside of the try/finally block that guards the sstable and lifecycle 
> transaction, causing them to leak, and preventing the sstables from ever 
> being removed from View.compacting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-12617:
---
Reviewer: Philip Thompson

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13688) Anticompaction race can leak sstables/txn

2017-07-12 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084800#comment-16084800
 ] 

Blake Eggleston edited comment on CASSANDRA-13688 at 7/12/17 10:00 PM:
---

Branch below. This also fixes 2 edge cases I found while diagnosing this, adds 
some validation to CompactionTask, and tests them

1. If a promotion/demotion compaction fails, none of the sstables that have had 
their repairedAt/pendingRepair value changed prior to the failure were being 
moved to the correct strategy.
2. Removal of the compaction strategy for a given repair session doesn't check 
that the strategy is empty before removing it. If there are sstables still in 
the strategy, they will be left in compaction limbo until the node is bounced 
(or the compaction strategy is reloaded)
3. CompactionTask wasn't validating that repaired/unrepaired/pending repair 
sstables weren't being compacted together.

[trunk|https://github.com/bdeggleston/cassandra/tree/13688]
[utests|https://circleci.com/gh/bdeggleston/cassandra/71]


was (Author: bdeggleston):
Branch below. This also fixes 2 edge cases I found while diagnosing this and 
tests them

1. If a promotion/demotion compaction fails, none of the sstables that have had 
their repairedAt/pendingRepair value changed prior to the failure were being 
moved to the correct strategy.
2. Removal of the compaction strategy for a given repair session doesn't check 
that the strategy is empty before removing it. If there are sstables still in 
the strategy, they will be left in compaction limbo until the node is bounced 
(or the compaction strategy is reloaded)
3. CompactionTask wasn't validating that repaired/unrepaired/pending repair 
sstables weren't being compacted together.

[trunk|https://github.com/bdeggleston/cassandra/tree/13688]
[utests|https://circleci.com/gh/bdeggleston/cassandra/71]

> Anticompaction race can leak sstables/txn
> -
>
> Key: CASSANDRA-13688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13688
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> At the top of {{CompactionManager#performAntiCompaction}}, the parent repair 
> session is loaded; if the session can't be found, a RuntimeException is 
> thrown. This can happen if a participant is evicted after the IR prepare 
> message is received, but before the anticompaction starts. This exception is 
> thrown outside of the try/finally block that guards the sstable and lifecycle 
> transaction, causing them to leak, and preventing the sstables from ever 
> being removed from View.compacting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13688) Anticompaction race can leak sstables/txn

2017-07-12 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13688:
---

 Summary: Anticompaction race can leak sstables/txn
 Key: CASSANDRA-13688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13688
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston
 Fix For: 4.0


At the top of {{CompactionManager#performAntiCompaction}}, the parent repair 
session is loaded; if the session can't be found, a RuntimeException is thrown. 
This can happen if a participant is evicted after the IR prepare message is 
received, but before the anticompaction starts. This exception is thrown 
outside of the try/finally block that guards the sstable and lifecycle 
transaction, causing them to leak, and preventing the sstables from ever being 
removed from View.compacting.
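
For illustration only, a minimal sketch of the resource-handling shape described 
above: the session lookup has to happen inside the try/finally that guards the 
sstable refs and the lifecycle transaction, so a missing session releases them 
instead of leaking. The types and method names below (SSTableRefs, Txn, Session, 
performLeaky/performSafe) are simplified stand-ins, not Cassandra's real classes.

{code:java}
// Hedged sketch with stand-in types; not the actual CompactionManager code.
final class AnticompactionSketch
{
    interface SSTableRefs extends AutoCloseable { @Override void close(); }
    interface Txn extends AutoCloseable { @Override void close(); }
    interface Session {}

    // Leaky shape: the lookup throws before the guarded block, so refs/txn are never released.
    static void performLeaky(SSTableRefs refs, Txn txn, Session session)
    {
        if (session == null)
            throw new RuntimeException("parent repair session not found");
        try { /* anticompact */ }
        finally { refs.close(); txn.close(); }
    }

    // Safe shape: the lookup happens inside the guarded block, so resources are always released.
    static void performSafe(SSTableRefs refs, Txn txn, Session session)
    {
        try
        {
            if (session == null)
                throw new RuntimeException("parent repair session not found");
            /* anticompact */
        }
        finally
        {
            refs.close();
            txn.close();
        }
    }
}
{code}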



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-12 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13072:
---
Status: Patch Available  (was: Reopened)

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 4.0, 3.11.0, 3.0.14
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replace lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ can fix this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there possibility to upgrade jna to 4.2.0 
> in upstream? Should there be any kind of tests to execute, please kindly 
> point me. Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084633#comment-16084633
 ] 

Benjamin Lerer commented on CASSANDRA-13072:


I pushed some patches to change the JNA version to {{4.2.2}} in 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...blerer:13072-3.0],
  
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...blerer:13072-3.11]
 and [trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk]. 
I ran the patches on CI and the failing dtests are known flaky tests. 

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replace lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ can fix this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there possibility to upgrade jna to 4.2.0 
> in upstream? Should there be any kind of tests to execute, please kindly 
> point me. Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-12 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084604#comment-16084604
 ] 

Philip Thompson commented on CASSANDRA-12617:
-

Can't hurt. +1

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-12617:
---
Status: Patch Available  (was: Open)

As near as I can tell, I can't get Apache Jenkins to run my own dtest repo and 
branch like I could with Cassci.

Here is a patch to fix this.
{code}
diff --git a/offline_tools_test.py b/offline_tools_test.py
index c0a0010..6bd9d44 100644
--- a/offline_tools_test.py
+++ b/offline_tools_test.py
@@ -155,7 +155,7 @@ class TestOfflineTools(Tester):
 cluster.start(wait_for_binary_proto=True)
 # test by loading large amount data so we have multiple sstables
 # must write enough to create more than just L1 sstables
-keys = 8 * cluster.data_dir_count
+keys = 10 * cluster.data_dir_count
 node1.stress(['write', 'n={0}K'.format(keys), 'no-warmup',
   '-schema', 'replication(factor=1)',
{code}

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-12617:
---
Reviewer:   (was: Ariel Weisberg)

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-12617:
--

Assignee: Ariel Weisberg  (was: Carl Yeksigian)

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13078:
---
Fix Version/s: 4.0
   3.11.1
   3.0.15

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
> Attachments: unittest.png, unittest_time.png
>
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). 
> Overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62] 
> could speed up the run, especially on powerful servers; currently it is set 
> to 1 by default. I would like to propose setting {{test.runners}} dynamically 
> from the [number of 
> CPUs|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?
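
(As an aside, a minimal sketch of the proposed formula only; the real change 
would live in build.xml, which is not shown here, and the class name below is 
made up.)

{code:java}
// Hedged sketch of the proposed "runners = num_cores / 4" default.
public class TestRunnersSketch
{
    public static void main(String[] args)
    {
        // Never drop below one runner on small machines.
        int runners = Math.max(1, Runtime.getRuntime().availableProcessors() / 4);
        System.out.println("-Dtest.runners=" + runners);  // e.g. ant test -Dtest.runners=N
    }
}
{code}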



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-12 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084526#comment-16084526
 ] 

Ariel Weisberg commented on CASSANDRA-13652:


WaitQueue seems even more obtuse. What we need here is just a condition 
variable, so why not ReentrantLock and Condition, or synchronized with 
wait/notify?

I mean, I am fine with it; people working on Cassandra need to learn how 
WaitQueue works at some point. It just seems like a performance optimization 
to avoid locking.
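
As a purely illustrative sketch of the trade-off being discussed (the class and 
method names below are made up, and this is not Cassandra's actual code): a bare 
LockSupport permit can be consumed by any other park/unpark user on the same 
thread, for example inside a lock acquisition, which is how the manager can miss 
its wake-up; a dedicated Semaphore (or a Condition plus a state flag) cannot be 
stolen that way.

{code:java}
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.LockSupport;

// Hedged sketch, not Cassandra code: a raw-permit wake-up versus a dedicated Semaphore.
final class ManagerWakeupSketch
{
    private volatile Thread managerThread;          // thread running the manager loop

    // Fragile: the single park permit is shared with every other LockSupport user
    // on managerThread, so this unpark can be absorbed by, e.g., a lock's own parking.
    void wakeWithLockSupport()
    {
        LockSupport.unpark(managerThread);
    }

    // Safer: a permit that only the manager loop ever consumes.
    private final Semaphore wakeUp = new Semaphore(0);

    void wakeWithSemaphore()
    {
        wakeUp.release();
    }

    void managerLoopStep() throws InterruptedException
    {
        wakeUp.acquire();                            // blocks until explicitly asked for work
        // ... allocate or advance commit log segments ...
    }
}
{code}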

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the 
> appropriate place. For example, logging frameworks use queues, and queues use 
> ReadWriteLocks, which in turn use LockSupport. Therefore 
> AbstractCommitLogManager.wakeManager can wake a thread inside a Lock, and the 
> manager thread will then sleep forever at the park() call (because the unpark 
> permit was already consumed inside the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution is to use a Semaphore instead of low-level LockSupport.
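For illustration, a minimal sketch of the Semaphore-based wakeup the reporter proposes (not the actual patch; the class and method names here are invented):

{code:java}
import java.util.concurrent.Semaphore;

// Illustrative only: a permit-based wakeup. Unlike raw LockSupport, extra
// releases accumulate as permits, so a wakeup issued while the manager thread
// is momentarily blocked inside some unrelated lock is not lost.
public class SemaphoreWakeup
{
    private final Semaphore wakeups = new Semaphore(0);

    // Called by writers that need a new segment.
    public void wakeManager()
    {
        wakeups.release();
    }

    // Manager loop: allocate segments, then sleep until woken.
    public void managerLoop() throws InterruptedException
    {
        while (!Thread.currentThread().isInterrupted())
        {
            // ... allocate / recycle commit log segments here ...
            wakeups.acquire();       // blocks until wakeManager() is called
            wakeups.drainPermits();  // coalesce a burst of wakeups into one pass
        }
    }
}
{code}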



--
This message was sent by 

[jira] [Updated] (CASSANDRA-12958) Cassandra Not Starting NullPointerException at org.apache.cassandra.db.index.SecondaryIndex.createInstance

2017-07-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-12958:
--
Reviewer: Andrés de la Peña

> Cassandra Not Starting NullPointerException at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance
> --
>
> Key: CASSANDRA-12958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12958
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS
>Reporter: Ashraful Islam
>Assignee: ZhaoYang
>  Labels: Bug, not_start, secondary_index
>
> The whole process of this issue is given below:
> # Dropped a secondary index.
> # Ran repair on the cluster.
> # About 15 days after dropping the index, the configuration below was changed in 
> cassandra.yaml:
> index_summary_resize_interval_in_minutes: -1 
> (because while adding nodes it was taking a lot of time to redistribute the index)
> # Rolling-restarted all nodes.
> # While adding a fresh node, live nodes were going down.
> After two nodes were down, we stopped the node-adding process. 
> This is the error Cassandra throws in system.log while restarting the down nodes: 
> {noformat}
> INFO  [main] 2016-11-27 00:51:48,220 ColumnFamilyStore.java:382 - 
> Initializing ringid.verifiedmobile
> ERROR [main] 2016-11-27 00:51:48,236 CassandraDaemon.java:651 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance(SecondaryIndex.java:378)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.index.SecondaryIndexManager.addIndexedColumn(SecondaryIndexManager.java:279)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:407) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:354) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:535)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:511)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:342) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.(Keyspace.java:270) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:116) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:93) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:256) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:529)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:638) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13573) ColumnMetadata.cellValueType() doesn't return correct type for non-frozen collection

2017-07-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-13573:
--
Reviewer: Andrés de la Peña

> ColumnMetadata.cellValueType() doesn't return correct type for non-frozen 
> collection
> 
>
> Key: CASSANDRA-13573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13573
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Materialized Views, Tools
>Reporter: Stefano Ortolani
>Assignee: ZhaoYang
>
> Schema and data"
> {noformat}
> CREATE TABLE ks.cf (
> hash blob,
> report_id timeuuid,
> subject_ids frozen<set<int>>,
> PRIMARY KEY (hash, report_id)
> ) WITH CLUSTERING ORDER BY (report_id DESC);
> INSERT INTO ks.cf (hash, report_id, subject_ids) VALUES (0x1213, now(), 
> {1,2,4,5});
> {noformat}
> sstabledump output is:
> {noformat}
> sstabledump mc-1-big-Data.db 
> [
>   {
> "partition" : {
>   "key" : [ "1213" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 16,
> "clustering" : [ "ec01eed0-49d9-11e7-b39a-97a96f529c02" ],
> "liveness_info" : { "tstamp" : "2017-06-05T10:29:57.434856Z" },
> "cells" : [
>   { "name" : "subject_ids", "value" : "" }
> ]
>   }
> ]
>   }
> ]
> {noformat}
> While the values are really there:
> {noformat}
> cqlsh:ks> select * from cf ;
>  hash   | report_id                            | subject_ids
> --------+--------------------------------------+-------------
>  0x1213 | 02bafff0-49d9-11e7-b39a-97a96f529c02 |   {1, 2, 4}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13482) NPE on non-existing row read when row cache is enabled

2017-07-12 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084017#comment-16084017
 ] 

Alex Petrov commented on CASSANDRA-13482:
-

Thank you for the review,

Committed to 3.0 with 
[7251c9559805d83423ca5ddbe4f955ce668c3d9a|https://github.com/apache/cassandra/commit/7251c9559805d83423ca5ddbe4f955ce668c3d9a]
 and merged up to 
[3.11|https://github.com/apache/cassandra/commit/29db2511621e420b8d64c867a16e317589397d36]
 and 
[trunk|https://github.com/apache/cassandra/commit/f48a319ac884ef8d6eb54db3176ea2acf627bb89].

> NPE on non-existing row read when row cache is enabled
> --
>
> Key: CASSANDRA-13482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13482
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> The problem is reproducible on 3.0 with:
> {code}
> -# row_cache_class_name: org.apache.cassandra.cache.OHCProvider
> +row_cache_class_name: org.apache.cassandra.cache.OHCProvider
> -row_cache_size_in_mb: 0
> +row_cache_size_in_mb: 100
> {code}
> Table setup:
> {code}
> CREATE TABLE cache_tables (pk int, v1 int, v2 int, v3 int, primary key (pk, 
> v1)) WITH CACHING = { 'keys': 'ALL', 'rows_per_partition': '1' } ;
> {code}
> No data is required, only a head query (or any pk/ck query, as long as full 
> partitions are cached). 
> {code}
> select * from cross_page_queries where pk = 1 ;
> {code}
> {code}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators.concat(UnfilteredRowIterators.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.getThroughCache(SinglePartitionReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:358)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:395) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1794)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2472)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13482) NPE on non-existing row read when row cache is enabled

2017-07-12 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13482:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NPE on non-existing row read when row cache is enabled
> --
>
> Key: CASSANDRA-13482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13482
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> The problem is reproducible on 3.0 with:
> {code}
> -# row_cache_class_name: org.apache.cassandra.cache.OHCProvider
> +row_cache_class_name: org.apache.cassandra.cache.OHCProvider
> -row_cache_size_in_mb: 0
> +row_cache_size_in_mb: 100
> {code}
> Table setup:
> {code}
> CREATE TABLE cache_tables (pk int, v1 int, v2 int, v3 int, primary key (pk, 
> v1)) WITH CACHING = { 'keys': 'ALL', 'rows_per_partition': '1' } ;
> {code}
> No data is required, only a head query (or any pk/ck query, as long as full 
> partitions are cached). 
> {code}
> select * from cross_page_queries where pk = 1 ;
> {code}
> {code}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators.concat(UnfilteredRowIterators.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.getThroughCache(SinglePartitionReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:358)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:395) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1794)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2472)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/6] cassandra git commit: Make concat work with iterators that have different subsets of columns

2017-07-12 Thread ifesdjeen
Make concat work with iterators that have different subsets of columns

Patch by Alex Petrov; reviewed by Sylvain Lebresne for CASSANDRA-13482.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7251c955
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7251c955
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7251c955

Branch: refs/heads/cassandra-3.11
Commit: 7251c9559805d83423ca5ddbe4f955ce668c3d9a
Parents: 2400d07
Author: Alex Petrov 
Authored: Thu Jun 8 16:59:24 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 15:44:06 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/SinglePartitionReadCommand.java  |  30 ++-
 .../db/rows/UnfilteredRowIterators.java |   3 +-
 .../cassandra/db/transform/BaseIterator.java|  13 +-
 .../cassandra/db/transform/BasePartitions.java  |   2 +-
 .../apache/cassandra/db/transform/BaseRows.java |   4 +-
 .../apache/cassandra/db/transform/MoreRows.java |   6 +
 .../db/transform/StoppingTransformation.java|  31 ++--
 .../cassandra/db/transform/Transformation.java  |  24 +++
 .../cassandra/db/transform/UnfilteredRows.java  |  13 ++
 .../apache/cassandra/db/RowCacheCQLTest.java|  70 +++
 .../db/rows/UnfilteredRowIteratorsTest.java | 185 +++
 12 files changed, 362 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7251c955/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ce2324d..bf36769 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7251c955/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 72b4465..b4211bb 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -429,13 +429,39 @@ public class SinglePartitionReadCommand extends 
ReadCommand
 
 try
 {
-int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+final int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+
 @SuppressWarnings("resource") // we close on exception or upon 
closing the result of this method
 UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, readOp);
 try
 {
+// Use a custom iterator instead of DataLimits to avoid 
stopping the original iterator
+UnfilteredRowIterator toCacheIterator = new 
WrappingUnfilteredRowIterator(iter)
+{
+private int rowsCounted = 0;
+
+@Override
+public boolean hasNext()
+{
+return rowsCounted < rowsToCache && 
super.hasNext();
+}
+
+@Override
+public Unfiltered next()
+{
+Unfiltered unfiltered = super.next();
+if (unfiltered.isRow())
+{
+Row row = (Row) unfiltered;
+if (row.hasLiveData(nowInSec()))
+rowsCounted++;
+}
+return unfiltered;
+}
+};
+
 // We want to cache only rowsToCache rows
-CachedPartition toCache = 
CachedBTreePartition.create(DataLimits.cqlLimits(rowsToCache).filter(iter, 
nowInSec()), nowInSec());
+CachedPartition toCache = 
CachedBTreePartition.create(toCacheIterator, nowInSec());
 if (sentinelSuccess && !toCache.isEmpty())
 {
 Tracing.trace("Caching {} rows", toCache.rowCount());


[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-07-12 Thread ifesdjeen
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29db2511
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29db2511
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29db2511

Branch: refs/heads/trunk
Commit: 29db2511621e420b8d64c867a16e317589397d36
Parents: ab640b2 7251c95
Author: Alex Petrov 
Authored: Wed Jul 12 15:48:32 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 15:48:32 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/SinglePartitionReadCommand.java  |  30 ++-
 .../db/rows/UnfilteredRowIterators.java |   3 +-
 .../cassandra/db/transform/BaseIterator.java|  13 +-
 .../cassandra/db/transform/BasePartitions.java  |   2 +-
 .../apache/cassandra/db/transform/BaseRows.java |   4 +-
 .../apache/cassandra/db/transform/MoreRows.java |   6 +
 .../db/transform/StoppingTransformation.java|  31 +--
 .../cassandra/db/transform/Transformation.java  |  23 +++
 .../cassandra/db/transform/UnfilteredRows.java  |   7 +-
 .../apache/cassandra/db/RowCacheCQLTest.java|  70 +++
 .../db/rows/UnfilteredRowIteratorsTest.java | 189 +++
 12 files changed, 358 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29db2511/CHANGES.txt
--
diff --cc CHANGES.txt
index 700a0d4,bf36769..30fa350
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,11 +1,14 @@@
 +3.11.1
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
  3.0.15
+  * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 +Merged from 2.2:
* Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
* Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29db2511/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 47c426e,b4211bb..7d72212
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@@ -446,13 -429,39 +446,39 @@@ public class SinglePartitionReadComman
  
  try
  {
- int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+ final int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+ 
  @SuppressWarnings("resource") // we close on exception or 
upon closing the result of this method
 -UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, readOp);
 +UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, executionController);
  try
  {
+ // Use a custom iterator instead of DataLimits to avoid 
stopping the original iterator
+ UnfilteredRowIterator toCacheIterator = new 
WrappingUnfilteredRowIterator(iter)
+ {
+ private int rowsCounted = 0;
+ 
+ @Override
+ public boolean hasNext()
+ {
+ return rowsCounted < rowsToCache && 
super.hasNext();
+ }
+ 
+ @Override
+ public Unfiltered next()
+ {
+ Unfiltered unfiltered = super.next();
+ if (unfiltered.isRow())
+ {
+ Row row = (Row) unfiltered;
+ if (row.hasLiveData(nowInSec()))
+ rowsCounted++;
+ }
+ return unfiltered;
+ }
+

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-07-12 Thread ifesdjeen
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f48a319a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f48a319a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f48a319a

Branch: refs/heads/trunk
Commit: f48a319ac884ef8d6eb54db3176ea2acf627bb89
Parents: 86964da 29db251
Author: Alex Petrov 
Authored: Wed Jul 12 15:53:10 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 15:53:10 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/SinglePartitionReadCommand.java  |  30 ++-
 .../db/rows/UnfilteredRowIterators.java |   3 +-
 .../cassandra/db/transform/BaseIterator.java|  13 +-
 .../cassandra/db/transform/BasePartitions.java  |   2 +-
 .../apache/cassandra/db/transform/BaseRows.java |   4 +-
 .../apache/cassandra/db/transform/MoreRows.java |   6 +
 .../db/transform/StoppingTransformation.java|  31 ++--
 .../cassandra/db/transform/Transformation.java  |  24 +++
 .../cassandra/db/transform/UnfilteredRows.java  |   7 +-
 .../apache/cassandra/db/RowCacheCQLTest.java|  71 +++
 .../db/rows/UnfilteredRowIteratorsTest.java | 185 +++
 12 files changed, 356 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/CHANGES.txt
--
diff --cc CHANGES.txt
index 8625d82,30fa350..3d52874
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -98,6 -2,8 +98,7 @@@
   * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
   * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
  Merged from 3.0:
 -3.0.15
+  * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/src/java/org/apache/cassandra/db/transform/BaseRows.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/src/java/org/apache/cassandra/db/transform/MoreRows.java
--
diff --cc src/java/org/apache/cassandra/db/transform/MoreRows.java
index 786e215,118739b..f3856c4
--- a/src/java/org/apache/cassandra/db/transform/MoreRows.java
+++ b/src/java/org/apache/cassandra/db/transform/MoreRows.java
@@@ -20,6 -20,7 +20,7 @@@
   */
  package org.apache.cassandra.db.transform;
  
 -import org.apache.cassandra.db.PartitionColumns;
++import org.apache.cassandra.db.RegularAndStaticColumns;
  import org.apache.cassandra.db.rows.BaseRowIterator;
  import org.apache.cassandra.db.rows.RowIterator;
  import org.apache.cassandra.db.rows.UnfilteredRowIterator;
@@@ -47,6 -48,11 +48,11 @@@ public interface MoreRows more, PartitionColumns 
columns)
++public static UnfilteredRowIterator extend(UnfilteredRowIterator 
iterator, MoreRows more, RegularAndStaticColumns 
columns)
+ {
+ return add(Transformation.wrapIterator(iterator, columns), more);
+ }
+ 
  public static RowIterator extend(RowIterator iterator, MoreRows more)
  {
  return add(mutable(iterator), more);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/src/java/org/apache/cassandra/db/transform/Transformation.java
--
diff --cc src/java/org/apache/cassandra/db/transform/Transformation.java
index 811932c,77f91e4..41c76df
--- a/src/java/org/apache/cassandra/db/transform/Transformation.java
+++ b/src/java/org/apache/cassandra/db/transform/Transformation.java
@@@ -162,6 -162,6 +162,7 @@@ public abstract class Transformation E add(E to, Transformation add)
  {
  to.add(add);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f48a319a/src/java/org/apache/cassandra/db/transform/UnfilteredRows.java
--
diff --cc src/java/org/apache/cassandra/db/transform/UnfilteredRows.java
index a3dc96e,2dccad7..b8720fc
--- 

[1/6] cassandra git commit: Make concat work with iterators that have different subsets of columns

2017-07-12 Thread ifesdjeen
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 2400d07bf -> 7251c9559
  refs/heads/cassandra-3.11 ab640b212 -> 29db25116
  refs/heads/trunk 86964da69 -> f48a319ac


Make concat work with iterators that have different subsets of columns

Patch by Alex Petrov; reviewed by Sylvain Lebresne for CASSANDRA-13482.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7251c955
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7251c955
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7251c955

Branch: refs/heads/cassandra-3.0
Commit: 7251c9559805d83423ca5ddbe4f955ce668c3d9a
Parents: 2400d07
Author: Alex Petrov 
Authored: Thu Jun 8 16:59:24 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 15:44:06 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/SinglePartitionReadCommand.java  |  30 ++-
 .../db/rows/UnfilteredRowIterators.java |   3 +-
 .../cassandra/db/transform/BaseIterator.java|  13 +-
 .../cassandra/db/transform/BasePartitions.java  |   2 +-
 .../apache/cassandra/db/transform/BaseRows.java |   4 +-
 .../apache/cassandra/db/transform/MoreRows.java |   6 +
 .../db/transform/StoppingTransformation.java|  31 ++--
 .../cassandra/db/transform/Transformation.java  |  24 +++
 .../cassandra/db/transform/UnfilteredRows.java  |  13 ++
 .../apache/cassandra/db/RowCacheCQLTest.java|  70 +++
 .../db/rows/UnfilteredRowIteratorsTest.java | 185 +++
 12 files changed, 362 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7251c955/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ce2324d..bf36769 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7251c955/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 72b4465..b4211bb 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -429,13 +429,39 @@ public class SinglePartitionReadCommand extends 
ReadCommand
 
 try
 {
-int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+final int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+
 @SuppressWarnings("resource") // we close on exception or upon 
closing the result of this method
 UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, readOp);
 try
 {
+// Use a custom iterator instead of DataLimits to avoid 
stopping the original iterator
+UnfilteredRowIterator toCacheIterator = new 
WrappingUnfilteredRowIterator(iter)
+{
+private int rowsCounted = 0;
+
+@Override
+public boolean hasNext()
+{
+return rowsCounted < rowsToCache && 
super.hasNext();
+}
+
+@Override
+public Unfiltered next()
+{
+Unfiltered unfiltered = super.next();
+if (unfiltered.isRow())
+{
+Row row = (Row) unfiltered;
+if (row.hasLiveData(nowInSec()))
+rowsCounted++;
+}
+return unfiltered;
+}
+};
+
 // We want to cache only rowsToCache rows
-CachedPartition toCache = 
CachedBTreePartition.create(DataLimits.cqlLimits(rowsToCache).filter(iter, 
nowInSec()), nowInSec());
+CachedPartition toCache = 
CachedBTreePartition.create(toCacheIterator, nowInSec());
 if (sentinelSuccess && !toCache.isEmpty())
 

[3/6] cassandra git commit: Make concat work with iterators that have different subsets of columns

2017-07-12 Thread ifesdjeen
Make concat work with iterators that have different subsets of columns

Patch by Alex Petrov; reviewed by Sylvain Lebresne for CASSANDRA-13482.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7251c955
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7251c955
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7251c955

Branch: refs/heads/trunk
Commit: 7251c9559805d83423ca5ddbe4f955ce668c3d9a
Parents: 2400d07
Author: Alex Petrov 
Authored: Thu Jun 8 16:59:24 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 15:44:06 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/SinglePartitionReadCommand.java  |  30 ++-
 .../db/rows/UnfilteredRowIterators.java |   3 +-
 .../cassandra/db/transform/BaseIterator.java|  13 +-
 .../cassandra/db/transform/BasePartitions.java  |   2 +-
 .../apache/cassandra/db/transform/BaseRows.java |   4 +-
 .../apache/cassandra/db/transform/MoreRows.java |   6 +
 .../db/transform/StoppingTransformation.java|  31 ++--
 .../cassandra/db/transform/Transformation.java  |  24 +++
 .../cassandra/db/transform/UnfilteredRows.java  |  13 ++
 .../apache/cassandra/db/RowCacheCQLTest.java|  70 +++
 .../db/rows/UnfilteredRowIteratorsTest.java | 185 +++
 12 files changed, 362 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7251c955/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ce2324d..bf36769 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7251c955/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 72b4465..b4211bb 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -429,13 +429,39 @@ public class SinglePartitionReadCommand extends 
ReadCommand
 
 try
 {
-int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+final int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+
 @SuppressWarnings("resource") // we close on exception or upon 
closing the result of this method
 UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, readOp);
 try
 {
+// Use a custom iterator instead of DataLimits to avoid 
stopping the original iterator
+UnfilteredRowIterator toCacheIterator = new 
WrappingUnfilteredRowIterator(iter)
+{
+private int rowsCounted = 0;
+
+@Override
+public boolean hasNext()
+{
+return rowsCounted < rowsToCache && 
super.hasNext();
+}
+
+@Override
+public Unfiltered next()
+{
+Unfiltered unfiltered = super.next();
+if (unfiltered.isRow())
+{
+Row row = (Row) unfiltered;
+if (row.hasLiveData(nowInSec()))
+rowsCounted++;
+}
+return unfiltered;
+}
+};
+
 // We want to cache only rowsToCache rows
-CachedPartition toCache = 
CachedBTreePartition.create(DataLimits.cqlLimits(rowsToCache).filter(iter, 
nowInSec()), nowInSec());
+CachedPartition toCache = 
CachedBTreePartition.create(toCacheIterator, nowInSec());
 if (sentinelSuccess && !toCache.isEmpty())
 {
 Tracing.trace("Caching {} rows", toCache.rowCount());


[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-07-12 Thread ifesdjeen
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29db2511
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29db2511
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29db2511

Branch: refs/heads/cassandra-3.11
Commit: 29db2511621e420b8d64c867a16e317589397d36
Parents: ab640b2 7251c95
Author: Alex Petrov 
Authored: Wed Jul 12 15:48:32 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 15:48:32 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/SinglePartitionReadCommand.java  |  30 ++-
 .../db/rows/UnfilteredRowIterators.java |   3 +-
 .../cassandra/db/transform/BaseIterator.java|  13 +-
 .../cassandra/db/transform/BasePartitions.java  |   2 +-
 .../apache/cassandra/db/transform/BaseRows.java |   4 +-
 .../apache/cassandra/db/transform/MoreRows.java |   6 +
 .../db/transform/StoppingTransformation.java|  31 +--
 .../cassandra/db/transform/Transformation.java  |  23 +++
 .../cassandra/db/transform/UnfilteredRows.java  |   7 +-
 .../apache/cassandra/db/RowCacheCQLTest.java|  70 +++
 .../db/rows/UnfilteredRowIteratorsTest.java | 189 +++
 12 files changed, 358 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29db2511/CHANGES.txt
--
diff --cc CHANGES.txt
index 700a0d4,bf36769..30fa350
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,11 +1,14 @@@
 +3.11.1
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
  3.0.15
+  * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 +Merged from 2.2:
* Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
* Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29db2511/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 47c426e,b4211bb..7d72212
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@@ -446,13 -429,39 +446,39 @@@ public class SinglePartitionReadComman
  
  try
  {
- int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+ final int rowsToCache = 
metadata().params.caching.rowsPerPartitionToCache();
+ 
  @SuppressWarnings("resource") // we close on exception or 
upon closing the result of this method
 -UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, readOp);
 +UnfilteredRowIterator iter = 
SinglePartitionReadCommand.fullPartitionRead(metadata(), nowInSec(), 
partitionKey()).queryMemtableAndDisk(cfs, executionController);
  try
  {
+ // Use a custom iterator instead of DataLimits to avoid 
stopping the original iterator
+ UnfilteredRowIterator toCacheIterator = new 
WrappingUnfilteredRowIterator(iter)
+ {
+ private int rowsCounted = 0;
+ 
+ @Override
+ public boolean hasNext()
+ {
+ return rowsCounted < rowsToCache && 
super.hasNext();
+ }
+ 
+ @Override
+ public Unfiltered next()
+ {
+ Unfiltered unfiltered = super.next();
+ if (unfiltered.isRow())
+ {
+ Row row = (Row) unfiltered;
+ if (row.hasLiveData(nowInSec()))
+ rowsCounted++;
+ }
+ return unfiltered;
+ }

[jira] [Commented] (CASSANDRA-13148) Systemd support for RPM

2017-07-12 Thread Felix Paetow (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084005#comment-16084005
 ] 

Felix Paetow commented on CASSANDRA-13148:
--

a systemd approach

> Systemd support for RPM
> ---
>
> Key: CASSANDRA-13148
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13148
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Packaging
>Reporter: Spiros Ioannou
>
> Since CASSANDRA-12967 will provide an RPM file, it would be greatly 
> beneficial if this package included systemd startup unit configuration 
> instead of the current traditional init-based script, which misbehaves on 
> RHEL7/CentOS7.
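For illustration, a minimal unit-file sketch of what such systemd support might look like (the paths, user/group and limits below are assumptions, not part of this ticket; the exit-status handling follows the suggestion in CASSANDRA-13436):

{noformat}
[Unit]
Description=Apache Cassandra
After=network.target

[Service]
Type=forking
# Paths assumed for illustration; the actual RPM layout may differ.
ExecStart=/usr/sbin/cassandra -p /var/run/cassandra/cassandra.pid
PIDFile=/var/run/cassandra/cassandra.pid
User=cassandra
Group=cassandra
# Treat the JVM's SIGTERM exit code as a clean stop (see CASSANDRA-13436).
SuccessExitStatus=143
LimitNOFILE=100000
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
{noformat}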



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-07-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083764#comment-16083764
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 7/12/17 10:24 AM:


{{when updating base's columns which are not used in view}} is about whether 
we should treat UPDATE semantics the same as INSERT. (For now they are not the 
same: an update statement carries no primary key liveness info, so the pk's 
liveness of an update statement depends on the liveness of the normal columns.)

e.g. the current semantics/behavior:
{quote}
base table: create table ks.base (p int, c int, v int, primary key (p, c))
view table:  select p,c from ks.base ... primary key (c, p)
an update query on the base table's normal column {{v}} will not generate any rows,
while a delete query on the base table's normal column {{v}} will remove the base row.
{quote}

IMO, to avoid user confusion, it's better to keep the view matched with the base 
regardless of the update semantics. (The current patch is not general enough to 
cover all such cases yet.)
[~tjake][~slebresne] what do you think?



was (Author: jasonstack):
{{when updating base's columns which are not used in view.}} is about whether 
we should consider `Update semantic the same as Insert`. (for now it's not the 
same. update-statement has no primary key liveness info. pk's liveness of 
update-statement is depending on liveness of normal columns)

eg.   
{quote}
base table: create table ks.base (p int, c int, v int, primary key (p, c))
view table:  select p,c from ks.base ... primary key (c, p)
an update query on base table normal column {{v}} will not generate any rows..
a delete query on base table normal column {{v}} will remove the base row.
{quote}

imo, to avoid user confusion, it's better to keep view matched with base 
regardless semantic of update..(the current patch is not general enough to 
cover all such cases yet)
[~tjake][~slebresne] what do you think?


> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>Assignee: ZhaoYang
>
> Consider the following commands, ran against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+--------
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+--------
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.
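To make the reporter's point concrete, a small self-contained sketch of the arithmetic (not Cassandra code; it just replays the timeline from the commands above):

{code:java}
// Replays the timeline above: INSERT at t=0 with TTL 10, UPDATE at t=6 with TTL 8.
// Comparing TTLs prefers the insert's liveness (10 > 8), so the view row dies at
// t=10, while comparing absolute expiration times keeps it alive until t=14.
public class TtlVsExpiration
{
    public static void main(String[] args)
    {
        int insertTime = 0, insertTtl = 10;   // INSERT ... USING TTL 10
        int updateTime = 6, updateTtl = 8;    // UPDATE ... USING TTL 8, six seconds later

        int insertExpiresAt = insertTime + insertTtl;   // t = 10
        int updateExpiresAt = updateTime + updateTtl;   // t = 14

        int byTtl        = insertTtl >= updateTtl ? insertExpiresAt : updateExpiresAt;
        int byExpiration = Math.max(insertExpiresAt, updateExpiresAt);

        System.out.println("expiry if TTLs are compared:             t=" + byTtl);         // 10
        System.out.println("expiry if expiration times are compared: t=" + byExpiration);  // 14
    }
}
{code}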



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13436) Stopping Cassandra shows status "failed" due to non-zero exit status

2017-07-12 Thread Felix Paetow (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084002#comment-16084002
 ] 

Felix Paetow commented on CASSANDRA-13436:
--

[~muru] thank you for this, exactly what I was looking for. There's still a 
piece missing though. In our setup I have multiple services that have to start 
the moment the Cassandra database is up and reachable. Do you know a way to 
ensure this? Is there a parameter for the startup script that I missed?

Otherwise my services will fail to start because of the "Type=forking".

> Stopping Cassandra shows status "failed" due to non-zero exit status
> 
>
> Key: CASSANDRA-13436
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13436
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: Stefan Podkowinski
>
> Systemd will monitor the process via the pid file and save the return status 
> once it has been stopped. In case the process terminates with a status other 
> than zero, it will assume the process terminated abnormally. Stopping 
> Cassandra using the cassandra script will send a kill signal to the JVM, 
> causing it to terminate. If this happens, the JVM will exit with status 143, 
> no matter whether shutdown hooks have been executed or not. In order to make 
> systemd recognize this as a normal exit code, the following should be added 
> to the yet-to-be-created unit file:
> {noformat}
> [Service]
> ...
> SuccessExitStatus=0 143
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12958) Cassandra Not Starting NullPointerException at org.apache.cassandra.db.index.SecondaryIndex.createInstance

2017-07-12 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-12958:
-
Status: Patch Available  (was: Open)

> Cassandra Not Starting NullPointerException at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance
> --
>
> Key: CASSANDRA-12958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12958
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS
>Reporter: Ashraful Islam
>Assignee: ZhaoYang
>  Labels: Bug, not_start, secondary_index
>
> The whole process of this issue is given below:
> # Dropped a secondary index.
> # Ran repair on the cluster.
> # About 15 days after dropping the index, the configuration below was changed in 
> cassandra.yaml:
> index_summary_resize_interval_in_minutes: -1 
> (because while adding nodes it was taking a lot of time to redistribute the index)
> # Rolling-restarted all nodes.
> # While adding a fresh node, live nodes were going down.
> After two nodes were down, we stopped the node-adding process. 
> This is the error Cassandra throws in system.log while restarting the down nodes: 
> {noformat}
> INFO  [main] 2016-11-27 00:51:48,220 ColumnFamilyStore.java:382 - 
> Initializing ringid.verifiedmobile
> ERROR [main] 2016-11-27 00:51:48,236 CassandraDaemon.java:651 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance(SecondaryIndex.java:378)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.index.SecondaryIndexManager.addIndexedColumn(SecondaryIndexManager.java:279)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:407) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:354) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:535)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:511)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:342) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.(Keyspace.java:270) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:116) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:93) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:256) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:529)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:638) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-12 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083849#comment-16083849
 ] 

Branimir Lambov commented on CASSANDRA-13652:
-

The reason I prefer {{park}} is the understandability of the code. This is 
a loop that does some work and pauses when there's no need to do any, a perfect 
candidate for park/unpark.

{{Semaphore}}, although applicable, implies something else. Our {{WaitQueue}} 
is a better alternative for this kind of application.

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogSegmentManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the appropriate 
> place. For example, logging frameworks use queues, and queues use ReadWriteLocks 
> that use LockSupport. Therefore AbstractCommitLogSegmentManager.wakeManager can 
> wake the thread while it is inside a lock, and the manager thread will then sleep 
> forever at the park() call (because the unpark permit was already consumed inside 
> the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution is to use a Semaphore instead of low-level LockSupport.



--
This message was sent by Atlassian 

[jira] [Comment Edited] (CASSANDRA-12958) Cassandra Not Starting NullPointerException at org.apache.cassandra.db.index.SecondaryIndex.createInstance

2017-07-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083821#comment-16083821
 ] 

ZhaoYang edited comment on CASSANDRA-12958 at 7/12/17 11:18 AM:


| [2.2|https://github.com/jasonstack/cassandra/commits/CASSANDRA-12958] | 
[unit|https://circleci.com/gh/jasonstack/cassandra/144] | dtest: passed |


was (Author: jasonstack):
| [2.2|https://github.com/jasonstack/cassandra/commits/CASSANDRA-12958] | 
[unit|https://circleci.com/gh/jasonstack/cassandra/144] | [dtest] |

> Cassandra Not Starting NullPointerException at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance
> --
>
> Key: CASSANDRA-12958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12958
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS
>Reporter: Ashraful Islam
>Assignee: ZhaoYang
>  Labels: Bug, not_start, secondary_index
>
> The whole process of this issue is given below:
> # Dropped a secondary index.
> # Ran repair on the cluster.
> # About 15 days after dropping the index, the configuration below was changed in 
> cassandra.yaml:
> index_summary_resize_interval_in_minutes: -1 
> (because while adding nodes it was taking a lot of time to redistribute the index)
> # Rolling-restarted all nodes.
> # While adding a fresh node, live nodes were going down.
> After two nodes were down, we stopped the node-adding process. 
> This is the error Cassandra throws in system.log while restarting the down nodes: 
> {noformat}
> INFO  [main] 2016-11-27 00:51:48,220 ColumnFamilyStore.java:382 - 
> Initializing ringid.verifiedmobile
> ERROR [main] 2016-11-27 00:51:48,236 CassandraDaemon.java:651 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance(SecondaryIndex.java:378)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.index.SecondaryIndexManager.addIndexedColumn(SecondaryIndexManager.java:279)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:407) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:354) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:535)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:511)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:342) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.(Keyspace.java:270) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:116) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:93) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:256) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:529)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:638) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12958) Cassandra Not Starting NullPointerException at org.apache.cassandra.db.index.SecondaryIndex.createInstance

2017-07-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083821#comment-16083821
 ] 

ZhaoYang commented on CASSANDRA-12958:
--

| [2.2|https://github.com/jasonstack/cassandra/commits/CASSANDRA-12958] | 
[unit|https://circleci.com/gh/jasonstack/cassandra/144] | [dtest] |

> Cassandra Not Starting NullPointerException at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance
> --
>
> Key: CASSANDRA-12958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12958
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS
>Reporter: Ashraful Islam
>Assignee: ZhaoYang
>  Labels: Bug, not_start, secondary_index
>
> The whole process of this issue is given below:
> # Dropped the secondary index.
> # Ran repair on the cluster.
> # About 15 days after dropping the index, the following configuration was changed in 
> cassandra.yaml:
> index_summary_resize_interval_in_minutes: -1 
> (because, while adding nodes, it was taking a lot of time to redistribute the index summaries)
> # Rolling restarted all nodes.
> # While adding a fresh node, live nodes were going down.
> After two nodes went down, we stopped the node-adding process. 
> This is the error Cassandra throws in system.log while restarting the down nodes: 
> {noformat}
> INFO  [main] 2016-11-27 00:51:48,220 ColumnFamilyStore.java:382 - 
> Initializing ringid.verifiedmobile
> ERROR [main] 2016-11-27 00:51:48,236 CassandraDaemon.java:651 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.db.index.SecondaryIndex.createInstance(SecondaryIndex.java:378)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.index.SecondaryIndexManager.addIndexedColumn(SecondaryIndexManager.java:279)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:407) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:354) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:535)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:511)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:342) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.(Keyspace.java:270) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:116) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:93) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:256) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:529)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:638) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-07-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083764#comment-16083764
 ] 

ZhaoYang commented on CASSANDRA-13127:
--

{{when updating base's columns which are not used in view}} is really about 
whether we should treat the UPDATE semantics the same as INSERT. For now they 
are not the same: an update statement carries no primary key liveness info, so 
the primary key liveness of an update depends on the liveness of its regular 
columns.

e.g.
{quote}
base table: create table ks.base (p int, c int, v int, primary key (p, c))
view table: select p, c from ks.base ... primary key (c, p)
an update of the base table's regular column {{v}} will not generate any rows..
a delete of the base table's regular column {{v}} will remove the base row.
{quote}

IMO, to avoid user confusion, it's better to keep the view matched with the 
base regardless of the UPDATE semantics (the current patch is not yet general 
enough to cover all such cases).
[~tjake] [~slebresne] what do you think?
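
As a toy illustration of the liveness point above (plain Java, not Cassandra's 
real row classes): a row created by an INSERT carries its own primary key 
liveness, while a row written only by an UPDATE stays alive only as long as 
some regular cell does, which is why deleting {{v}} removes the UPDATE-created 
base row.

{code:java}
// Toy model of row liveness, not Cassandra's real storage classes.
final class ToyRow
{
    final boolean primaryKeyLiveness;   // written by an INSERT, absent for a plain UPDATE
    int liveRegularCells;               // regular-column cells (e.g. v) that are still live

    ToyRow(boolean primaryKeyLiveness, int liveRegularCells)
    {
        this.primaryKeyLiveness = primaryKeyLiveness;
        this.liveRegularCells = liveRegularCells;
    }

    boolean isLive()
    {
        // An INSERT-created row survives the death of its regular cells;
        // an UPDATE-created row disappears once the last regular cell is gone.
        return primaryKeyLiveness || liveRegularCells > 0;
    }
}
{code}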


> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>Assignee: ZhaoYang
>
> Consider the following commands, ran against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.
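
To make the hypothesis in the description concrete, here is a minimal 
arithmetic sketch (plain Java with assumed timestamps, not the actual 
{{ViewUpdateGenerator}} code): picking liveness by the larger TTL favours the 
older row marker, while picking by the later expiration time favours the cell 
that actually outlives it.

{code:java}
public class TtlVsExpiration
{
    public static void main(String[] args)
    {
        // Toy numbers from the example above; not the actual ViewUpdateGenerator logic.
        int insertTime = 0, insertTtl = 10, insertExpires = insertTime + insertTtl;  // row marker dies at t=10
        int updateTime = 6, updateTtl = 8,  updateExpires = updateTime + updateTtl;  // cell v dies at t=14

        // Picking the liveness with the larger TTL keeps the original row marker (TTL 10),
        // so the view row is gone at t=10 even though the base row stays live until t=14.
        System.out.println(Math.max(insertTtl, updateTtl));          // 10
        // Picking the later expiration time keeps the view row alive exactly as long as the base row.
        System.out.println(Math.max(insertExpires, updateExpires));  // 14
    }
}
{code}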



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-07-12 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13512:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments 
> -
>
> Key: CASSANDRA-13512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> SASI full-text search queries using the standard analyzer do not work in 
> multi-node environments. The standard analyzer will rewind the buffer, so the 
> search term will be empty on any node other than the coordinator and the query 
> will return no results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-07-12 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083740#comment-16083740
 ] 

Alex Petrov commented on CASSANDRA-13512:
-

Thank you for the review & sorry for taking so long to get back. I've made the 
suggested changes to 
[dtest|https://github.com/riptano/cassandra-dtest/pull/1492].

Committed to 3.11 with 
[ab640b2123826fd67d31860a9f0ca8a4224e3845|https://github.com/apache/cassandra/commit/ab640b2123826fd67d31860a9f0ca8a4224e3845]
 and merged up to 
[trunk|https://github.com/apache/cassandra/commit/86964da69d36bdf3afd65ed77ecccd9dff24757b].
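
The committed change passes {{e.getIndexValue().duplicate()}} to the analyzer. 
A JDK-only sketch of why that matters (illustrative code, not Cassandra's): 
consuming a shared {{ByteBuffer}} advances its position to the limit, so any 
later reader sees an empty term, while {{duplicate()}} gives the consumer an 
independent position over the same bytes.

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DuplicateDemo
{
    // Stands in for an analyzer consuming the serialized search term.
    static void consume(ByteBuffer b)
    {
        while (b.hasRemaining())
            b.get();
    }

    public static void main(String[] args)
    {
        ByteBuffer term = ByteBuffer.wrap("hello world".getBytes(StandardCharsets.UTF_8));

        consume(term.duplicate());              // independent position/limit over the same bytes
        System.out.println(term.remaining());   // 11 -- the shared buffer is untouched

        consume(term);                          // consuming the shared buffer directly...
        System.out.println(term.remaining());   // 0  -- ...leaves an "empty" term for the next reader
    }
}
{code}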

> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments 
> -
>
> Key: CASSANDRA-13512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> SASI full-text search queries using the standard analyzer do not work in 
> multi-node environments. The standard analyzer will rewind the buffer, so the 
> search term will be empty on any node other than the coordinator and the query 
> will return no results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-07-12 Thread ifesdjeen
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86964da6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86964da6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86964da6

Branch: refs/heads/trunk
Commit: 86964da69d36bdf3afd65ed77ecccd9dff24757b
Parents: 19914dc ab640b2
Author: Alex Petrov 
Authored: Wed Jul 12 12:02:44 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 12:02:44 2017 +0200

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/index/sasi/plan/Operation.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86964da6/CHANGES.txt
--
diff --cc CHANGES.txt
index 1b9d246,700a0d4..8625d82
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,102 -1,8 +1,103 @@@
 +4.0
 + * Use common nowInSec for validation compactions (CASSANDRA-13671)
 + * Improve handling of IR prepare failures (CASSANDRA-13672)
 + * Send IR coordinator messages synchronously (CASSANDRA-13673)
 + * Flush system.repair table before IR finalize promise (CASSANDRA-13660)
 + * Fix column filter creation for wildcard queries (CASSANDRA-13650)
 + * Add 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (CASSANDRA-13614)
 + * fix race condition in PendingRepairManager (CASSANDRA-13659)
 + * Allow noop incremental repair state transitions (CASSANDRA-13658)
 + * Run repair with down replicas (CASSANDRA-10446)
 + * Added started & completed repair metrics (CASSANDRA-13598)
 + * Added started & completed repair metrics (CASSANDRA-13598)
 + * Improve secondary index (re)build failure and concurrency handling 
(CASSANDRA-10130)
 + * Improve calculation of available disk space for compaction 
(CASSANDRA-13068)
 + * Change the accessibility of RowCacheSerializer for third party row cache 
plugins (CASSANDRA-13579)
 + * Allow sub-range repairs for a preview of repaired data (CASSANDRA-13570)
 + * NPE in IR cleanup when columnfamily has no sstables (CASSANDRA-13585)
 + * Fix Randomness of stress values (CASSANDRA-12744)
 + * Allow selecting Map values and Set elements (CASSANDRA-7396)
 + * Fast and garbage-free Streaming Histogram (CASSANDRA-13444)
 + * Update repairTime for keyspaces on completion (CASSANDRA-13539)
 + * Add configurable upper bound for validation executor threads 
(CASSANDRA-13521)
 + * Bring back maxHintTTL propery (CASSANDRA-12982)
 + * Add testing guidelines (CASSANDRA-13497)
 + * Add more repair metrics (CASSANDRA-13531)
 + * RangeStreamer should be smarter when picking endpoints for streaming 
(CASSANDRA-4650)
 + * Avoid rewrapping an exception thrown for cache load functions 
(CASSANDRA-13367)
 + * Log time elapsed for each incremental repair phase (CASSANDRA-13498)
 + * Add multiple table operation support to cassandra-stress (CASSANDRA-8780)
 + * Fix incorrect cqlsh results when selecting same columns multiple times 
(CASSANDRA-13262)
 + * Fix WriteResponseHandlerTest is sensitive to test execution order 
(CASSANDRA-13421)
 + * Improve incremental repair logging (CASSANDRA-13468)
 + * Start compaction when incremental repair finishes (CASSANDRA-13454)
 + * Add repair streaming preview (CASSANDRA-13257)
 + * Cleanup isIncremental/repairedAt usage (CASSANDRA-13430)
 + * Change protocol to allow sending key space independent of query string 
(CASSANDRA-10145)
 + * Make gc_log and gc_warn settable at runtime (CASSANDRA-12661)
 + * Take number of files in L0 in account when estimating remaining compaction 
tasks (CASSANDRA-13354)
 + * Skip building views during base table streams on range movements 
(CASSANDRA-13065)
 + * Improve error messages for +/- operations on maps and tuples 
(CASSANDRA-13197)
 + * Remove deprecated repair JMX APIs (CASSANDRA-11530)
 + * Fix version check to enable streaming keep-alive (CASSANDRA-12929)
 + * Make it possible to monitor an ideal consistency level separate from 
actual consistency level (CASSANDRA-13289)
 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324)
 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360)
 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359)
 + * Upgrade snappy-java to 1.1.2.6 (CASSANDRA-13336)
 + * Incremental repair not streaming correct sstables (CASSANDRA-13328)
 + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300)
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers 
(CASSANDRA-13271)
 + * Make it possible to 

[1/3] cassandra git commit: Duplicate the buffer before passing it to analyser in SASI operation

2017-07-12 Thread ifesdjeen
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 e406700cf -> ab640b212
  refs/heads/trunk 19914dc1d -> 86964da69


Duplicate the buffer before passing it to analyser in SASI operation 

Patch by Alex Petrov; reviewed by Andrés de la Peña for CASSANDRA-13512

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab640b21
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab640b21
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab640b21

Branch: refs/heads/cassandra-3.11
Commit: ab640b2123826fd67d31860a9f0ca8a4224e3845
Parents: e406700
Author: Alex Petrov 
Authored: Tue May 9 14:02:48 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 12:02:11 2017 +0200

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/index/sasi/plan/Operation.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab640b21/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dc22831..700a0d4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.1
+ * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
  * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 Merged from 3.0:
 3.0.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab640b21/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/plan/Operation.java 
b/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
index 7c744e1..aaa3068 100644
--- a/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
+++ b/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
@@ -285,7 +285,7 @@ public class Operation extends RangeIterator
 columnIndex = new ColumnIndex(controller.getKeyValidator(), 
e.column(), null);
 
 AbstractAnalyzer analyzer = columnIndex.getAnalyzer();
-analyzer.reset(e.getIndexValue());
+analyzer.reset(e.getIndexValue().duplicate());
 
 // EQ/LIKE_*/NOT_EQ can have multiple expressions e.g. text = 
"Hello World",
 // becomes text = "Hello" OR text = "World" because "space" is 
always interpreted as a split point (by analyzer),


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/3] cassandra git commit: Duplicate the buffer before passing it to analyser in SASI operation

2017-07-12 Thread ifesdjeen
Duplicate the buffer before passing it to analyser in SASI operation 

Patch by Alex Petrov; reviewed by Andrés de la Peña for CASSANDRA-13512

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab640b21
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab640b21
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab640b21

Branch: refs/heads/trunk
Commit: ab640b2123826fd67d31860a9f0ca8a4224e3845
Parents: e406700
Author: Alex Petrov 
Authored: Tue May 9 14:02:48 2017 +0200
Committer: Alex Petrov 
Committed: Wed Jul 12 12:02:11 2017 +0200

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/index/sasi/plan/Operation.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab640b21/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dc22831..700a0d4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.1
+ * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
  * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 Merged from 3.0:
 3.0.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab640b21/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/plan/Operation.java 
b/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
index 7c744e1..aaa3068 100644
--- a/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
+++ b/src/java/org/apache/cassandra/index/sasi/plan/Operation.java
@@ -285,7 +285,7 @@ public class Operation extends RangeIterator
 columnIndex = new ColumnIndex(controller.getKeyValidator(), 
e.column(), null);
 
 AbstractAnalyzer analyzer = columnIndex.getAnalyzer();
-analyzer.reset(e.getIndexValue());
+analyzer.reset(e.getIndexValue().duplicate());
 
 // EQ/LIKE_*/NOT_EQ can have multiple expressions e.g. text = 
"Hello World",
 // becomes text = "Hello" OR text = "World" because "space" is 
always interpreted as a split point (by analyzer),


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12173) Materialized View may turn on TRACING

2017-07-12 Thread Hiroshi Usami (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083715#comment-16083715
 ] 

Hiroshi Usami commented on CASSANDRA-12173:
---

Kurt-san,

No, I can't run {{nodetool settraceprobability}} on that environment now, where 
more than 200M rows have been loaded.
We observed many Cassandra::Errors::WriteTimeoutError a while after starting the 
periodic repair, and about 5 hours later some nodes went down.

{code:java}
E, [2016-07-08T07:22:37.367314 #30598] ERROR -- : Operation timed out - 
received only 0 responses. (Cassandra::Errors::WriteTimeoutError)
...
E, [2016-07-08T12:08:47.342991 #9503] ERROR -- : Cannot achieve 
consistency level ONE (Cassandra::Errors::UnavailableError)
{code}

I believe we never manually set the trace probability above 0.0, and repair 
should not corrupt system_traces, but only two nodes had abnormal numbers of 
SSTables, so the repair might have caused it.

> Materialized View may turn on TRACING
> -
>
> Key: CASSANDRA-12173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12173
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Hiroshi Usami
>
> We observed this in our test cluster (C* 3.0.6), but TRACING was apparently OFF.
> After creating the materialized view, the write count jumped up from 5K to 20K, 
> and the ViewWrite count rose up to 10K.
> The extra writes are expected from the MV, but some nodes, which had 14,000+ 
> SSTables in the system_traces directory, went down within half a day because 
> they ran out of file descriptors.
> {code}
> Counting by: find /var/lib/cassandra/data/system_traces/ -name "*-Data.db"|wc 
> -l
>   node01: 0
>   node02: 3
>   node03: 1
>   node04: 0
>   node05: 0
>   node06: 0
>   node07: 2
>   node08: 0
>   node09: 0
>   node10: 0
>   node11: 2
>   node12: 2
>   node13: 1
>   node14: 7
>   node15: 1
>   node16: 5
>   node17: 0
>   node18: 0
>   node19: 0
>   node20: 0
>   node21: 1
>   node22: 0
>   node23: 2
>   node24: 14420
>   node25: 0
>   node26: 2
>   node27: 0
>   node28: 1
>   node29: 1
>   node30: 2
>   node31: 1
>   node32: 0
>   node33: 0
>   node34: 0
>   node35: 14371
>   node36: 0
>   node37: 1
>   node38: 0
>   node39: 0
>   node40: 1
> {code}
> In node24, the sstabledump of the oldest SSTable in system_traces/events 
> directory starts with:
> {code}
> [
>   {
> "partition" : {
>   "key" : [ "e07851d0-4421-11e6-abd7-59d7f275ba79" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 30,
> "clustering" : [ "e07878e0-4421-11e6-abd7-59d7f275ba79" ],
> "liveness_info" : { "tstamp" : "2016-07-07T09:04:57.197Z", "ttl" : 
> 86400, "expires_at" : "2016-07-08T09:04:57Z", "expired" : true },
> "cells" : [
>   { "name" : "activity", "value" : "Parsing CREATE MATERIALIZED VIEW
> ...
> {code}
> So this could be the point where TRACING was implicitly turned ON. On node35, the 
> oldest SSTable also starts with "Parsing CREATE MATERIALIZED VIEW".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-12 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083702#comment-16083702
 ] 

Branimir Lambov commented on CASSANDRA-13652:
-

bq. Yes. But such other usages will eat the permit and the manager thread will be 
blocked in our code.

This is only relevant if the eating happens before we check that we need to park 
and actually call {{LockSupport.park()}}. This is indeed possible for the line 
130 call, which is why it should be removed and control allowed to continue to 
the checked park on line 146.

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogSegmentManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the appropriate 
> place. For example, logging frameworks use queues, and queues use ReadWriteLocks, 
> which use LockSupport. Therefore AbstractCommitLogSegmentManager.wakeManager can 
> wake the thread while it is inside a lock, and the manager thread will then sleep 
> forever at the park() call (because the unpark permit was already consumed inside 
> the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Solution is to use Semaphore instead of low-level 

[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-12 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083705#comment-16083705
 ] 

Ariel Weisberg commented on CASSANDRA-13652:


Most usages of LockSupport don't introduce (as many) spurious unparks because 
they check to see if the thread is blocked on the specific condition, not just 
any condition. I want to use a Semaphore so we don't introduce spurious unparks 
and trigger bugs in other usages of LockSupport.park. I don't see a huge 
advantage to working with LockSupport directly.

Yes, either way, blocking without checking the condition at line 130 has to go.
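
A JDK-only sketch of the Semaphore approach being discussed (illustrative names, 
not the actual {{AbstractCommitLogSegmentManager}} code): permits released by 
{{wakeManager()}} can only be consumed by the manager's own {{acquire()}}, so 
they cannot be eaten by an unrelated {{LockSupport.park()}} inside a lock or a 
logging framework.

{code:java}
import java.util.concurrent.Semaphore;

// Illustrative names only; this is not the AbstractCommitLogSegmentManager code.
class ManagerWakeup
{
    private final Semaphore wakeup = new Semaphore(0);

    // Writers call this when they need the manager to produce a new segment.
    void wakeManager()
    {
        wakeup.release();            // the permit is remembered until the manager asks for it,
    }                                // so it cannot be consumed by an unrelated LockSupport.park()

    // The manager thread calls this when it has nothing to do.
    void awaitWork() throws InterruptedException
    {
        wakeup.acquire();            // returns once wakeManager() has been called, past or future
        wakeup.drainPermits();       // coalesce a burst of wake-ups into a single pass
    }
}
{code}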

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogSegmentManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the appropriate 
> place. For example, logging frameworks use queues, and queues use ReadWriteLocks, 
> which use LockSupport. Therefore AbstractCommitLogSegmentManager.wakeManager can 
> wake the thread while it is inside a lock, and the manager thread will then sleep 
> forever at the park() call (because the unpark permit was already consumed inside 
> the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)

[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-07-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083687#comment-16083687
 ] 

Andrés de la Peña commented on CASSANDRA-10271:
---

Here are new versions of the patch addressing the comments:

||[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...adelapena:10271-3.11]||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10271-trunk]||

The validation is done with {{StatementRestrictions#isColumnRestrictedByEq}}, 
exactly as suggested by [~blerer]. I have also overridden 
[{{MultiColumnRestriction.EQRestriction#isEQ}}|https://github.com/adelapena/cassandra/blob/889b95d7a7e9a0152d34cba87fcd2fe42c5c3414/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java#L139-L143]
 to accept queries like these:
{code}
SELECT * FROM foo WHERE a = 0 AND (b, c) = (0, 0) ORDER BY c;
SELECT * FROM foo WHERE a = 0 AND (b, c) IN ((0, 0)) ORDER BY c;
{code}

The unit test is adapted to the desired behaviour and it uses 
{{assertInvalidMessage}} instead of {{assertInvalid}}.

I ran the patch on our internal CI. There are no unit test failures, 
and the failing dtests seem unrelated to the change.
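
A rough sketch of the validation idea (toy Java, not the actual 
{{StatementRestrictions}} patch): a clustering column may be skipped in 
{{ORDER BY}} only when it is restricted by an equality, and the remaining 
{{ORDER BY}} columns must still follow the declared clustering order.

{code:java}
import java.util.List;
import java.util.Set;

// Toy sketch of the validation idea; names and types are illustrative only.
final class OrderByValidation
{
    /**
     * clusteringColumns: the table's clustering columns, in declared order.
     * orderByColumns:    the columns named in ORDER BY, in query order.
     * eqRestricted:      columns restricted by '=' (or an equivalent single-value IN / multi-column EQ).
     */
    static boolean orderByIsValid(List<String> clusteringColumns,
                                  List<String> orderByColumns,
                                  Set<String> eqRestricted)
    {
        int next = 0;
        for (String column : clusteringColumns)
        {
            if (next < orderByColumns.size() && orderByColumns.get(next).equals(column))
                next++;                           // ordered explicitly, in declared order
            else if (!eqRestricted.contains(column))
                return false;                     // skipped without an equality restriction
        }
        return next == orderByColumns.size();     // nothing left over or out of order
    }
}
{code}

With the table from the ticket, {{orderByIsValid(List.of("b", "c"), List.of("c"), Set.of("b"))}} 
passes, while the same query without the {{b = 0}} restriction does not.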

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-07-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-10271:
--
Fix Version/s: 4.x
   Status: Patch Available  (was: In Progress)

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 3.11.x, 4.x
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13687) Abnormal heap growth and CPU usage during repair.

2017-07-12 Thread Stanislav Vishnevskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Vishnevskiy updated CASSANDRA-13687:
--
Summary: Abnormal heap growth and CPU usage during repair.  (was: Abnormal 
heap growth and long GC during repair.)

> Abnormal heap growth and CPU usage during repair.
> -
>
> Key: CASSANDRA-13687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13687
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
> Attachments: 3.0.14cpu.png, 3.0.14heap.png, 3.0.14.png, 
> 3.0.9heap.png, 3.0.9.png
>
>
> We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004.
> Sadly, 3 out of the last 7 nights we have had to wake up due to Cassandra dying 
> on us. We currently don't have any data to help reproduce this, but maybe 
> since there aren't many commits between the 2 versions it might be obvious.
> Basically we trigger a parallel incremental repair from a single node every 
> night at 1AM. That node will sometimes start allocating a lot and keeping the 
> heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
> effectively destroys the whole cluster due to timeouts to this node.
> The only solution we currently have is to drain the node and restart the 
> repair, it has worked fine the second time every time.
> I attached heap charts from 3.0.9 and 3.0.14 during repair.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13687) Abnormal heap growth and long GC during repair.

2017-07-12 Thread Stanislav Vishnevskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083666#comment-16083666
 ] 

Stanislav Vishnevskiy edited comment on CASSANDRA-13687 at 7/12/17 8:58 AM:


We just had this happen again. I am attaching screenshots of a similar time 
range, again from before and after.

As you can see in the [^3.0.14heap.png] image, at 1PM the heap spikes to 6GB, 
and we have to take down the node because it makes the cluster start failing. We 
then change MAX_HEAP_SIZE to 12GB, bring the node up again and repair. This time 
the heap spikes to 8GB and sticks there through the whole repair. It then drops 
down to 600MB without a huge CMS, almost as if it were one big object. The node 
calling repair (1-1) is the only one with the heap growth. If you look at 
[^3.0.9heap.png], this did not happen during repair before, and all nodes looked 
similar.

Another interesting thing is CPU usage, as seen in [^3.0.14cpu.png]. The node 
performing the nodetool repair (in blue) is using far more CPU than the other 
nodes in the cluster. We compared this a week ago on 3.0.9 and this was not the 
case either.

This feels like a bug in repair?



was (Author: stanislav):
We just had this happen again. I am attaching screenshots of similar time range 
again from before and after.

As you can see in this [^3.0.14heap.png] image at 1PM the heap spikes to 6GB, 
then we have to take down the node cause it makes the cluster start failing. We 
then proceed to change MAX_HEAP_SIZE to 12GB and bring it up again and repair. 
This time it spikes to 8GB and sticks there though the whole repair. It then 
drops down to 600MB without a huge CMS almost like it was 1 big object. The 
node calling repair (1-1) is the only one with the heap growth. If you look at 
[^3.0.9heap.png] this used to not occur during repair and all nodes looked 
similar.

Another interesting thing is CPU usage as seen in [^3.0.14cpu.png]. The node 
performing the node tool repair (in blue) is using way more CPU than the other 
node in the cluster. We compared this a week ago with 3.0.9 and this was also 
not true.

This feels like a bug in repair?


> Abnormal heap growth and long GC during repair.
> ---
>
> Key: CASSANDRA-13687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13687
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
> Attachments: 3.0.14cpu.png, 3.0.14heap.png, 3.0.14.png, 
> 3.0.9heap.png, 3.0.9.png
>
>
> We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004.
> Sadly, 3 out of the last 7 nights we have had to wake up due to Cassandra dying 
> on us. We currently don't have any data to help reproduce this, but maybe 
> since there aren't many commits between the 2 versions it might be obvious.
> Basically we trigger a parallel incremental repair from a single node every 
> night at 1AM. That node will sometimes start allocating a lot and keeping the 
> heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
> effectively destroys the whole cluster due to timeouts to this node.
> The only solution we currently have is to drain the node and restart the 
> repair, it has worked fine the second time every time.
> I attached heap charts from 3.0.9 and 3.0.14 during repair.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13687) Abnormal heap growth and long GC during repair.

2017-07-12 Thread Stanislav Vishnevskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Vishnevskiy updated CASSANDRA-13687:
--
Attachment: 3.0.9heap.png
3.0.14heap.png
3.0.14cpu.png

We just had this happen again. I am attaching screenshots of a similar time 
range, again from before and after.

As you can see in the [^3.0.14heap.png] image, at 1PM the heap spikes to 6GB, 
and we have to take down the node because it makes the cluster start failing. We 
then change MAX_HEAP_SIZE to 12GB, bring the node up again and repair. This time 
the heap spikes to 8GB and sticks there through the whole repair. It then drops 
down to 600MB without a huge CMS, almost as if it were one big object. The node 
calling repair (1-1) is the only one with the heap growth. If you look at 
[^3.0.9heap.png], this did not happen during repair before, and all nodes looked 
similar.

Another interesting thing is CPU usage, as seen in [^3.0.14cpu.png]. The node 
performing the nodetool repair (in blue) is using far more CPU than the other 
nodes in the cluster. We compared this a week ago on 3.0.9 and this was not the 
case either.

This feels like a bug in repair?


> Abnormal heap growth and long GC during repair.
> ---
>
> Key: CASSANDRA-13687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13687
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
> Attachments: 3.0.14cpu.png, 3.0.14heap.png, 3.0.14.png, 
> 3.0.9heap.png, 3.0.9.png
>
>
> We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004.
> Sadly, 3 out of the last 7 nights we have had to wake up due to Cassandra dying 
> on us. We currently don't have any data to help reproduce this, but maybe 
> since there aren't many commits between the 2 versions it might be obvious.
> Basically we trigger a parallel incremental repair from a single node every 
> night at 1AM. That node will sometimes start allocating a lot and keeping the 
> heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
> effectively destroys the whole cluster due to timeouts to this node.
> The only solution we currently have is to drain the node and restart the 
> repair, it has worked fine the second time every time.
> I attached heap charts from 3.0.9 and 3.0.14 during repair.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org


