[jira] [Resolved] (CASSANDRA-14284) Chunk checksum test needs to occur before uncompress to avoid JVM crash

2018-04-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-14284.

   Resolution: Fixed
Fix Version/s: 3.11.3
               3.0.17
               2.2.13
               2.1.21
               4.0

Committed into 2.1 at 34a1d5da58fb8edcad39633084541bb4162f5ede and merged into 
2.2, 3.0, 3.11 and trunk.

> Chunk checksum test needs to occur before uncompress to avoid JVM crash
> ---
>
> Key: CASSANDRA-14284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14284
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: The check-only-after-doing-the-decompress logic appears 
> to be in all current releases.
> Here are some samples at different evolution points :
> 3.11.2:
> [https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L146]
> https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L207
>  
> 3.5:
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135]
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196]
> 2.1.17:
>  
> [https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122]
>  
>Reporter: Gil Tene
>Assignee: Benjamin Lerer
>Priority: Major
> Fix For: 4.0, 2.1.21, 2.2.13, 3.0.17, 3.11.3
>
>
> While checksums are (generally) performed on compressed data, the checksum 
> test when reading is currently (in all variants of C* 2.x, 3.x I've looked 
> at) done [on the compressed data] only after the uncompress operation has 
> completed. 
> The issue here is that LZ4_decompress_fast (as documented in e.g. 
> [https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214]) can result in memory 
> overruns when provided with malformed source data. This in turn can (and 
> does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of 
> corrupted chunks. The checksum operation would obviously detect the issue, 
> but we'd never get to it if the JVM crashes first.
> Moving the checksum test of the compressed data to before the uncompress 
> operation (in cases where the checksum is done on compressed data) will 
> resolve this issue.
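The ordering fix the ticket describes can be illustrated with a small, self-contained sketch. This is not Cassandra's code (which does this in Java inside the chunk readers linked above, with LZ4 and a per-chunk checksum); it is a hypothetical Python analogue using zlib for both the checksum and the compression:

```python
import zlib

def read_chunk_unsafe(compressed, stored_checksum):
    # Old order: decompress first, verify afterwards. Malformed input can
    # make the decompressor overrun memory (a JVM crash in Cassandra's case)
    # before the checksum ever gets a chance to reject the chunk.
    data = zlib.decompress(compressed)
    if zlib.crc32(compressed) != stored_checksum:
        raise IOError("corrupt chunk")
    return data

def read_chunk_safe(compressed, stored_checksum):
    # Fixed order: verify the checksum of the *compressed* bytes first, so a
    # corrupted chunk is rejected before decompression is ever attempted.
    if zlib.crc32(compressed) != stored_checksum:
        raise IOError("corrupt chunk")
    return zlib.decompress(compressed)

chunk = zlib.compress(b"some sstable chunk")
checksum = zlib.crc32(chunk)
assert read_chunk_safe(chunk, checksum) == b"some sstable chunk"
```

Both functions end up validating the same bytes; only the order differs, which is exactly the point of the patch.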



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/2] cassandra-dtest git commit: increase ttl to make sure self.update_view does not take longer than the ttl

2018-04-10 Thread marcuse
increase ttl to make sure self.update_view does not take longer than the ttl

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-14148


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/3a4b5d98
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/3a4b5d98
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/3a4b5d98

Branch: refs/heads/master
Commit: 3a4b5d98e60f0087508df26dd75ab24c032c7760
Parents: af2e55e
Author: Marcus Eriksson 
Authored: Thu Jan 18 16:27:08 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 14:10:11 2018 +0200

--
 materialized_views_test.py | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/3a4b5d98/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 3836ef7..eaae8dd 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -1414,19 +1414,19 @@ class TestMaterializedViews(Tester):
         assert_one(session, "SELECT * FROM t", [1, 1, 1, None, None, None])
         assert_one(session, "SELECT * FROM mv", [1, 1, 1, None])
 
-        # add selected with ttl=5
-        self.update_view(session, "UPDATE t USING TTL 10 SET a=1 WHERE k=1 AND c=1;", flush)
+        # add selected with ttl=20 (we apparently need a long ttl because the flushing etc that self.update_view does can take a long time)
+        self.update_view(session, "UPDATE t USING TTL 20 SET a=1 WHERE k=1 AND c=1;", flush)
         assert_one(session, "SELECT * FROM t", [1, 1, 1, None, None, None])
         assert_one(session, "SELECT * FROM mv", [1, 1, 1, None])
 
-        time.sleep(10)
+        time.sleep(20)
 
         # update unselected with ttl=10, view row should be alive
-        self.update_view(session, "UPDATE t USING TTL 10 SET f=1 WHERE k=1 AND c=1;", flush)
+        self.update_view(session, "UPDATE t USING TTL 20 SET f=1 WHERE k=1 AND c=1;", flush)
         assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
         assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
 
-        time.sleep(10)
+        time.sleep(20)
 
         # view row still alive due to base livenessInfo
         assert_none(session, "SELECT * FROM t")
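The race this patch works around is purely one of timing: self.update_view flushes (and possibly compacts), and if that takes longer than the row's TTL, the row has already expired by the time the first assertion runs. A trivial, hypothetical sketch of the invariant the larger TTL restores (function and parameter names are illustrative, not from the dtest):

```python
def row_still_live(ttl_seconds, elapsed_seconds):
    # A TTL'd row is only observable if less than ttl_seconds has elapsed
    # since the write; a slow update_view eats into that budget.
    return elapsed_seconds < ttl_seconds

# With ttl=10, a 12-second flush/compact cycle expires the row before the assert:
assert not row_still_live(10, 12)
# With ttl=20, the same slow cycle leaves headroom and the assert can pass:
assert row_still_live(20, 12)
```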





[jira] [Assigned] (CASSANDRA-14371) dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split

2018-04-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-14371:
---

Assignee: Patrick Bannister

> dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split
> 
>
> Key: CASSANDRA-14371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14371
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Patrick Bannister
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest/489/testReport/sstablesplit_test/TestSSTableSplit/test_single_file_split/
> {code}
>     for (stdout, stderr, rc) in result:
>         logger.debug(stderr)
> >       failure = stderr.find("java.lang.AssertionError: Data component is missing")
> E       TypeError: a bytes-like object is required, not 'str'
> {code}
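The root cause is a Python 2 to 3 porting gap: under Python 3 the stderr captured from a subprocess is bytes, and bytes.find() refuses a str pattern. A minimal reproduction with the two obvious fixes (the variable names here are illustrative, not taken from the dtest itself):

```python
stderr = b"java.lang.AssertionError: Data component is missing"

# Searching bytes with a str pattern is the TypeError from the traceback:
try:
    stderr.find("Data component is missing")
    raised = False
except TypeError:
    raised = True
assert raised

# Fix 1: decode the stream to text once, then search as str.
assert "Data component is missing" in stderr.decode("utf-8")
# Fix 2: keep it as bytes and search with a bytes pattern.
assert stderr.find(b"Data component is missing") != -1
```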






[1/2] cassandra-dtest git commit: can't add DC before it has any nodes, also need to run queries at LOCAL_ONE to make sure we don't read from dc1. And to get the data to dc2 we need to run rebuild

2018-04-10 Thread marcuse
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master dac3d7535 -> 3a4b5d98e


can't add DC before it has any nodes, also need to run queries at LOCAL_ONE to 
make sure we don't read from dc1. And to get the data to dc2 we need to run 
rebuild

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-14023


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/af2e55ea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/af2e55ea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/af2e55ea

Branch: refs/heads/master
Commit: af2e55eae12a26acc07ce52d8f8c617b77bb4156
Parents: dac3d75
Author: Marcus Eriksson 
Authored: Tue Jan 16 10:45:24 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 14:08:29 2018 +0200

--
 materialized_views_test.py | 17 -
 1 file changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/af2e55ea/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 7771f9d..3836ef7 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -424,7 +424,7 @@ class TestMaterializedViews(Tester):
         result = list(session.execute("SELECT * FROM ks.users_by_state_birth_year WHERE state='TX' AND birth_year=1968"))
         assert len(result) == 1, "Expecting {} users, got {}".format(1, len(result))
 
-    def _add_dc_after_mv_test(self, rf):
+    def _add_dc_after_mv_test(self, rf, nts):
         """
         @jira_ticket CASSANDRA-10978
 
@@ -456,9 +456,16 @@ class TestMaterializedViews(Tester):
 
         logger.debug("Bootstrapping new node in another dc")
         node5 = new_node(self.cluster, remote_debug_port='1414', data_center='dc2')
-        node5.start(jvm_args=["-Dcassandra.migration_task_wait_in_seconds={}".format(MIGRATION_WAIT)])
+        node5.start(jvm_args=["-Dcassandra.migration_task_wait_in_seconds={}".format(MIGRATION_WAIT)],
+                    wait_other_notice=True, wait_for_binary_proto=True)
+        if nts:
+            session.execute("alter keyspace ks with replication = {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1}")
+            session.execute("alter keyspace system_auth with replication = {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1}")
+            session.execute("alter keyspace system_traces with replication = {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1}")
+            node4.nodetool('rebuild dc1')
+            node5.nodetool('rebuild dc1')
 
-        session2 = self.patient_exclusive_cql_connection(node4)
+        cl = ConsistencyLevel.LOCAL_ONE if nts else ConsistencyLevel.ONE
+        session2 = self.patient_exclusive_cql_connection(node4, consistency_level=cl)
 
         logger.debug("Verifying data from new node in view")
         for i in range(1000):
@@ -480,7 +487,7 @@ class TestMaterializedViews(Tester):
         Test that materialized views work as expected when adding a datacenter with SimpleStrategy.
         """
 
-        self._add_dc_after_mv_test(1)
+        self._add_dc_after_mv_test(1, False)
 
     @pytest.mark.resource_intensive
     def test_add_dc_after_mv_network_replication(self):
@@ -490,7 +497,7 @@ class TestMaterializedViews(Tester):
         Test that materialized views work as expected when adding a datacenter with NetworkTopologyStrategy.
         """
 
-        self._add_dc_after_mv_test({'dc1': 1, 'dc2': 1})
+        self._add_dc_after_mv_test({'dc1': 1}, True)
 
     @pytest.mark.resource_intensive
     def test_add_node_after_mv(self):





[jira] [Commented] (CASSANDRA-14023) add_dc_after_mv_network_replication_test - materialized_views_test.TestMaterializedViews fails due to invalid datacenter

2018-04-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432163#comment-16432163
 ] 

Marcus Eriksson commented on CASSANDRA-14023:
-

committed, thanks!

> add_dc_after_mv_network_replication_test - 
> materialized_views_test.TestMaterializedViews fails due to invalid datacenter
> 
>
> Key: CASSANDRA-14023
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14023
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>
> add_dc_after_mv_network_replication_test - 
> materialized_views_test.TestMaterializedViews always fails due to:
>  message="Unrecognized strategy option {dc2} passed to NetworkTopologyStrategy 
> for keyspace ks">
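The error comes from the server-side check that every option key passed to NetworkTopologyStrategy names a datacenter the cluster actually knows; before dc2 has any nodes, it is not a known DC. A hypothetical sketch of that validation (validate_nts_options is an illustrative name, not Cassandra's API):

```python
def validate_nts_options(options, known_dcs):
    # Every replication option key must name a known datacenter.
    unknown = [dc for dc in options if dc not in known_dcs]
    if unknown:
        raise ValueError(
            "Unrecognized strategy option {{{}}} passed to "
            "NetworkTopologyStrategy for keyspace ks".format(", ".join(unknown)))

# Before dc2 has any nodes, replicating ks there is rejected:
try:
    validate_nts_options({"dc1": 1, "dc2": 1}, known_dcs={"dc1"})
    assert False
except ValueError as e:
    assert "dc2" in str(e)

# Once dc2 exists (the fix bootstraps the node first), the same options pass:
validate_nts_options({"dc1": 1, "dc2": 1}, known_dcs={"dc1", "dc2"})
```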






[jira] [Updated] (CASSANDRA-14148) test_no_base_column_in_view_pk_complex_timestamp_with_flush - materialized_views_test.TestMaterializedViews frequently fails in CI

2018-04-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14148:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed, thanks!

> test_no_base_column_in_view_pk_complex_timestamp_with_flush - 
> materialized_views_test.TestMaterializedViews frequently fails in CI
> --
>
> Key: CASSANDRA-14148
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14148
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>
> test_no_base_column_in_view_pk_complex_timestamp_with_flush - 
> materialized_views_test.TestMaterializedViews frequently fails in CI
> self = <materialized_views_test.TestMaterializedViews object at 0x7f849b25cf60>
> @since('3.0')
> def test_no_base_column_in_view_pk_complex_timestamp_with_flush(self):
> >   self._test_no_base_column_in_view_pk_complex_timestamp(flush=True)
> materialized_views_test.py:970: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> materialized_views_test.py:1066: in 
> _test_no_base_column_in_view_pk_complex_timestamp
> assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> session = 
> query = 'SELECT * FROM t', expected = [1, 1, None, None, None, 1], cl = None
> def assert_one(session, query, expected, cl=None):
> """
> Assert query returns one row.
> @param session Session to use
> @param query Query to run
> @param expected Expected results from query
> @param cl Optional Consistency Level setting. Default ONE
> 
> Examples:
> assert_one(session, "LIST USERS", ['cassandra', True])
> assert_one(session, query, [0, 0])
> """
> simple_query = SimpleStatement(query, consistency_level=cl)
> res = session.execute(simple_query)
> list_res = _rows_to_list(res)
> >   assert list_res == [expected], "Expected {} from {}, but got {}".format([expected], query, list_res)
> E   AssertionError: Expected [[1, 1, None, None, None, 1]] from SELECT * FROM t, but got []






[jira] [Commented] (CASSANDRA-14369) infinite loop when decommission a node

2018-04-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432238#comment-16432238
 ] 

Paulo Motta commented on CASSANDRA-14369:
-

Since you are using multiple disks (JBOD) this looks similar to 
CASSANDRA-13948; would you mind upgrading to 3.11.2 and seeing if the issue 
still happens there?

> infinite loop when decommission a node
> --
>
> Key: CASSANDRA-14369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Daniel Woo
>Priority: Major
> Fix For: 3.11.1
>
>
> I have 6 nodes (N1 to N6). N2 to N6 are new hardware with two SSDs each; 
> N1 is an old box with spinning disks, and I am trying to decommission N1. 
> Then I see two nodes trying to receive streaming from N1 infinitely. The 
> log rotates so quickly that I can only see this:
>  
> {noformat}
> INFO  [CompactionExecutor:19401] 2018-04-07 13:07:56,560 LeveledManifest.java:474 - Adding high-level (L3) BigTableReader(path='/opt/platform/data1/cassandra/data/data/contract_center_cloud/contract-2f2f9f70cd9911e7bfe87fec03576322/mc-31-big-Data.db') to candidates
> INFO  [CompactionExecutor:19401] 2018-04-07 13:07:56,561 LeveledManifest.java:474 - Adding high-level (L3) BigTableReader(path='/opt/platform/data1/cassandra/data/data/contract_center_cloud/contract-2f2f9f70cd9911e7bfe87fec03576322/mc-31-big-Data.db') to candidates
> ... (the same line repeats continuously)
> {noformat}
> nodetool tpstats shows some of the compactions are pending:
>  
> {noformat}
> Pool Name                         Active   Pending      Completed   Blocked  All time blocked
> ReadStage                              0         0        1366419         0                 0
> MiscStage                              0         0              0         0                 0
> CompactionExecutor                     9         9          77739         0                 0
> MutationStage                          0         0        7504702         0                 0
> MemtableReclaimMemory                  0         0            327         0                 0
> PendingRangeCalculator                 0         0             20         0                 0
> GossipStage                            0         0         486365         0                 0
> SecondaryIndexManagement               0         0              0         0                 0
> {noformat}
>  
> This is from the jstack output:
> {{"CompactionExecutor:1" #26533 daemon prio=1 os_prio=4 
> tid=0x7f971812f170 nid=0x6581 waiting for monitor entry 
> 

[jira] [Commented] (CASSANDRA-14303) NetworkTopologyStrategy could have a "default replication" option

2018-04-10 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432331#comment-16432331
 ] 

Jeremiah Jordan commented on CASSANDRA-14303:
-

[~snazy] see conversation above: 
https://issues.apache.org/jira/browse/CASSANDRA-14303?focusedCommentId=16393438=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16393438
bq. Yes, that edge case as well as others (gossip inconsistency mostly) is why 
I propose only evaluating the DCs at the time of a CREATE or ALTER statement 
execution.

> NetworkTopologyStrategy could have a "default replication" option
> -
>
> Key: CASSANDRA-14303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14303
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
>
> Right now when creating a keyspace with {{NetworkTopologyStrategy}} the user 
> has to manually specify the datacenters they want their data replicated to 
> with parameters, e.g.:
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 3, 'dc2': 3}{noformat}
> This is a poor user interface because it requires the creator of the keyspace 
> (typically a developer) to know the layout of the Cassandra cluster (which 
> may or may not be controlled by them). Also, at least in my experience, folks 
> typo the datacenters _all_ the time. To work around this I see a number of 
> users creating automation around this where the automation describes the 
> Cassandra cluster and automatically expands out to all the dcs that Cassandra 
> knows about. Why can't Cassandra just do this for us, re-using the previously 
> forbidden {{replication_factor}} option (for backwards compatibility):
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> This would automatically replicate this Keyspace to all datacenters that are 
> present in the cluster. If you need to _override_ the default you could 
> supply a datacenter name, e.g.:
> {noformat}
> > CREATE KEYSPACE test WITH replication = {'class': 
> > 'NetworkTopologyStrategy', 'replication_factor': 3, 'dc1': 2}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '2', 'dc2': 3} AND durable_writes = true;
> {noformat}
> On the implementation side I think this may be reasonably straightforward to 
> do an auto-expansion at the time of keyspace creation (or alter), where the 
> above would automatically expand to list out the datacenters. We could allow 
> this to be recomputed whenever an AlterKeyspaceStatement runs so that to add 
> datacenters you would just run:
> {noformat}
> ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> and this would check that if the dc's in the current schema are different you 
> add in the new ones (_for safety reasons we'd never remove non explicitly 
> supplied zero dcs when auto-generating dcs_). Removing a datacenter becomes 
> an alter that includes an override for the dc you want to remove (or of 
> course you can always not use the auto-expansion and just use the old way):
> {noformat}
> // Tell it explicitly not to replicate to dc2
> > ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> > 'replication_factor': 3, 'dc2': 0}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '3'} AND durable_writes = true;{noformat}
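The ticket does not prescribe an implementation, but the expansion rule it describes (default factor for every known DC, explicit per-DC entries override it, a 0 override removes the DC) can be sketched as follows; expand_replication and its signature are hypothetical, not Cassandra code:

```python
def expand_replication(options, datacenters):
    """Expand {'replication_factor': N, ...overrides...} to per-DC factors."""
    opts = dict(options)
    default = opts.pop("replication_factor", None)
    if default is None:
        return opts                       # old-style explicit options, unchanged
    expanded = {dc: default for dc in datacenters}
    expanded.update(opts)                 # explicit per-DC overrides win
    # A 0 override means "do not replicate to this DC at all":
    return {dc: rf for dc, rf in expanded.items() if rf != 0}

# The examples from the ticket:
assert expand_replication({"replication_factor": 3}, ["dc1", "dc2"]) == {"dc1": 3, "dc2": 3}
assert expand_replication({"replication_factor": 3, "dc1": 2}, ["dc1", "dc2"]) == {"dc1": 2, "dc2": 3}
assert expand_replication({"replication_factor": 3, "dc2": 0}, ["dc1", "dc2"]) == {"dc1": 3}
```

Re-running the same expansion on ALTER KEYSPACE naturally adds newly joined datacenters without removing explicitly supplied ones, matching the safety property in the description.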






[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-10 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19e329eb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19e329eb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19e329eb

Branch: refs/heads/trunk
Commit: 19e329eb5c124d2e37b52052e8622f0515f058b7
Parents: c1020d6 edcb90f
Author: Marcus Eriksson 
Authored: Tue Apr 10 15:26:30 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:26:30 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/CHANGES.txt
--
diff --cc CHANGES.txt
index c4f05d5,94b2276..e0145d4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -3.0.17
 +3.11.3
 + * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Serialize empty buffer as empty string for json output format (CASSANDRA-14245)
 + * Allow logging implementation to be interchanged for embedded testing (CASSANDRA-13396)
 + * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
 + * Fix Loss of digits when doing CAST from varint/bigint to decimal (CASSANDRA-14170)
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Handle all exceptions when opening sstables (CASSANDRA-14202)
   * Handle incompletely written hint descriptors during startup (CASSANDRA-14080)
   * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
   * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--





[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-04-10 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b5dbc04b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b5dbc04b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b5dbc04b

Branch: refs/heads/trunk
Commit: b5dbc04bda0479367d89d1e406b09fa187bf7aad
Parents: 0b16546 19e329e
Author: Marcus Eriksson 
Authored: Tue Apr 10 15:28:29 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:28:29 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b5dbc04b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b5dbc04b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--





[3/6] cassandra git commit: Handle all exceptions when opening sstables

2018-04-10 Thread marcuse
Handle all exceptions when opening sstables

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcb90f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcb90f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcb90f0

Branch: refs/heads/trunk
Commit: edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc
Parents: 73ca0e1
Author: Marcus Eriksson 
Authored: Mon Jan 29 15:30:17 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:24:04 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1564fa3..94b2276 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Handle all exceptions when opening sstables (CASSANDRA-14202)
  * Handle incompletely written hint descriptors during startup (CASSANDRA-14080)
  * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
  * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
index 0fe316d..93be2ee 100644
--- a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
+++ b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
@@ -23,13 +23,13 @@ public class CorruptSSTableException extends RuntimeException
 {
     public final File path;
 
-    public CorruptSSTableException(Exception cause, File path)
+    public CorruptSSTableException(Throwable cause, File path)
     {
         super("Corrupted: " + path, cause);
         this.path = path;
     }
 
-    public CorruptSSTableException(Exception cause, String path)
+    public CorruptSSTableException(Throwable cause, String path)
     {
         this(cause, new File(path));
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index c66fd8c..dc6940d 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -466,9 +466,9 @@ public abstract class SSTableReader extends SSTable implements SelfRefCounted

[2/6] cassandra git commit: Handle all exceptions when opening sstables

2018-04-10 Thread marcuse
Handle all exceptions when opening sstables

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcb90f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcb90f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcb90f0

Branch: refs/heads/cassandra-3.11
Commit: edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc
Parents: 73ca0e1
Author: Marcus Eriksson 
Authored: Mon Jan 29 15:30:17 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:24:04 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1564fa3..94b2276 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Handle all exceptions when opening sstables (CASSANDRA-14202)
  * Handle incompletely written hint descriptors during startup (CASSANDRA-14080)
  * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
  * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
index 0fe316d..93be2ee 100644
--- a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
+++ b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
@@ -23,13 +23,13 @@ public class CorruptSSTableException extends RuntimeException
 {
     public final File path;
 
-    public CorruptSSTableException(Exception cause, File path)
+    public CorruptSSTableException(Throwable cause, File path)
     {
         super("Corrupted: " + path, cause);
         this.path = path;
     }
 
-    public CorruptSSTableException(Exception cause, String path)
+    public CorruptSSTableException(Throwable cause, String path)
     {
         this(cause, new File(path));
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index c66fd8c..dc6940d 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -466,9 +466,9 @@ public abstract class SSTableReader extends SSTable 
implements SelfRefCounted

[1/6] cassandra git commit: Handle all exceptions when opening sstables

2018-04-10 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 73ca0e1e1 -> edcb90f08
  refs/heads/cassandra-3.11 c1020d62e -> 19e329eb5
  refs/heads/trunk 0b16546f6 -> b5dbc04bd


Handle all exceptions when opening sstables

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcb90f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcb90f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcb90f0

Branch: refs/heads/cassandra-3.0
Commit: edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc
Parents: 73ca0e1
Author: Marcus Eriksson 
Authored: Mon Jan 29 15:30:17 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:24:04 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1564fa3..94b2276 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Handle all exceptions when opening sstables (CASSANDRA-14202)
  * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
  * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
  * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java 
b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
index 0fe316d..93be2ee 100644
--- a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
+++ b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
@@ -23,13 +23,13 @@ public class CorruptSSTableException extends 
RuntimeException
 {
 public final File path;
 
-public CorruptSSTableException(Exception cause, File path)
+public CorruptSSTableException(Throwable cause, File path)
 {
 super("Corrupted: " + path, cause);
 this.path = path;
 }
 
-public CorruptSSTableException(Exception cause, String path)
+public CorruptSSTableException(Throwable cause, String path)
 {
 this(cause, new File(path));
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index c66fd8c..dc6940d 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -466,9 +466,9 @@ public abstract class SSTableReader extends SSTable 
implements SelfRefCounted

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-10 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19e329eb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19e329eb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19e329eb

Branch: refs/heads/cassandra-3.11
Commit: 19e329eb5c124d2e37b52052e8622f0515f058b7
Parents: c1020d6 edcb90f
Author: Marcus Eriksson 
Authored: Tue Apr 10 15:26:30 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:26:30 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/CHANGES.txt
--
diff --cc CHANGES.txt
index c4f05d5,94b2276..e0145d4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -3.0.17
 +3.11.3
 + * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Serialize empty buffer as empty string for json output format 
(CASSANDRA-14245)
 + * Allow logging implementation to be interchanged for embedded testing 
(CASSANDRA-13396)
 + * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
 + * Fix Loss of digits when doing CAST from varint/bigint to decimal 
(CASSANDRA-14170)
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Handle all exceptions when opening sstables (CASSANDRA-14202)
   * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
   * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
   * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14202) Assertion error on sstable open during startup should invoke disk failure policy

2018-04-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14202:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   3.11.3
   3.0.17
   4.0
   Status: Resolved  (was: Ready to Commit)

committed as {{edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc}} and merged up with a 
small change:
[this|https://github.com/krummas/cassandra/blob/marcuse/handle_throwable/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L501-L510]
 got folded up like 
[this|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L514-L518]

> Assertion error on sstable open during startup should invoke disk failure 
> policy
> 
>
> Key: CASSANDRA-14202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14202
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> We should catch all exceptions when opening sstables on startup and invoke 
> the disk failure policy
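The ticket's one-line description corresponds to the constructor change above: widening `Exception` to `Throwable` lets errors such as `AssertionError` (a subclass of `Error` in Java, not `Exception`) be wrapped and routed to the disk failure policy instead of escaping during startup. A minimal Python sketch of the same idea, with all names illustrative rather than Cassandra's actual API (Python's `BaseException` plays the role of Java's `Throwable`):

```python
# Illustrative sketch, not Cassandra's API: wrap *any* error raised while
# opening an sstable so the caller can invoke the disk failure policy.
# In Java, AssertionError extends Error, so "catch (Exception e)" misses it;
# catching Throwable (here: BaseException) does not.

class CorruptSSTableError(Exception):
    def __init__(self, cause, path):
        super().__init__(f"Corrupted: {path}")
        self.cause = cause
        self.path = path

def open_sstable(path, reader):
    try:
        return reader(path)
    except BaseException as e:  # Java equivalent: catch (Throwable t)
        raise CorruptSSTableError(e, path) from e

def disk_failure_policy(err):
    # Stand-in for the real disk failure policy handling.
    return f"marking unreadable: {err.path}"
```

The point of the sketch is the `except BaseException` line: before the patch, only `Exception`-typed failures were wrapped, so an assertion failure while opening an sstable bypassed the disk failure policy entirely.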



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14371) dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split

2018-04-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432181#comment-16432181
 ] 

Marcus Eriksson commented on CASSANDRA-14371:
-

let's wait for the ccm fix

ping [~philipthompson]

> dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split
> 
>
> Key: CASSANDRA-14371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14371
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Patrick Bannister
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest/489/testReport/sstablesplit_test/TestSSTableSplit/test_single_file_split/
> {code}
> for (stdout, stderr, rc) in result:
> logger.debug(stderr)
> >   failure = stderr.find("java.lang.AssertionError: Data component 
> > is missing")
> E   TypeError: a bytes-like object is required, not 'str'
> {code}
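The `TypeError` in the dtest is the standard Python 3 bytes/str mismatch: `bytes.find()` only accepts a bytes-like needle, and the test passes a `str` against a raw `stderr` byte stream. A minimal reproduction and the two usual fixes (variable names here are illustrative):

```python
stderr = b"java.lang.AssertionError: Data component is missing\n"

# Passing a str needle to bytes.find() raises the TypeError from the dtest.
try:
    stderr.find("java.lang.AssertionError")
except TypeError:
    pass  # "a bytes-like object is required, not 'str'"

# Fix 1: search with a bytes literal.
pos = stderr.find(b"java.lang.AssertionError")

# Fix 2: decode the stream first, then search with str.
pos_decoded = stderr.decode("utf-8").find("java.lang.AssertionError")

assert pos == pos_decoded == 0
```

Which fix belongs in ccm/dtest depends on where the stream is decoded, which is why the thread defers to the ccm fix.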






[jira] [Updated] (CASSANDRA-6719) redesign loadnewsstables

2018-04-10 Thread Jordan West (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-6719:
---
Reviewer: Jordan West

> redesign loadnewsstables
> 
>
> Key: CASSANDRA-6719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6719
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 6719.patch
>
>
> CFSMBean.loadNewSSTables scans data directories for new sstables dropped 
> there by an external agent.  This is dangerous because of possible filename 
> conflicts with existing or newly generated sstables.
> Instead, we should support leaving the new sstables in a separate directory 
> (specified by a parameter, or configured as a new location in yaml) and take 
> care of renaming as necessary automagically.






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432532#comment-16432532
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


We shouldn't change the output of schema versions because that already existed 
and people might be parsing it.

Also I think it might make sense to put the new output after the existing 
output and put a line break between the two. We might already break parsing that 
people are doing since they might just read to the end to get the schema 
versions, but at least that will be easier to fix.

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432547#comment-16432547
 ] 

Jon Haddad commented on CASSANDRA-13853:


{quote}
We shouldn't change the output of schema versions because that already existed 
and people might be parsing it.
{quote}

Since this is going into 4.0, is meant to be human readable, and we already 
have a programmatic means of getting this info (jmx), I'm OK with breaking 
changes if they are an improvement to its readability.

Yes, people are parsing nodetool.  It's a bummer.  On the upside, I think that 
virtual tables will be able to take over most of the duty of programmatic 
interface with the DB. 

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Updated] (CASSANDRA-14310) Don't allow nodetool refresh before cfs is opened

2018-04-10 Thread Jordan West (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-14310:

Reviewer: Jordan West  (was: Sam Tunnicliffe)

> Don't allow nodetool refresh before cfs is opened
> -
>
> Key: CASSANDRA-14310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14310
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> There is a potential deadlock during startup if nodetool refresh is called 
> while sstables are being opened. We should not allow refresh to be called 
> before everything is initialized.






[jira] [Commented] (CASSANDRA-14310) Don't allow nodetool refresh before cfs is opened

2018-04-10 Thread Jordan West (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432529#comment-16432529
 ] 

Jordan West commented on CASSANDRA-14310:
-

+1. Agreed on keeping the initialized check as well. None of the dtest failures 
look related and the new dtest looks good.

> Don't allow nodetool refresh before cfs is opened
> -
>
> Key: CASSANDRA-14310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14310
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> There is a potential deadlock during startup if nodetool refresh is called 
> while sstables are being opened. We should not allow refresh to be called 
> before everything is initialized.






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432541#comment-16432541
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


[~aweisberg] Does the below output look okay? If so, I will push the patch.
{code:java}
Cluster Information:
  Name: Test Cluster
  Snitch: org.apache.cassandra.locator.SimpleSnitch
  DynamicEndPointSnitch: enabled
  Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
  Schema versions: 
b18cff54-5b52-3afd-b6cf-bb923b695e73: [127.0.0.1]

Stats for all nodes:
  Live: 1
  Joining: 0
  Moving: 0
  Leaving: 0
  Unreachable: 0
Data Centers: 
  datacenter1 #Nodes: 1 #Down: 0
Keyspaces:
  system_schema -> Replication class: LocalStrategy {}
  system -> Replication class: LocalStrategy {}
  system_auth -> Replication class: SimpleStrategy {replication_factor=1}
  system_distributed -> Replication class: SimpleStrategy 
{replication_factor=3}
  system_traces -> Replication class: SimpleStrategy 
{replication_factor=2}{code}

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Comment Edited] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432577#comment-16432577
 ] 

Ariel Weisberg edited comment on CASSANDRA-13853 at 4/10/18 4:55 PM:
-

I think we want to keep schema versions the way it 
[was|https://github.com/apache/cassandra/blob/59b5b6bef0fa76bf5740b688fcd4d9cf525760d0/src/java/org/apache/cassandra/tools/nodetool/DescribeCluster.java#L52].

I think he meant major, minor, and patch version of Cassandra each server is 
running. See 
https://issues.apache.org/jira/browse/CASSANDRA-13853?focusedCommentId=16216154=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16216154


was (Author: aweisberg):
I think we want to keep schema versions the way it 
[was|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/nodetool/DescribeCluster.java#L52].

I think he meant major, minor, and patch version of Cassandra each server is 
running. See 
https://issues.apache.org/jira/browse/CASSANDRA-13853?focusedCommentId=16216154=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16216154

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432555#comment-16432555
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


Output looks good to me. I guess you can lose the newline since people aren't 
supposed to be parsing this really. I can fix that when I commit it.

It looks like this still doesn't have the Cassandra binary versions Jon was 
asking for?

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432564#comment-16432564
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


[~rustyrazorblade] [~aweisberg] So do we want to retain the old output of 
schema versions as shown in my last result above?

Also, which Cassandra binary versions are you referring to?

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13426) Make all DDL statements idempotent and not dependent on global state

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432771#comment-16432771
 ] 

Aleksey Yeschenko commented on CASSANDRA-13426:
---

Rebased on top of most recent trunk. There are some test failures that need to 
be fixed and review feedback that still needs to be addressed, and I guess some 
extra tests to write (although most of it is covered with various unit and 
dtests).

[~ifesdjeen] You worked on {{SUPER}} and {{DENSE}} flags removal. When you 
have time, can you look at a small commit 
[here|https://github.com/iamaleksey/cassandra/commits/13426] titled 'Get rid of 
COMPACT STORAGE logic in DDL statements' please? Not referencing the sha as I'm 
still force-pushing here occasionally. Thanks.

> Make all DDL statements idempotent and not dependent on global state
> 
>
> Key: CASSANDRA-13426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13426
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 4.0
>
>
> A follow-up to CASSANDRA-9425 and a pre-requisite for CASSANDRA-10699.
> It's necessary for the latter to be able to apply any DDL statement several 
> times without side-effects. As part of the ticket I think we should also 
> clean up validation logic and our error texts. One example is varying 
> treatment of missing keyspace for DROP TABLE/INDEX/etc. statements with IF 
> EXISTS.






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Reproduced In: 4.0
   Status: Patch Available  (was: Open)

Attached patch that forces {{US}} locale when generating 
{{PercentileSpeculativeRetryPolicy}} representation. Mind reviewing 
[~iamaleksey] or [~mkjellman]?

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which 
> uses a comma instead of a dot as the decimal separator, so the speculative 
> retry option parsing introduced by CASSANDRA-14293, which assumed the 
> {{en_US}} locale, was not working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Created] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)
Paulo Motta created CASSANDRA-14374:
---

 Summary: Speculative retry parsing breaks on non-english locale
 Key: CASSANDRA-14374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
 Project: Cassandra
  Issue Type: Bug
Reporter: Paulo Motta
Assignee: Paulo Motta


I was getting the following error when running unit tests on my machine:
{code:none}
Error setting schema for test (query was: CREATE TABLE 
cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
at 
org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
Speculative Retry Policy [99,00p] is not supported
at 
org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
at 
org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
at 
org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
at 
org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
at 
org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
at 
org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
at 
org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
at 
org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
{code}
It turns out that my machine is configured with the {{pt_BR}} locale, which 
uses a comma instead of a dot as the decimal separator, so the speculative 
retry option parsing introduced by CASSANDRA-14293, which assumed the 
{{en_US}} locale, was not working.

To reproduce on Linux:
{code:none}
export LC_CTYPE=pt_BR.UTF-8
ant test -Dtest.name="DeleteTest"
ant test -Dtest.name="SpeculativeRetryParseTest"
{code}
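The failure mode can be reproduced in miniature without the JVM: the value is rendered with the default locale's decimal separator ("99,00p" under pt_BR), but the parser only accepts the en_US form. The `parse_percentile` helper below is illustrative, not Cassandra's code, standing in for `SpeculativeRetryPolicy.fromString()`:

```python
# Illustrative parser in the spirit of SpeculativeRetryPolicy.fromString():
# it accepts only the en_US-style "99.00p" form, so a value serialized under
# pt_BR as "99,00p" is rejected with the error seen in the stack trace.
def parse_percentile(option):
    msg = f"Specified Speculative Retry Policy [{option}] is not supported"
    if not option.lower().endswith("p"):
        raise ValueError(msg)
    try:
        # float() only understands '.' as the decimal separator, regardless
        # of the process locale -- the parse side is locale-independent.
        return float(option[:-1])
    except ValueError:
        raise ValueError(msg) from None

assert parse_percentile("99.00p") == 99.0
```

Forcing a fixed locale at serialization time, as the attached patch does, keeps the stored form locale-independent so this parser always sees the "99.00p" spelling.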






[jira] [Commented] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432844#comment-16432844
 ] 

Aleksey Yeschenko commented on CASSANDRA-14374:
---

[~pauloricardomg] Sure. Do you mind going one step further and changing that 
{{toString()}} to
{code}
return String.format("%sp", new DecimalFormat("#.").format(percentile));
{code}
?

Because the previous patch introduced a minor annoying regression, in that 99p 
for example is being serialized as {{99.00p}}. And check it in pt_BR locale as 
well as en_US?

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with {{pt_BR}} locale, which uses 
> comma instead of dot for decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed {{en_US}} locale was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432749#comment-16432749
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


Here is the new output. I will upload the patch if it looks okay.
{code:java}
Cluster Information:
  Name: Test Cluster
  Snitch: org.apache.cassandra.locator.SimpleSnitch
  DynamicEndPointSnitch: enabled
  Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
  Schema versions:
b18cff54-5b52-3afd-b6cf-bb923b695e73: [127.0.0.1]

Stats for all nodes:
  Live: 1
  Joining: 0
  Moving: 0
  Leaving: 0
  Unreachable: 0

Data Centers: 
  datacenter1 #Nodes: 1 #Down: 0

Database versions:
  4.0.0: [127.0.0.1:7000]

Keyspaces:
  system_schema -> Replication class: LocalStrategy {}
  system -> Replication class: LocalStrategy {}
  system_auth -> Replication class: SimpleStrategy {replication_factor=1}
  system_distributed -> Replication class: SimpleStrategy 
{replication_factor=3}
  system_traces -> Replication class: SimpleStrategy 
{replication_factor=2}{code}

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432814#comment-16432814
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


That looks good. Upload the patch and I will try out the dtest. Three dtests is 
a bit much to add for this: they are very slow and I don't want to add that 
many if I can avoid it. I think it should also go into the existing 
nodetool_test.py? It's not that big yet, so I don't think we need to break the 
nodetool tests up into multiple files.


Maybe just add the 3 datacenter case?







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14374:
--
Reviewer: Aleksey Yeschenko







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14374:
--
Fix Version/s: 4.0







[jira] [Comment Edited] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432844#comment-16432844
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-14374 at 4/10/18 7:42 PM:


[~pauloricardomg] Sure. Do you mind going one step further and changing that 
{{toString()}} to
{code}
return String.format("%sp", new DecimalFormat("#.").format(percentile));
{code}
?

Because the previous patch introduced a minor annoying regression, in that 99p 
for example is being serialized as {{99.00p}} (instead of {{99p}}). And check 
it in pt_BR locale as well as en_US?


was (Author: iamaleksey):
[~pauloricardomg] Sure. Do you mind going one step further and changing that 
{{toString()}} to
{code}
return String.format("%sp", new DecimalFormat("#.").format(percentile));
{code}
?

Because the previous patch introduced a minor annoying regression, in that 99p 
for example is being serialized as {{99.00p}}. And check it in pt_BR locale as 
well as en_US?







[jira] [Commented] (CASSANDRA-13459) Diag. Events: Native transport integration

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432544#comment-16432544
 ] 

Ariel Weisberg commented on CASSANDRA-13459:


So I was just thinking that, looking forward, restricting this mechanism to 
diagnostic events might not make sense. I was thinking of a more generic 
subscription mechanism, where diagnostic events are a subset of what clients 
can conditionally subscribe to, so that we don't end up with naming issues in 
the future.

For V1 of this functionality my only sticking point is that even with 1-2 
clients consuming diagnostic events we have to handle backpressure somehow. 
AFAIK we hold onto messages pending delivery to a client for a while 
(indefinitely?). I am not actually sure what kind of timeouts or health checks 
we do for clients.

All the other stuff I mentioned in CASSANDRA-12151 is not really necessary for 
V1 if it does what you need today.

> Diag. Events: Native transport integration
> --
>
> Key: CASSANDRA-13459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13459
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>  Labels: client-impacting
>
> Events should be consumable by clients that would received subscribed events 
> from the connected node. This functionality is designed to work on top of 
> native transport with minor modifications to the protocol standard (see 
> [original 
> proposal|https://docs.google.com/document/d/1uEk7KYgxjNA0ybC9fOuegHTcK3Yi0hCQN5nTp5cNFyQ/edit?usp=sharing]
>  for further considered options). First we have to add another value for 
> existing event types. Also, we have to extend the protocol a bit to be able 
> to specify a sub-class and sub-type value. E.g. 
> {{DIAGNOSTIC_EVENT(GossiperEvent, MAJOR_STATE_CHANGE_HANDLED)}}. This still 
> has to be worked out and I'd appreciate any feedback.






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432577#comment-16432577
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


I think we want to keep schema versions the way it 
[was|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/nodetool/DescribeCluster.java#L52].

I think he meant the major, minor, and patch version of Cassandra that each 
server is running. See 
https://issues.apache.org/jira/browse/CASSANDRA-13853?focusedCommentId=16216154=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16216154







[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432595#comment-16432595
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


Ah. I missed that one out. I will work on adding that and give an update. 
Thanks!







[jira] [Commented] (CASSANDRA-13698) Reinstate or get rid of unit tests with multiple compaction strategies

2018-04-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433069#comment-16433069
 ] 

Paulo Motta commented on CASSANDRA-13698:
-

The patch looks good and the failures look unrelated, but I just noticed that 
there are a bunch of other commented-out tests on {{CompactionsTest}}, like 
{{testEchoedRow}}, {{testRangeTombstones}}, 
{{testUncheckedTombstoneSizeTieredCompaction}}, etc., that have been like this 
since CASSANDRA-8099. Even though this was not in the original ticket scope, I 
think we should also triage those tests and either remove or restore them. WDYT?

BTW, we should only perform the scts to stcs rename on the trunk patch, since 
this is a change to a public interface and there might be external code relying 
on the wrong naming.

> Reinstate or get rid of unit tests with multiple compaction strategies
> --
>
> Key: CASSANDRA-13698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13698
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Paulo Motta
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Attachments: 13698-3.0.txt, 13698-3.11.txt, 13698-trunk.txt
>
>
> At some point there were (anti-)compaction tests with multiple compaction 
> strategy classes, but now it's only tested with {{STCS}}:
> * 
> [AnticompactionTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java#L247]
> * 
> [CompactionsTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java#L85]
> We should either reinstate these tests or decide they are not important and 
> remove the unused parameter.






[jira] [Commented] (CASSANDRA-14354) rename ColumnFamilyStoreCQLHelper to TableCQLHelper

2018-04-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433161#comment-16433161
 ] 

Jon Haddad commented on CASSANDRA-14354:


Not expecting any issues since this is just a simple rename, but I'm running it 
through [CircleCI|https://circleci.com/gh/rustyrazorblade/cassandra/15] anyway.

> rename ColumnFamilyStoreCQLHelper to TableCQLHelper
> ---
>
> Key: CASSANDRA-14354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14354
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jon Haddad
>Assignee: Venkata Harikrishna Nukala
>Priority: Major
> Attachments: 14354-trunk.txt
>
>
> Seems like a simple 1:1 rename.






[jira] [Comment Edited] (CASSANDRA-14358) OutboundTcpConnection can hang for many minutes when nodes restart

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433171#comment-16433171
 ] 

Ariel Weisberg edited comment on CASSANDRA-14358 at 4/10/18 11:20 PM:
--

30 seconds (effectively 60) and a hot prop sounds excellent.


was (Author: aweisberg):

[jira] [Commented] (CASSANDRA-14358) OutboundTcpConnection can hang for many minutes when nodes restart

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433171#comment-16433171
 ] 

Ariel Weisberg commented on CASSANDRA-14358:


30 seconds and a hot prop sounds excellent.
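The 30-second connect timeout under discussion can be sketched with plain sockets. This is only an illustration of the idea (bounding both the TCP connect and the TLS handshake), not the actual Cassandra patch; the host/port and helper name are made up for the example:

```python
import socket
import ssl

def connect_with_timeout(host, port, timeout_s=30.0):
    """Bound both the TCP connect and the TLS handshake.

    The unpatched code path the ticket describes connects with no
    timeout, so a stuck peer can block the thread indefinitely."""
    raw = socket.create_connection((host, port), timeout=timeout_s)
    ctx = ssl.create_default_context()
    # wrap_socket performs the TLS handshake; the socket timeout set
    # above still applies to it, so the handshake cannot hang forever.
    return ctx.wrap_socket(raw, server_hostname=host)
```

Making `timeout_s` a hot property, as suggested, would let operators tune it without a restart.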

> OutboundTcpConnection can hang for many minutes when nodes restart
> --
>
> Key: CASSANDRA-14358
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14358
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.19 (also reproduced on 3.0.15), running 
> with {{internode_encryption: all}} and the EC2 multi region snitch on Linux 
> 4.13 within the same AWS region. Smallest cluster I've seen the problem on is 
> 12 nodes, reproduces more reliably on 40+ and 300 node clusters consistently 
> reproduce on at least one node in the cluster.
> So all the connections are SSL and we're connecting on the internal ip 
> addresses (not the public endpoint ones).
> Potentially relevant sysctls:
> {noformat}
> /proc/sys/net/ipv4/tcp_syn_retries = 2
> /proc/sys/net/ipv4/tcp_synack_retries = 5
> /proc/sys/net/ipv4/tcp_keepalive_time = 7200
> /proc/sys/net/ipv4/tcp_keepalive_probes = 9
> /proc/sys/net/ipv4/tcp_keepalive_intvl = 75
> /proc/sys/net/ipv4/tcp_retries2 = 15
> {noformat}
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Major
> Attachments: 10 Minute Partition.pdf
>
>
> I've been trying to debug nodes not being able to see each other during 
> longer (~5 minute+) Cassandra restarts in 3.0.x and 2.1.x which can 
> contribute to {{UnavailableExceptions}} during rolling restarts of 3.0.x and 
> 2.1.x clusters for us. I think I finally have a lead. It appears that prior 
> to trunk (with the awesome Netty refactor) we do not set socket connect 
> timeouts on SSL connections (in 2.1.x, 3.0.x, or 3.11.x) nor do we set 
> {{SO_TIMEOUT}} as far as I can tell on outbound connections either. I believe 
> that this means that we could potentially block forever on {{connect}} or 
> {{recv}} syscalls, and we could block forever on the SSL Handshake as well. I 
> think that the OS will protect us somewhat (and that may be what's causing 
> the eventual timeout) but I think that given the right network conditions our 
> {{OutboundTCPConnection}} threads can just be stuck never making any progress 
> until the OS intervenes.
> I have attached some logs of such a network partition during a rolling 
> restart where an old node in the cluster has a completely foobarred 
> {{OutboundTcpConnection}} for ~10 minutes before finally getting a 
> {{java.net.SocketException: Connection timed out (Write failed)}} and 
> immediately successfully reconnecting. I conclude that the old node is the 
> problem because the new node (the one that restarted) is sending ECHOs to the 
> old node, and the old node is sending ECHOs and REQUEST_RESPONSES to the new 
> node's ECHOs, but the new node is never getting the ECHOs. This appears, to 
> me, to indicate that the old node's {{OutboundTcpConnection}} thread is just 
> stuck and can't make any forward progress. By the time we could notice this 
> and slap TRACE logging on, the only thing we see is ~10 minutes later a 
> {{SocketException}} inside {{writeConnected}}'s flush and an immediate 
> recovery. It is interesting to me that the exception happens in 
> {{writeConnected}} and it's a _connection timeout_ (and since we see {{Write 
> failure}} I believe that this can't be a connection reset), because my 
> understanding is that we should have a fully handshaked SSL connection at 
> that point in the code.
> Current theory:
>  # "New" node restarts,  "Old" node calls 
> [newSocket|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L433]
>  # Old node starts [creating a 
> new|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java#L141]
>  SSL socket 
>  # SSLSocket calls 
> [createSocket|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/security/SSLFactory.java#L98],
>  which conveniently calls connect with a default timeout of "forever". We 
> could hang here forever until the OS kills us.
>  # If we continue, we get to 
> [writeConnected|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L263]
>  which eventually calls 
> [flush|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L341]
>  on the output stream and also can hang forever. I think the probability is 
> 

[jira] [Created] (CASSANDRA-14376) Limiting a clustering column with a range not allowed when using "group by"

2018-04-10 Thread Chris mildebrandt (JIRA)
Chris mildebrandt created CASSANDRA-14376:
-

 Summary: Limiting a clustering column with a range not allowed 
when using "group by"
 Key: CASSANDRA-14376
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14376
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: Cassandra 3.11.1
Reporter: Chris mildebrandt


I’m trying to use a range to limit a clustering column while at the same time 
using `group by` and running into issues. Here’s a sample table:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

When I filter `sample` by a range, I get an error:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
{{InvalidRequest: Error from server: code=2200 [Invalid query] 
message="Group by currently only support groups of columns following their 
declared order in the PRIMARY KEY"}}

However, it allows the query when I change from a range to an equals:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

{{ city   | state | system.sum(count)}}
{{--------+-------+------------------}}
{{ Austin | TX    |                 2}}
{{ Denver | CO    |                 1}}
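Until the server supports a range predicate together with this GROUP BY shape, one workaround (a sketch, not from the ticket) is to keep the range filter server-side and fold the rows client-side. The `rows` list here is a hypothetical stand-in for the result of the range query above:

```python
from collections import defaultdict

# Stand-in for: SELECT city, state, count FROM samples
#   WHERE name='bob' AND partition=1 AND sample>=1 AND sample<=3;
rows = [
    ("Austin", "TX", 2),
    ("Denver", "CO", 1),
    ("Austin", "TX", 3),
]

def client_side_group_sum(rows):
    """Aggregate sum(count) per (city, state) on the client,
    sidestepping the server-side GROUP BY restriction."""
    totals = defaultdict(int)
    for city, state, count in rows:
        totals[(city, state)] += count
    return dict(totals)

print(client_side_group_sum(rows))
# {('Austin', 'TX'): 5, ('Denver', 'CO'): 1}
```

This trades extra rows over the wire for the grouping the server currently rejects.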



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: renamed ColumnFamilyStoreCQLHelper to TableCQLHelper

2018-04-10 Thread rustyrazorblade
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4991ca26a -> e75c51719


renamed ColumnFamilyStoreCQLHelper to TableCQLHelper

Patch by Venkata+Harikrishna, reviewed by Jon Haddad for CASSANDRA-14354


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e75c5171
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e75c5171
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e75c5171

Branch: refs/heads/trunk
Commit: e75c5171964b3211776136c50f0d8514b85d6295
Parents: 4991ca2
Author: Venkata+Harikrishna Nukala 
Authored: Sat Mar 31 04:16:27 2018 +0530
Committer: Jon Haddad 
Committed: Tue Apr 10 16:29:14 2018 -0700

--
 CHANGES.txt |   2 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   2 +-
 .../db/ColumnFamilyStoreCQLHelper.java  | 428 --
 .../org/apache/cassandra/db/TableCQLHelper.java | 428 ++
 .../db/ColumnFamilyStoreCQLHelperTest.java  | 447 ---
 .../apache/cassandra/db/TableCQLHelperTest.java | 447 +++
 6 files changed, 878 insertions(+), 876 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e75c5171/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 650f740..bb8c731 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 4.0
+ * Rename internals to reflect CQL vocabulary
+   (CASSANDRA-14354)
  * Add support for hybrid MIN(), MAX() speculative retry policies
(CASSANDRA-14293, CASSANDRA-14338, CASSANDRA-14352)
  * Fix some regressions caused by 14058 (CASSANDRA-14353)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e75c5171/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index e4b84fe..34535e5 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1824,7 +1824,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 try (PrintStream out = new PrintStream(schemaFile))
 {
-for (String s: 
ColumnFamilyStoreCQLHelper.dumpReCreateStatements(metadata()))
+for (String s: 
TableCQLHelper.dumpReCreateStatements(metadata()))
 out.println(s);
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e75c5171/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java
deleted file mode 100644
index 740ef3f..000
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java
+++ /dev/null
@@ -1,428 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.cassandra.db;
-
-import java.nio.ByteBuffer;
-import java.util.*;
-import java.util.concurrent.atomic.*;
-import java.util.function.*;
-
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.Iterables;
-
-import org.apache.cassandra.cql3.statements.*;
-import org.apache.cassandra.db.marshal.*;
-import org.apache.cassandra.schema.*;
-import org.apache.cassandra.utils.*;
-
-/**
- * Helper methods to represent TableMetadata and related objects in CQL format
- */
-public class ColumnFamilyStoreCQLHelper
-{
-public static List dumpReCreateStatements(TableMetadata metadata)
-{
-List l = new ArrayList<>();
-// Types come first, as table can't be created without them
-l.addAll(ColumnFamilyStoreCQLHelper.getUserTypesAsCQL(metadata));
-// Record re-create schema statements
-

[jira] [Updated] (CASSANDRA-14354) rename ColumnFamilyStoreCQLHelper to TableCQLHelper

2018-04-10 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-14354:
---
   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Patch Available)

Committed to trunk as e75c517196.

> rename ColumnFamilyStoreCQLHelper to TableCQLHelper
> ---
>
> Key: CASSANDRA-14354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14354
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jon Haddad
>Assignee: Venkata Harikrishna Nukala
>Priority: Major
> Fix For: 4.0
>
> Attachments: 14354-trunk.txt
>
>
> Seems like a simple 1:1 rename.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13910) Remove read_repair_chance/dclocal_read_repair_chance

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433077#comment-16433077
 ] 

Aleksey Yeschenko commented on CASSANDRA-13910:
---

bq. For the ReadRepair interface, since there’s no such thing as a background 
read repair anymore, we should rename startForegroundRepair and 
awaitForegroundRepairFinish to something like startRepair and awaitRepair.

I had that thought, but wanted to keep the patch to pure surgical removal. I 
was wrong, though, and you are right - we might as well rename it here, to 
minimise future confusion. Will do.

bq. What do you think about removing the related code from the schema, ddl, and 
table metadata code as well? The only benefit I see to keeping it around is 
that tooling that reads or writes these values won’t break. Since we’re 
removing the feature, and it’s going in a x.0 release that will require some 
review of tooling, it would probably be more informative to throw an exception, 
instead of printing a warning that may or may not be noticed.

I don't know. A part of me thinks that it's a bit rude to go from full support 
to throwing exceptions in one jump. I would prefer to warn and ignore in 4.0, 
but clean up the rest in some future release. Turning on the [~slebresne] 
signal.



> Remove read_repair_chance/dclocal_read_repair_chance
> 
>
> Key: CASSANDRA-13910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13910
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.0
>
>
> First, let me clarify so this is not misunderstood that I'm not *at all* 
> suggesting to remove the read-repair mechanism of detecting and repairing 
> inconsistencies between read responses: that mechanism is imo fine and 
> useful.  But the {{read_repair_chance}} and {{dclocal_read_repair_chance}} 
> have never been about _enabling_ that mechanism, they are about querying all 
> replicas (even when this is not required by the consistency level) for the 
> sole purpose of maybe read-repairing some of the replica that wouldn't have 
> been queried otherwise. Which btw, bring me to reason 1 for considering their 
> removal: their naming/behavior is super confusing. Over the years, I've seen 
> countless users (and not only newbies) misunderstanding what those options 
> do, and as a consequence misunderstand when read-repair itself was happening.
> But my 2nd reason for suggesting this is that I suspect 
> {{read_repair_chance}}/{{dclocal_read_repair_chance}} are, especially 
> nowadays, more harmful than anything else when enabled. When those option 
> kick in, what you trade-off is additional resources consumption (all nodes 
> have to execute the read) for a _fairly remote chance_ of having some 
> inconsistencies repaired on _some_ replica _a bit faster_ than they would 
> otherwise be. To justify that last part, let's recall that:
> # most inconsistencies are actually fixed by hints in practice; and in the 
> case where a node stay dead for a long time so that hints ends up timing-out, 
> you really should repair the node when it comes back (if not simply 
> re-bootstrapping it).  Read-repair probably don't fix _that_ much stuff in 
> the first place.
> # again, read-repair do happen without those options kicking in. If you do 
> reads at {{QUORUM}}, inconsistencies will eventually get read-repaired all 
> the same.  Just a tiny bit less quickly.
> # I suspect almost everyone use a low "chance" for those options at best 
> (because the extra resources consumption is real), so at the end of the day, 
> it's up to chance how much faster this fixes inconsistencies.
> Overall, I'm having a hard time imagining real cases where that trade-off 
> really make sense. Don't get me wrong, those options had their places a long 
> time ago when hints weren't working all that well, but I think they bring 
> more confusion than benefits now.
> And I think it's sane to reconsider stuffs every once in a while, and to 
> clean up anything that may not make all that much sense anymore, which I 
> think is the case here.
> Tl;dr, I feel the benefits brought by those options are very slim at best and 
> well overshadowed by the confusion they bring, and not worth maintaining the 
> code that supports them (which, to be fair, isn't huge, but getting rid of 
> {{ReadCallback.AsyncRepairRunner}} wouldn't hurt for instance).
> Lastly, if the consensus here ends up being that they can have their use in 
> weird cases and that we feel supporting those cases is worth confusing 
> everyone else and maintaining that code, I would still suggest disabling them 
> totally by default.
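The behaviour being removed can be summarised in a few lines. This is a simplified model of what {{read_repair_chance}}/{{dclocal_read_repair_chance}} did, not Cassandra's actual replica-selection code:

```python
import random

def replicas_to_query(all_replicas, needed_for_cl, chance, rng=random):
    """Model of read_repair_chance: normally contact only as many
    replicas as the consistency level needs, but with probability
    `chance` contact all of them, purely so the extra responses can
    be compared and read-repaired."""
    if rng.random() < chance:
        return list(all_replicas)          # speculative full read
    return list(all_replicas[:needed_for_cl])
```

The trade-off the ticket describes is visible directly: every triggered roll costs a read on every replica, in exchange for an occasional chance of repairing a replica a bit sooner than QUORUM reads and hints would anyway.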



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


cassandra-dtest git commit: Mark PK as NOT NULL explicitly for MV in pushed_notifications_test.py

2018-04-10 Thread aleksey
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 0e6a1e6ed -> 4f2996b46


Mark PK as NOT NULL explicitly for MV in pushed_notifications_test.py


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/4f2996b4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/4f2996b4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/4f2996b4

Branch: refs/heads/master
Commit: 4f2996b46a07cb371cae1180638e1b8d7039bf50
Parents: 0e6a1e6
Author: Aleksey Yeschenko 
Authored: Tue Apr 10 23:38:32 2018 +0100
Committer: Aleksey Yeschenko 
Committed: Tue Apr 10 23:38:32 2018 +0100

--
 pushed_notifications_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/4f2996b4/pushed_notifications_test.py
--
diff --git a/pushed_notifications_test.py b/pushed_notifications_test.py
index 9b888de..3447fdc 100644
--- a/pushed_notifications_test.py
+++ b/pushed_notifications_test.py
@@ -301,7 +301,7 @@ class TestPushedNotifications(Tester):
 session.execute("create TABLE t (k int PRIMARY KEY , v int)")
 session.execute("alter TABLE t add v1 int;")
 
-session.execute("create MATERIALIZED VIEW mv as select * from t WHERE 
v IS NOT NULL AND v1 IS NOT NULL PRIMARY KEY (v, k)")
+session.execute("create MATERIALIZED VIEW mv as select * from t WHERE 
v IS NOT NULL AND k IS NOT NULL PRIMARY KEY (v, k)")
 session.execute(" alter materialized view mv with min_index_interval = 
100")
 
 session.execute("drop MATERIALIZED VIEW mv")


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14375) Digest mismatch Exception when sending raw hints in cluster

2018-04-10 Thread Vineet Ghatge (JIRA)
Vineet Ghatge created CASSANDRA-14375:
-

 Summary: Digest mismatch Exception when sending raw hints in 
cluster
 Key: CASSANDRA-14375
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14375
 Project: Cassandra
  Issue Type: Bug
  Components: Hints
 Environment: CentOS 7.3
Reporter: Vineet Ghatge


We have a 14-node cluster where we have seen hints files getting corrupted, 
resulting in the following error:

[04/06/18 12:21 PM] Kotkar, Shantanu: ERROR [HintsDispatcher:1] 2018-04-06 
16:26:44,423 CassandraDaemon.java:228 - Exception in thread 
Thread[HintsDispatcher:1,1,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Digest mismatch 
exception
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:298)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:263)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:169) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:128)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:113) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:94) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:278)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:260)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:238)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:217)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_141]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_141]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_141]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[na:1.8.0_141]
 at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
 [apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_141]
Caused by: java.io.IOException: Digest mismatch exception
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:315)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:289)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 ... 16 common frames omitted

Notes on cluster and investigation done so far
1. The Cassandra used here was built locally from the 3.11.1 branch along with 
the following patch from issue CASSANDRA-14080:
 
https://github.com/apache/cassandra/commit/68079e4b2ed4e58dbede70af45414b3d4214e195
2. The bootstrap of the 14 nodes happens in the following way:
 - Out of the 14 nodes, only 3 are picked as seed nodes.
 - Only 1 of the 3 seed nodes is started, and the schema is created if it was 
not created previously.
 - After this, the rest of the nodes are bootstrapped.
 - In the failure scenario, only 5 out of 14 nodes successfully formed the 
Cassandra cluster. The failed nodes include two seed nodes.
3. We confirmed the following patch from CASSANDRA-13696 has been applied. We 
confirmed with Jay Zhuang that this is a different issue from what was 
previously fixed.
"this should be a different issue, as HintsDispatcher.java:128 sends hints with 
\{{buffer}}s, this patch is only to fix the digest mismatch for 
HintsDispatcher.java:129, which sends hints one by one."
4. The application uses the Java driver with a QUORUM consistency setting for 
Cassandra.
5. We saw this issue on a 7-node cluster too (different from the 14-node 
cluster).
6. We are able to work around it by running nodetool truncatehints on the 
failed nodes and restarting Cassandra.
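A "digest mismatch" means a per-record checksum stored in the hints file no longer matches the checksum recomputed over the record's bytes. The general pattern looks like the following generic sketch (CRC32 over a length-prefixed payload; Cassandra's actual hints file format differs):

```python
import struct
import zlib

def append_record(buf, payload):
    """Append a length-prefixed payload followed by its CRC32 digest."""
    buf += struct.pack(">I", len(payload))
    buf += payload
    buf += struct.pack(">I", zlib.crc32(payload))

def read_record(buf, off=0):
    """Read one record back, raising on a digest mismatch, which is
    what the hints dispatcher surfaces as FSReadError above."""
    (length,) = struct.unpack_from(">I", buf, off)
    payload = bytes(buf[off + 4 : off + 4 + length])
    (stored,) = struct.unpack_from(">I", buf, off + 4 + length)
    if zlib.crc32(payload) != stored:
        raise IOError("Digest mismatch exception")
    return payload

buf = bytearray()
append_record(buf, b"hint-mutation")
assert read_record(buf) == b"hint-mutation"
buf[6] ^= 0xFF  # flip one payload byte: the digest check now fails
```

Such a mismatch points at the file being corrupted on disk (or a writer/reader disagreement about the format), which is why truncating the hints works around it.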



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14375) Digest mismatch Exception when sending raw hints in cluster

2018-04-10 Thread Vineet Ghatge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433114#comment-16433114
 ] 

Vineet Ghatge commented on CASSANDRA-14375:
---

I am trying to reproduce the issue using ccm. I will update this once I have 
something working.

> Digest mismatch Exception when sending raw hints in cluster
> ---
>
> Key: CASSANDRA-14375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14375
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
> Environment: CentOS 7.3
>Reporter: Vineet Ghatge
>Priority: Major
>
> We have a 14-node cluster where we have seen hints files getting corrupted, 
> resulting in the following error:
> [04/06/18 12:21 PM] Kotkar, Shantanu: ERROR [HintsDispatcher:1] 2018-04-06 
> 16:26:44,423 CassandraDaemon.java:228 - Exception in thread 
> Thread[HintsDispatcher:1,1,main]
> org.apache.cassandra.io.FSReadError: java.io.IOException: Digest mismatch 
> exception
>  at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:298)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:263)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:169)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:128)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:113) 
> ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:94) 
> ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:278)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:260)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:238)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:217)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_141]
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_141]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_141]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_141]
>  at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_141]
> Caused by: java.io.IOException: Digest mismatch exception
>  at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:315)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  at 
> org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:289)
>  ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
>  ... 16 common frames omitted
> Notes on cluster and investigation done so far
> 1. The Cassandra used here was built locally from the 3.11.1 branch along 
> with the following patch from issue CASSANDRA-14080:
>  
> https://github.com/apache/cassandra/commit/68079e4b2ed4e58dbede70af45414b3d4214e195
> 2. The bootstrap of the 14 nodes happens in the following way:
>  - Out of the 14 nodes, only 3 are picked as seed nodes.
>  - Only 1 of the 3 seed nodes is started, and the schema is created if it was 
> not created previously.
>  - After this, the rest of the nodes are bootstrapped.
>  - In the failure scenario, only 5 out of 14 nodes successfully formed the 
> Cassandra cluster. The failed nodes include two seed nodes.
> 3. We confirmed the following patch from CASSANDRA-13696 has been applied. We 
> confirmed with Jay Zhuang that this is a different issue from what was 
> previously fixed.
> "this should be a different issue, as HintsDispatcher.java:128 sends hints 
> with \{{buffer}}s, this patch is only to fix the digest mismatch for 
> HintsDispatcher.java:129, which sends hints one by one."
> 4. The application uses the Java driver with a QUORUM consistency setting 
> for Cassandra.
> 5. We saw this issue on 7 node cluster too (different 

[jira] [Commented] (CASSANDRA-13910) Remove read_repair_chance/dclocal_read_repair_chance

2018-04-10 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433066#comment-16433066
 ] 

Blake Eggleston commented on CASSANDRA-13910:
-

Looking at the read path, it looks like this removes things in all the right 
places.

For the {{ReadRepair}} interface, since there’s no such thing as a background 
read repair anymore, we should rename {{startForegroundRepair}} and 
{{awaitForegroundRepairFinish}} to something like {{startRepair}} and 
{{awaitRepair}}.

What do you think about removing the related code from the schema, ddl, and 
table metadata code as well? The only benefit I see to keeping it around is 
that tooling that reads or writes these values won’t break. Since we’re 
removing the feature, and it’s going in a x.0 release that will require some 
review of tooling, it would probably be more informative to throw an exception, 
instead of printing a warning that may or may not be noticed. 


> Remove read_repair_chance/dclocal_read_repair_chance
> 
>
> Key: CASSANDRA-13910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13910
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.0
>
>
> First, let me clarify, so this is not misunderstood, that I'm not *at all* 
> suggesting we remove the read-repair mechanism of detecting and repairing 
> inconsistencies between read responses: that mechanism is imo fine and 
> useful.  But {{read_repair_chance}} and {{dclocal_read_repair_chance}} 
> have never been about _enabling_ that mechanism; they are about querying all 
> replicas (even when this is not required by the consistency level) for the 
> sole purpose of maybe read-repairing some of the replicas that wouldn't have 
> been queried otherwise. Which, btw, brings me to reason 1 for considering 
> their removal: their naming/behavior is super confusing. Over the years, I've 
> seen countless users (and not only newbies) misunderstand what those options 
> do, and as a consequence misunderstand when read-repair itself was happening.
> My 2nd reason for suggesting this is that I suspect 
> {{read_repair_chance}}/{{dclocal_read_repair_chance}} are, especially 
> nowadays, more harmful than anything else when enabled. When those options 
> kick in, what you trade off is additional resource consumption (all nodes 
> have to execute the read) for a _fairly remote chance_ of having some 
> inconsistencies repaired on _some_ replicas _a bit faster_ than they would 
> otherwise be. To justify that last part, let's recall that:
> # most inconsistencies are actually fixed by hints in practice; and in the 
> case where a node stays dead for so long that hints end up timing out, 
> you really should repair the node when it comes back (if not simply 
> re-bootstrap it).  Read-repair probably doesn't fix _that_ much stuff in 
> the first place.
> # again, read-repair does happen without those options kicking in. If you do 
> reads at {{QUORUM}}, inconsistencies will eventually get read-repaired all 
> the same, just a tiny bit less quickly.
> # I suspect almost everyone uses a low "chance" for those options at best 
> (because the extra resource consumption is real), so at the end of the day, 
> it's up to chance how much faster this fixes inconsistencies.
> Overall, I'm having a hard time imagining real cases where that trade-off 
> really makes sense. Don't get me wrong, those options had their place a long 
> time ago when hints weren't working all that well, but I think they bring 
> more confusion than benefit now.
> And I think it's sane to reconsider things every once in a while, and to 
> clean up anything that may not make all that much sense anymore, which I 
> think is the case here.
> Tl;dr, I feel the benefits brought by those options are very slim at best, 
> well overshadowed by the confusion they bring, and not worth maintaining the 
> code that supports them (which, to be fair, isn't huge, but getting rid of 
> {{ReadCallback.AsyncRepairRunner}} wouldn't hurt, for instance).
> Lastly, if the consensus here ends up being that they can have their uses in 
> weird cases and that we feel supporting those cases is worth confusing 
> everyone else and maintaining that code, I would still suggest disabling them 
> entirely by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13910) Remove read_repair_chance/dclocal_read_repair_chance

2018-04-10 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433085#comment-16433085
 ] 

Blake Eggleston commented on CASSANDRA-13910:
-

I agree it's a little rude, but we may as well discuss while we're here.

> Remove read_repair_chance/dclocal_read_repair_chance
> 
>
> Key: CASSANDRA-13910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13910
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.0
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14376) Limiting a clustering column with a range not allowed when using "group by"

2018-04-10 Thread Chris mildebrandt (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris mildebrandt updated CASSANDRA-14376:
--
Description: 
I’m trying to use a range to limit a clustering column while at the same time 
using `group by` and running into issues. Here’s a sample table:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

When I filter `sample` by a range, I get an error:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
 {{{color:#ff}InvalidRequest: Error from server: code=2200 [Invalid query] 
message="Group by currently only support groups of columns following their 
declared order in the PRIMARY KEY"{color}}}

However, it allows the query when I change from a range to an equals:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

{{ city   | state | system.sum(count)}}
{{--------+-------+-------------------}}
{{ Austin |    TX |                 2}}
{{ Denver |    CO |                 1}}

  was:
I’m trying to use a range to limit a clustering column while at the same time 
using `group by` and running into issues. Here’s a sample table:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

When I filter `sample` by a range, I get an error:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
{{{color:#FF}InvalidRequest: Error from server: code=2200 [Invalid query] 
message="Group by currently only support groups of columns following their 
declared order in the PRIMARY KEY"{color}}}

However, it allows the query when I change from a range to an equals:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

{{ city   | state | system.sum(count)}}
{{--------+-------+-------------------}}
{{ Austin |    TX |                 2}}
{{ Denver |    CO |                 1}}


> Limiting a clustering column with a range not allowed when using "group by"
> ---
>
> Key: CASSANDRA-14376
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14376
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11.1
>Reporter: Chris mildebrandt
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14376) Limiting a clustering column with a range not allowed when using "group by"

2018-04-10 Thread Chris mildebrandt (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433183#comment-16433183
 ] 

Chris mildebrandt commented on CASSANDRA-14376:
---

Sample for reproduction:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=1 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=3 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=3 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=1 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=1 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Austin' and state='TX';}}

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

> Limiting a clustering column with a range not allowed when using "group by"
> ---
>
> Key: CASSANDRA-14376
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14376
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11.1
>Reporter: Chris mildebrandt
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432918#comment-16432918
 ] 

Paulo Motta commented on CASSANDRA-14374:
-

{quote}Because the previous patch introduced a minor annoying regression, in 
that 99p for example is being serialized as 99.00p (instead of 99p). And check 
it in pt_BR locale as well as en_US?
{quote}
Even if we use a {{DateFormatter}}, we still need to specify the US locale to 
ensure the dot (and not the comma) is used as the decimal separator, so I 
changed the toString to:
{code:none}
return String.format("%sp", new DecimalFormat("#.", new 
DecimalFormatSymbols(Locale.ENGLISH)).format(percentile));
{code}
I also updated the {{SpeculativeRetryParseTest}} to test round-trip parsing 
with the Brazilian locale, which uses a comma as the decimal separator. The 
patch is attached.
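As a standalone illustration of the underlying problem (a sketch, not the actual patch): default or {{pt_BR}} locale formatting produces a comma, while a {{DecimalFormat}} pinned to English symbols always produces a dot and also drops redundant trailing zeros:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class LocaleFormatDemo
{
    public static void main(String[] args)
    {
        double percentile = 99.0;
        // Formatting with an explicit pt_BR locale uses a comma as the
        // decimal separator -- the character that broke the retry parser.
        String brazilian = String.format(new Locale("pt", "BR"), "%.2fp", percentile);
        // Pinning the symbols to English guarantees a dot regardless of the
        // JVM default locale, and "#.##" drops the trailing ".00".
        DecimalFormat df = new DecimalFormat("#.##", new DecimalFormatSymbols(Locale.ENGLISH));
        String english = df.format(percentile) + "p";
        System.out.println(brazilian); // prints "99,00p"
        System.out.println(english);   // prints "99p"
    }
}
```

This is why the fix both pins the locale and changes the pattern: either change alone would leave one of the two regressions (comma separator, or the serialized `99.00p`).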

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which 
> uses a comma instead of a dot as the decimal separator, so the speculative 
> retry option parsing introduced by CASSANDRA-14293, which assumed the 
> {{en_US}} locale, was not working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14372) data_file_directories config - update documentation in cassandra.yaml

2018-04-10 Thread Venkata Harikrishna Nukala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Harikrishna Nukala updated CASSANDRA-14372:
---
Status: Patch Available  (was: Open)

> data_file_directories config - update documentation in cassandra.yaml
> -
>
> Key: CASSANDRA-14372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Venkata Harikrishna Nukala
>Assignee: Venkata Harikrishna Nukala
>Priority: Minor
> Attachments: 14372-trunk.txt
>
>
> If the "data_file_directories" configuration lists multiple directories, data 
> is partitioned by token range so that it gets distributed evenly. But the 
> current documentation says that "Cassandra will spread data evenly across 
> them, subject to the granularity of the configured compaction strategy". This 
> comment needs to be updated to reflect the actual behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432941#comment-16432941
 ] 

Paulo Motta commented on CASSANDRA-14374:
-

Oh you guys were faster with CASSANDRA-14352. :) Attached 
0003-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch rebasing on top 
of that.

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0003-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0003-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0003-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433018#comment-16433018
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


[~aweisberg] Both patches are uploaded. I retained only one dtest for 3 
datacenters in nodetool_test.py.

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v6.patch, jira_13853_dtest_v2.patch
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, number of nodes per DC, and how many are down
> * Version(s)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Preetika Tyagi updated CASSANDRA-13853:
---
Attachment: jira_13853_dtest_v2.patch

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v6.patch, jira_13853_dtest_v2.patch
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, number of nodes per DC, and how many are down
> * Version(s)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14303) NetworkTopologyStrategy could have a "default replication" option

2018-04-10 Thread Joseph Lynch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433023#comment-16433023
 ] 

Joseph Lynch commented on CASSANDRA-14303:
--

[~snazy]

I think that due to the way that I've implemented it (as an auto-expansion at 
{{CREATE}}/{{ALTER}} time which never changes existing definitions), we 
shouldn't be vulnerable to any such corner cases. We may have to start 
considering them if we ran something like a scheduled ALTER against the 
{{system_distributed}} table to change it to NTS instead of SS, but I think 
that's a different discussion.
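The CREATE/ALTER-time expansion described above could be sketched roughly like this (a hypothetical helper under simplified assumptions, not the actual implementation, which works on Cassandra's schema types):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Hypothetical sketch of the auto-expansion: a 'replication_factor' option is
// replaced by one entry per known datacenter, while explicitly supplied
// per-DC values are left untouched. Existing definitions are never changed;
// expansion happens only when a CREATE/ALTER statement is processed.
public class ReplicationExpansion
{
    public static Map<String, String> expand(Map<String, String> options, Set<String> datacenters)
    {
        Map<String, String> expanded = new TreeMap<>(options);
        String defaultRf = expanded.remove("replication_factor");
        if (defaultRf != null)
            for (String dc : datacenters)
                expanded.putIfAbsent(dc, defaultRf);
        return expanded;
    }

    public static void main(String[] args)
    {
        Map<String, String> options = new HashMap<>();
        options.put("replication_factor", "3");
        options.put("dc1", "2"); // explicit override survives expansion
        Set<String> dcs = new TreeSet<>(Arrays.asList("dc1", "dc2"));
        System.out.println(expand(options, dcs)); // prints "{dc1=2, dc2=3}"
    }
}
```

Because the default is resolved into concrete per-DC entries at statement time, later topology changes cannot silently alter an existing keyspace's replication, which is what avoids the corner cases discussed above.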

> NetworkTopologyStrategy could have a "default replication" option
> -
>
> Key: CASSANDRA-14303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14303
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
>
> Right now when creating a keyspace with {{NetworkTopologyStrategy}} the user 
> has to manually specify the datacenters they want their data replicated to 
> with parameters, e.g.:
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 3, 'dc2': 3}{noformat}
> This is a poor user interface because it requires the creator of the keyspace 
> (typically a developer) to know the layout of the Cassandra cluster (which 
> may or may not be controlled by them). Also, at least in my experience, folks 
> typo the datacenters _all_ the time. To work around this I see a number of 
> users creating automation around this where the automation describes the 
> Cassandra cluster and automatically expands out to all the dcs that Cassandra 
> knows about. Why can't Cassandra just do this for us, re-using the previously 
> forbidden {{replication_factor}} option (for backwards compatibility):
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> This would automatically replicate this Keyspace to all datacenters that are 
> present in the cluster. If you need to _override_ the default you could 
> supply a datacenter name, e.g.:
> {noformat}
> > CREATE KEYSPACE test WITH replication = {'class': 
> > 'NetworkTopologyStrategy', 'replication_factor': 3, 'dc1': 2}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '2', 'dc2': 3} AND durable_writes = true;
> {noformat}
> On the implementation side, I think it may be reasonably straightforward to 
> do an auto-expansion at keyspace creation (or alter) time, where the 
> above would automatically expand to list out the datacenters. We could allow 
> this to be recomputed whenever an AlterKeyspaceStatement runs, so that to add 
> datacenters you would just run:
> {noformat}
> ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> and this would check whether the dcs in the current schema differ, adding in 
> the new ones (_for safety reasons we'd never remove dcs that weren't 
> explicitly supplied as zero when auto-generating dcs_). Removing a datacenter 
> becomes an alter that includes an override for the dc you want to remove (or 
> of course you can always skip the auto-expansion and just use the old way):
> {noformat}
> // Tell it explicitly not to replicate to dc2
> > ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> > 'replication_factor': 3, 'dc2': 0}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '3'} AND durable_writes = true;{noformat}
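The proposed expansion is easy to model. Here is a minimal Python sketch of the idea (hypothetical names, not the actual Java implementation): apply {{replication_factor}} to every known datacenter, let explicit per-dc entries override it, and drop dcs explicitly set to zero.

```python
def expand_replication(options, datacenters):
    """Sketch of the proposed NTS auto-expansion (illustrative only).

    options:     replication options as given in CREATE/ALTER KEYSPACE,
                 minus the 'class' entry, e.g. {'replication_factor': 3, 'dc2': 0}
    datacenters: datacenters currently known to the cluster
    """
    opts = dict(options)                      # don't mutate the caller's dict
    rf = opts.pop("replication_factor", None)
    # Default every known dc to the requested replication factor...
    expanded = {dc: rf for dc in datacenters} if rf is not None else {}
    # ...then let explicitly supplied per-dc values win...
    expanded.update(opts)
    # ...and drop dcs the user explicitly zeroed out.
    return {dc: n for dc, n in expanded.items() if n != 0}
```

With the examples from the ticket, `{'replication_factor': 3, 'dc1': 2}` on a two-dc cluster expands to `{'dc1': 2, 'dc2': 3}`, and `{'replication_factor': 3, 'dc2': 0}` to `{'dc1': 3}`.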






cassandra-dtest git commit: Fix schema_metadata_test on trunk

2018-04-10 Thread aleksey
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 3a4b5d98e -> 1df74a6af


Fix schema_metadata_test on trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/1df74a6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/1df74a6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/1df74a6a

Branch: refs/heads/master
Commit: 1df74a6afe80d26192a9310e349885114bde181d
Parents: 3a4b5d9
Author: Aleksey Yeschenko 
Authored: Mon Apr 9 13:56:37 2018 +0100
Committer: Aleksey Yeschenko 
Committed: Tue Apr 10 21:28:43 2018 +0100

--
 schema_metadata_test.py | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/1df74a6a/schema_metadata_test.py
--
diff --git a/schema_metadata_test.py b/schema_metadata_test.py
index fdfcf56..d8d727b 100644
--- a/schema_metadata_test.py
+++ b/schema_metadata_test.py
@@ -227,9 +227,14 @@ def verify_nondefault_table_settings(created_on_version, current_version, keyspa
 assert 20 == meta.options['max_index_interval']
 
 if created_on_version >= '3.0':
-assert '55PERCENTILE' == meta.options['speculative_retry']
 assert 2121 == meta.options['memtable_flush_period_in_ms']
 
+if created_on_version >= '3.0':
+if created_on_version >= '4.0':
+assert '55p' == meta.options['speculative_retry']
+else:
+assert '55PERCENTILE' == meta.options['speculative_retry']
+
 if current_version >= '3.0':
 assert 'org.apache.cassandra.io.compress.DeflateCompressor' == 
meta.options['compression']['class']
 assert '128' == meta.options['compression']['chunk_length_in_kb']





[jira] [Commented] (CASSANDRA-14358) OutboundTcpConnection can hang for many minutes when nodes restart

2018-04-10 Thread Joseph Lynch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432971#comment-16432971
 ] 

Joseph Lynch commented on CASSANDRA-14358:
--

Yeah, I'm interested in what the right default for this setting is. For 
normal TCP connections I think it would be reasonable to set this very low 
(close to the 2s TCP connect timeout we use right now), but for SSL ... 
losing those SSL handshakes is somewhat of a bummer if it's just a temporary 
switch failure + OSPF convergence or something. What do you think about being 
conservative (maybe 30s) for SSL, and making it a hot property in 
addition to the yaml configuration?

I've been testing this option on Linux 4.4 and 4.13 with [a minimal 
repro|https://gist.github.com/jolynch/90033c2b10ab8280859c8cfe352503cd], and 
the option works well if set to a small number (e.g. 5, 10, 20s), but it seems 
to take about 2x as long for large settings, and (I need to do further 
testing) on Linux 4.4 any setting greater than 30s appears to just fall back 
to the system behavior (on 4.13 it is 2x the timeout, but not the system 
default). So if we set it to 30s we'd get a 60s timeout on most modern Linux 
kernels, and if it's not effective we'll just get the system default.
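For reference, the pattern under discussion — bounding both the connect call and how long unacknowledged writes may linger — can be sketched in Python (illustrative names; the actual patch is against Cassandra's Java {{OutboundTcpConnection}}):

```python
import socket

def connect_with_timeouts(host, port, connect_timeout=2.0, user_timeout_ms=30000):
    """Open a TCP connection with an explicit connect timeout and, on Linux,
    TCP_USER_TIMEOUT so that a stuck peer can't hang writes indefinitely.
    (Sketch only; parameter names and defaults are illustrative.)"""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(connect_timeout)  # bounds the connect() syscall
    if hasattr(socket, "TCP_USER_TIMEOUT"):  # Linux >= 2.6.37
        # Maximum time (ms) transmitted data may remain unacknowledged
        # before the kernel forcibly closes the connection.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, user_timeout_ms)
    s.connect((host, port))
    return s
```

As noted above, the effective timeout observed on 4.13 was roughly 2x the configured `TCP_USER_TIMEOUT` value, so the value passed here is a floor, not an exact bound.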

> OutboundTcpConnection can hang for many minutes when nodes restart
> --
>
> Key: CASSANDRA-14358
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14358
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.19 (also reproduced on 3.0.15), running 
> with {{internode_encryption: all}} and the EC2 multi region snitch on Linux 
> 4.13 within the same AWS region. The smallest cluster I've seen the problem 
> on is 12 nodes; it reproduces more reliably on 40+ nodes, and 300-node 
> clusters consistently reproduce it on at least one node.
> So all the connections are SSL and we're connecting on the internal ip 
> addresses (not the public endpoint ones).
> Potentially relevant sysctls:
> {noformat}
> /proc/sys/net/ipv4/tcp_syn_retries = 2
> /proc/sys/net/ipv4/tcp_synack_retries = 5
> /proc/sys/net/ipv4/tcp_keepalive_time = 7200
> /proc/sys/net/ipv4/tcp_keepalive_probes = 9
> /proc/sys/net/ipv4/tcp_keepalive_intvl = 75
> /proc/sys/net/ipv4/tcp_retries2 = 15
> {noformat}
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Major
> Attachments: 10 Minute Partition.pdf
>
>
> I've been trying to debug nodes not being able to see each other during 
> longer (~5 minute+) Cassandra restarts in 3.0.x and 2.1.x which can 
> contribute to {{UnavailableExceptions}} during rolling restarts of 3.0.x and 
> 2.1.x clusters for us. I think I finally have a lead. It appears that prior 
> to trunk (with the awesome Netty refactor) we do not set socket connect 
> timeouts on SSL connections (in 2.1.x, 3.0.x, or 3.11.x) nor do we set 
> {{SO_TIMEOUT}} as far as I can tell on outbound connections either. I believe 
> that this means that we could potentially block forever on {{connect}} or 
> {{recv}} syscalls, and we could block forever on the SSL Handshake as well. I 
> think that the OS will protect us somewhat (and that may be what's causing 
> the eventual timeout) but I think that given the right network conditions our 
> {{OutboundTCPConnection}} threads can just be stuck never making any progress 
> until the OS intervenes.
> I have attached some logs of such a network partition during a rolling 
> restart where an old node in the cluster has a completely foobarred 
> {{OutboundTcpConnection}} for ~10 minutes before finally getting a 
> {{java.net.SocketException: Connection timed out (Write failed)}} and 
> immediately successfully reconnecting. I conclude that the old node is the 
> problem because the new node (the one that restarted) is sending ECHOs to the 
> old node, and the old node is sending ECHOs and REQUEST_RESPONSES to the new 
> node's ECHOs, but the new node is never getting the ECHOs. This appears to 
> me to indicate that the old node's {{OutboundTcpConnection}} thread is just 
> stuck and can't make any forward progress. By the time we could notice this 
> and slap TRACE logging on, the only thing we see is ~10 minutes later a 
> {{SocketException}} inside {{writeConnected}}'s flush and an immediate 
> recovery. It is interesting to me that the exception happens in 
> {{writeConnected}} and it's a _connection timeout_ (and since we see {{Write 
> failure}} I believe that this can't be a connection reset), because my 
> understanding is that we should have a fully handshaked SSL connection at 
> that point in the code.
> Current theory:
>  # "New" node restarts,  "Old" node calls 
> 

[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which 
> uses a comma instead of a dot as the decimal separator, so the speculative 
> retry option parsing introduced by CASSANDRA-14293, which assumed the 
> {{en_US}} locale, was not working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}
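The failure mode and the fix pattern can be modelled in a short Python sketch (hypothetical names; judging by the attachment name, the actual patch passes {{Locale.US}} to the formatting call in {{PercentileSpeculativeRetryPolicy}}):

```python
import re

def parse_percentile(value: str) -> float:
    """Model of a fromString-style parser: only a dot is accepted as the
    decimal separator, so a locale-formatted '99,00p' is rejected, mirroring
    the ConfigurationException in the stack trace above. (Sketch only.)"""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(?:p|percentile)", value.strip().lower())
    if m is None:
        raise ValueError(f"Specified Speculative Retry Policy [{value}] is not supported")
    return float(m.group(1))

def format_percentile(p: float) -> str:
    # Fix pattern: format with an explicit, locale-independent convention
    # (the Java fix achieves this with String.format(Locale.US, ...)).
    # Python %-formatting always uses '.', regardless of LC_NUMERIC.
    return "%.2fp" % p
```

The round trip `parse_percentile(format_percentile(99.0))` then succeeds on any locale, whereas a locale-sensitive formatter on {{pt_BR}} would emit `99,00p` and fail to parse back.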






cassandra git commit: Clean up parsing speculative retry params from string

2018-04-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk b5dbc04bd -> 4991ca26a


Clean up parsing speculative retry params from string

patch by Aleksey Yeschenko; reviewed by Blake Eggleston for
CASSANDRA-14352


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4991ca26
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4991ca26
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4991ca26

Branch: refs/heads/trunk
Commit: 4991ca26aa424286ebdee89742d35e813f9e9259
Parents: b5dbc04
Author: Aleksey Yeshchenko 
Authored: Thu Mar 29 15:37:01 2018 +0100
Committer: Aleksey Yeshchenko 
Committed: Tue Apr 10 21:25:16 2018 +0100

--
 CHANGES.txt |   4 +-
 .../apache/cassandra/schema/TableParams.java|   4 +-
 .../reads/AlwaysSpeculativeRetryPolicy.java |   6 +-
 .../reads/FixedSpeculativeRetryPolicy.java  |  33 -
 .../reads/HybridSpeculativeRetryPolicy.java |  70 +--
 .../reads/NeverSpeculativeRetryPolicy.java  |   6 +-
 .../reads/PercentileSpeculativeRetryPolicy.java |  45 ++-
 .../service/reads/SpeculativeRetryPolicy.java   | 122 +++
 8 files changed, 162 insertions(+), 128 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4991ca26/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c123e6f..650f740 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,12 +1,12 @@
 4.0
+ * Add support for hybrid MIN(), MAX() speculative retry policies
+   (CASSANDRA-14293, CASSANDRA-14338, CASSANDRA-14352)
  * Fix some regressions caused by 14058 (CASSANDRA-14353)
  * Abstract repair for pluggable storage (CASSANDRA-14116)
  * Add meaningful toString() impls (CASSANDRA-13653)
  * Add sstableloader option to accept target keyspace name (CASSANDRA-13884)
  * Move processing of EchoMessage response to gossip stage (CASSANDRA-13713)
  * Add coordinator write metric per CF (CASSANDRA-14232)
- * Fix scheduling of speculative retry threshold recalculation 
(CASSANDRA-14338)
- * Add support for hybrid MIN(), MAX() speculative retry policies 
(CASSANDRA-14293)
  * Correct and clarify SSLFactory.getSslContext method and call sites 
(CASSANDRA-14314)
  * Handle static and partition deletion properly on 
ThrottledUnfilteredIterator (CASSANDRA-14315)
  * NodeTool clientstats should show SSL Cipher (CASSANDRA-14322)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4991ca26/src/java/org/apache/cassandra/schema/TableParams.java
--
diff --git a/src/java/org/apache/cassandra/schema/TableParams.java 
b/src/java/org/apache/cassandra/schema/TableParams.java
index ffa310e..895e3a7 100644
--- a/src/java/org/apache/cassandra/schema/TableParams.java
+++ b/src/java/org/apache/cassandra/schema/TableParams.java
@@ -26,6 +26,7 @@ import com.google.common.collect.ImmutableMap;
 
 import org.apache.cassandra.cql3.Attributes;
 import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.service.reads.PercentileSpeculativeRetryPolicy;
 import org.apache.cassandra.service.reads.SpeculativeRetryPolicy;
 import org.apache.cassandra.utils.BloomCalculations;
 
@@ -70,6 +71,7 @@ public final class TableParams
 public static final int DEFAULT_MIN_INDEX_INTERVAL = 128;
 public static final int DEFAULT_MAX_INDEX_INTERVAL = 2048;
 public static final double DEFAULT_CRC_CHECK_CHANCE = 1.0;
+public static final SpeculativeRetryPolicy DEFAULT_SPECULATIVE_RETRY = new 
PercentileSpeculativeRetryPolicy(99.0);
 
 public final String comment;
 public final double readRepairChance;
@@ -290,7 +292,7 @@ public final class TableParams
 private int memtableFlushPeriodInMs = 
DEFAULT_MEMTABLE_FLUSH_PERIOD_IN_MS;
 private int minIndexInterval = DEFAULT_MIN_INDEX_INTERVAL;
 private int maxIndexInterval = DEFAULT_MAX_INDEX_INTERVAL;
-private SpeculativeRetryPolicy speculativeRetry = 
SpeculativeRetryPolicy.DEFAULT;
+private SpeculativeRetryPolicy speculativeRetry = 
DEFAULT_SPECULATIVE_RETRY;
 private CachingParams caching = CachingParams.DEFAULT;
 private CompactionParams compaction = CompactionParams.DEFAULT;
 private CompressionParams compression = CompressionParams.DEFAULT;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4991ca26/src/java/org/apache/cassandra/service/reads/AlwaysSpeculativeRetryPolicy.java
--
diff --git 
a/src/java/org/apache/cassandra/service/reads/AlwaysSpeculativeRetryPolicy.java 
b/src/java/org/apache/cassandra/service/reads/AlwaysSpeculativeRetryPolicy.java

[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: (was: 
0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch)







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: (was: 
0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch)







[jira] [Commented] (CASSANDRA-14352) Clean up parsing speculative retry params from string

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432915#comment-16432915
 ] 

Aleksey Yeschenko commented on CASSANDRA-14352:
---

Thanks, committed to trunk as 
[4991ca26aa424286ebdee89742d35e813f9e9259|https://github.com/apache/cassandra/commit/4991ca26aa424286ebdee89742d35e813f9e9259]
 and fixed the broken dtest 
[here|https://github.com/apache/cassandra-dtest/commit/1df74a6afe80d26192a9310e349885114bde181d].

> Clean up parsing speculative retry params from string
> -
>
> Key: CASSANDRA-14352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14352
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.x
>
>
> Follow-up to CASSANDRA-14293, to put parsing logic ({{fromString()}}) next to 
> formatting logic ({{toString()}}).






[jira] [Updated] (CASSANDRA-14372) data_file_directories config - update documentation in cassandra.yaml

2018-04-10 Thread Venkata Harikrishna Nukala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Harikrishna Nukala updated CASSANDRA-14372:
---
Attachment: 14372-trunk.txt

> data_file_directories config - update documentation in cassandra.yaml
> -
>
> Key: CASSANDRA-14372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Venkata Harikrishna Nukala
>Assignee: Venkata Harikrishna Nukala
>Priority: Minor
> Attachments: 14372-trunk.txt
>
>
> If "data_file_directories" configuration is enabled with multiple 
> directories, data is partitioned by token range so that data gets distributed 
> evenly. But the current documentation says that "Cassandra will spread data 
> evenly across them, subject to the granularity of the configured compaction 
> strategy". Need to update this comment to reflect the correct behavior.






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which uses 
> a comma instead of a dot as the decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed the {{en_US}} locale, was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}
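The failure is easy to reproduce in isolation. The sketch below is illustrative only, not the actual {{PercentileSpeculativeRetryPolicy}} code: formatting a percentile with the JVM's default locale yields "99,00p" under {{pt_BR}}, which a dot-expecting parser rejects, while pinning {{Locale.US}} (the approach the attached patches take, per their titles) keeps the output stable.

```java
import java.util.Locale;

public class LocaleFormatDemo
{
    // Illustrative helper (hypothetical name): format a percentile the way
    // a toString() implementation might, with an explicit locale.
    static String format(double percentile, Locale locale)
    {
        return String.format(locale, "%.2fp", percentile);
    }

    public static void main(String[] args)
    {
        // Default-locale formatting is environment-dependent: pt_BR uses a
        // comma as the decimal separator, so round-tripping through a
        // dot-expecting parser fails.
        System.out.println(format(99.0, new Locale("pt", "BR"))); // 99,00p
        System.out.println(format(99.0, Locale.US));              // 99.00p
    }
}
```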






[jira] [Commented] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch

2018-04-10 Thread Joseph Lynch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432984#comment-16432984
 ] 

Joseph Lynch commented on CASSANDRA-7839:
-

Ah, nice catch. The two dtest failures don't look related and the unit tests 
pass, so lg2m.

> Support standard EC2 naming conventions in Ec2Snitch
> 
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Gregory Ramsperger
>Assignee: Jason Brown
>Priority: Major
>  Labels: docs-impacting
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
>
> The EC2 snitches use datacenter and rack naming conventions inconsistent with 
> those presented in Amazon EC2 APIs as region and availability zone. A 
> discussion of this is found in CASSANDRA-4026. This has not been changed for 
> valid backwards compatibility reasons. Using SnitchProperties, it is possible 
> to switch between the legacy naming and the full, AWS-style naming. 
> Proposal:
> * introduce a property (ec2_naming_scheme) to switch naming schemes.
> * default to current/legacy naming scheme
> * add support for a new scheme ("standard") which is consistent with AWS 
> conventions
> ** data centers will be the region name, including the number
> ** racks will be the availability zone name, including the region name
> Examples:
> * *legacy*: datacenter is the part of the availability zone name preceding 
> the last "\-" when the zone ends in \-1, and includes the number if not \-1. 
> Rack is the portion of the availability zone name following the last "\-".
> ** us-west-1a => dc: us-west, rack: 1a
> ** us-west-2b => dc: us-west-2, rack: 2b; 
> * *standard*: datacenter is the part of the availability zone name preceding 
> the zone letter. Rack is the entire availability zone name.
> ** us-west-1a => dc: us-west-1, rack: us-west-1a
> ** us-west-2b => dc: us-west-2, rack: us-west-2b; 
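As a rough sketch (hypothetical method names, not the actual {{Ec2Snitch}} code), the two schemes described above amount to simple string manipulation on the availability zone name:

```java
import java.util.Arrays;

public class Ec2NamingDemo
{
    // legacy: split at the last '-'; drop a trailing "1" zone number from
    // the dc name, keep other numbers. Rack is the part after the last '-'.
    static String[] legacy(String az)
    {
        int i = az.lastIndexOf('-');
        String prefix = az.substring(0, i);                       // "us-west"
        String suffix = az.substring(i + 1);                      // "1a", "2b"
        String number = suffix.substring(0, suffix.length() - 1); // "1", "2"
        String dc = number.equals("1") ? prefix : prefix + "-" + number;
        return new String[] { dc, suffix };
    }

    // standard: dc is everything before the zone letter; rack is the full AZ name.
    static String[] standard(String az)
    {
        return new String[] { az.substring(0, az.length() - 1), az };
    }

    public static void main(String[] args)
    {
        System.out.println(Arrays.toString(legacy("us-west-1a")));   // [us-west, 1a]
        System.out.println(Arrays.toString(legacy("us-west-2b")));   // [us-west-2, 2b]
        System.out.println(Arrays.toString(standard("us-west-1a"))); // [us-west-1, us-west-1a]
    }
}
```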






[jira] [Commented] (CASSANDRA-14352) Clean up parsing speculative retry params from string

2018-04-10 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432897#comment-16432897
 ] 

Blake Eggleston commented on CASSANDRA-14352:
-

+1

> Clean up parsing speculative retry params from string
> -
>
> Key: CASSANDRA-14352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14352
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.x
>
>
> Follow-up to CASSANDRA-14293, to put parsing logic ({{fromString()}}) next to 
> formatting logic ({{toString()}}).






[jira] [Updated] (CASSANDRA-14352) Clean up parsing speculative retry params from string

2018-04-10 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14352:

Status: Ready to Commit  (was: Patch Available)

> Clean up parsing speculative retry params from string
> -
>
> Key: CASSANDRA-14352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14352
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.x
>
>
> Follow-up to CASSANDRA-14293, to put parsing logic ({{fromString()}}) next to 
> formatting logic ({{toString()}}).






[jira] [Commented] (CASSANDRA-14372) data_file_directories config - update documentation in cassandra.yaml

2018-04-10 Thread Venkata Harikrishna Nukala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432917#comment-16432917
 ] 

Venkata Harikrishna Nukala commented on CASSANDRA-14372:


Attached patch with the changes. Please review it.

> data_file_directories config - update documentation in cassandra.yaml
> -
>
> Key: CASSANDRA-14372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Venkata Harikrishna Nukala
>Assignee: Venkata Harikrishna Nukala
>Priority: Minor
> Attachments: 14372-trunk.txt
>
>
> If "data_file_directories" configuration is enabled with multiple 
> directories, data is partitioned by token range so that data gets distributed 
> evenly. But the current documentation says that "Cassandra will spread data 
> evenly across them, subject to the granularity of the configured compaction 
> strategy". Need to update this comment to reflect the correct behavior.






[jira] [Updated] (CASSANDRA-14352) Clean up parsing speculative retry params from string

2018-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14352:
--
   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Ready to Commit)

> Clean up parsing speculative retry params from string
> -
>
> Key: CASSANDRA-14352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14352
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.0
>
>
> Follow-up to CASSANDRA-14293, to put parsing logic ({{fromString()}}) next to 
> formatting logic ({{toString()}}).






[jira] [Comment Edited] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432814#comment-16432814
 ] 

Ariel Weisberg edited comment on CASSANDRA-13853 at 4/10/18 8:47 PM:
-

That looks good. Upload the patch and I will try out the dtest. 3 dtests is a 
bit much to add for this. They are very very slow and I don't want to add that 
many if I can avoid it. I think it should also go into the existing 
nodetool_test.py? It's not that big yet so I don't think we need to break up 
nodetool tests into multiple files.


Maybe just add the 3 datacenter case?


was (Author: aweisberg):
That looks good. Upload the patch and I will try out the dtest. 3 dtests is a 
bit much to add for this. They are very very slow and I don't want to add that 
many if I can avoid it. I think it should also go into the existing 
nodetool_test.py? It's no that big yet so I don't think we need to break up 
nodetool tests into multiple files.


Maybe just add the 3 datacenter case?

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432950#comment-16432950
 ] 

Paulo Motta commented on CASSANDRA-14374:
-

Attached 0004-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch with 
typo fix, sorry for the spam.. (:

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0003-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0004-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which uses 
> a comma instead of a dot as the decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed the {{en_US}} locale, was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0004-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0003-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch, 
> 0004-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which uses 
> a comma instead of a dot as the decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed the {{en_US}} locale, was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Updated] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Preetika Tyagi updated CASSANDRA-13853:
---
Attachment: (was: nodetool_describecluster_test.py)

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v6.patch, jira_13853_dtest_v2.patch
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Updated] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Preetika Tyagi updated CASSANDRA-13853:
---
Attachment: (was: cassandra-13853-v5.patch)

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v6.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Updated] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Preetika Tyagi updated CASSANDRA-13853:
---
Attachment: cassandra-13853-v6.patch

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v6.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






cassandra-dtest git commit: Fix materialized_views_test.py (slightly)

2018-04-10 Thread aleksey
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 1df74a6af -> 0e6a1e6ed


Fix materialized_views_test.py (slightly)


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/0e6a1e6e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/0e6a1e6e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/0e6a1e6e

Branch: refs/heads/master
Commit: 0e6a1e6ed353dc07f4ece01ee409b962c27d7a3e
Parents: 1df74a6
Author: Aleksey Yeschenko 
Authored: Tue Apr 10 22:55:53 2018 +0100
Committer: Aleksey Yeschenko 
Committed: Tue Apr 10 22:55:53 2018 +0100

--
 materialized_views_test.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/0e6a1e6e/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index eaae8dd..f843dc8 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -828,7 +828,7 @@ class TestMaterializedViews(Tester):
 """
 session = self.prepare(user_table=True)
 
-session.execute(("CREATE MATERIALIZED VIEW users_by_state2 AS SELECT 
username FROM users "
+session.execute(("CREATE MATERIALIZED VIEW users_by_state2 AS SELECT 
state, username FROM users "
  "WHERE STATE IS NOT NULL AND USERNAME IS NOT NULL 
PRIMARY KEY (state, username)"))
 
 self._insert_data(session)
@@ -2822,7 +2822,7 @@ class TestMaterializedViewsLockcontention(Tester):
 FROM test
 WHERE int1 IS NOT NULL AND date IS NOT NULL AND int2 IS NOT NULL
 PRIMARY KEY (int1, date, int2)
-WITH CLUSTERING ORDER BY (date DESC, int1 DESC)""")
+WITH CLUSTERING ORDER BY (date DESC, int2 DESC)""")
 
 return session
 





[11/15] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2018-04-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ca0e1e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ca0e1e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ca0e1e

Branch: refs/heads/cassandra-3.11
Commit: 73ca0e1e131bdf14177c026a60f19e33c379ffd4
Parents: 41f3b96 b3ac793
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:54:27 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:57:43 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 58 ++--
 2 files changed, 30 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73ca0e1e/CHANGES.txt
--
diff --cc CHANGES.txt
index 7917712,5221b1e..1564fa3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,52 -1,12 +1,53 @@@
 -2.2.13
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 - * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 +3.0.17
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.2.12
 +
 +3.0.16
 + * Fix unit test failures in ViewComplexTest (CASSANDRA-14219)
 + * Add MinGW uname check to start scripts (CASSANDRA-12940)
 + * Protect against overflow of local expiration time (CASSANDRA-14092)
 + * Use the correct digest file and reload sstable metadata in nodetool verify 
(CASSANDRA-14217)
 + * Handle failure when mutating repaired status in Verifier (CASSANDRA-13933)
 + * Close socket on error during connect on OutboundTcpConnection 
(CASSANDRA-9630)
 + * Set encoding for javadoc generation (CASSANDRA-14154)
 + * Fix index target computation for dense composite tables with dropped 
compact storage (CASSANDRA-14104)
 + * Improve commit log chain marker updating (CASSANDRA-14108)
 + * Extra range tombstone bound creates double rows (CASSANDRA-14008)
 + * Fix SStable ordering by max timestamp in SinglePartitionReadCommand 
(CASSANDRA-14010)
 + * Accept role names containing forward-slash (CASSANDRA-14088)
 + * Optimize CRC check chance probability calculations (CASSANDRA-14094)
 + * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 + * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
 + * More frequent commitlog chained markers (CASSANDRA-13987)
 + * Fix serialized size of DataLimits (CASSANDRA-14057)
 + * Add flag to allow dropping oversized read repair mutations 
(CASSANDRA-13975)
 + * Fix SSTableLoader logger message (CASSANDRA-14003)
 + * Fix repair race that caused gossip to block (CASSANDRA-13849)
 + * Tracing interferes with digest requests when using RandomPartitioner 
(CASSANDRA-13964)
 + * Add flag to disable materialized views, and warnings on creation 
(CASSANDRA-13959)
 + * Don't let user drop or generally break tables in system_distributed 
(CASSANDRA-13813)
 + * Provide a JMX call to sync schema with local storage (CASSANDRA-13954)
 + * Mishandling of cells for removed/dropped columns when reading legacy files 
(CASSANDRA-13939)
 + * Deserialise sstable metadata in nodetool verify (CASSANDRA-13922)
 +Merged from 2.2:
   * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
   * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
   * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)


[08/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/cassandra-3.0
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/CHANGES.txt
--
diff --cc CHANGES.txt
index 527975c,aeb3009..5221b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,8 +1,17 @@@
 -2.1.21
 +2.2.13
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 + * Backport circleci yaml (CASSANDRA-14240)
 +Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.1.20
 +2.2.12
 + * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
 + * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
 + * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)
 + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873)
 +Merged from 2.1:
   * Protect against overflow of local expiration time (CASSANDRA-14092)
   * More PEP8 compliance for cqlsh (CASSANDRA-14021)
   * RPM package spec: fix permissions for installed jars and config files 
(CASSANDRA-14181)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index ccfa5e7,fe90cc9..0fc96ed
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@@ -99,54 -77,7 +99,54 @@@ public class CompressedRandomAccessRead
  {
  try
  {
 -decompressChunk(metadata.chunkFor(current));
 +long position = current();
 +assert position < metadata.dataLength;
 +
 +CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 +
 +if (compressed.capacity() < chunk.length)
 +compressed = allocateBuffer(chunk.length, 
metadata.compressor().preferredBufferType());
 +else
 +compressed.clear();
 +compressed.limit(chunk.length);
 +
 +if (channel.read(compressed, chunk.offset) != chunk.length)
 +throw new CorruptBlockException(getPath(), chunk);
 +compressed.flip();
 +buffer.clear();
 +
++if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
++{
++FBUtilities.directCheckSum(checksum, compressed);
++
++if (checksum(chunk) != (int) checksum.getValue())
++throw new CorruptBlockException(getPath(), chunk);
++
++// reset checksum object back to the original (blank) state
++checksum.reset();
++compressed.rewind();
++}
++
 +try
 +{
 +metadata.compressor().uncompress(compressed, buffer);
 +}
 +catch (IOException e)
 +{
 +throw new CorruptBlockException(getPath(), chunk);
 +}
 +finally
 +{
 +buffer.flip();
 +}
 +
- if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
- {
- compressed.rewind();
- FBUtilities.directCheckSum(checksum, compressed);
- 
- if (checksum(chunk) != (int) checksum.getValue())
- throw new CorruptBlockException(getPath(), chunk);
- 
- // reset checksum object back to the original (blank) state
- checksum.reset();
- }
- 
 +// buffer offset is always aligned
 +bufferOffset = position & ~(buffer.capacity() - 1);
 +buffer.position((int) (position - 
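
The hunk above moves the checksum test ahead of `uncompress`, so corrupt bytes are rejected before they ever reach the decompressor. A minimal standalone sketch of that ordering, using `java.util.zip` in place of Cassandra's own reader classes (all names here are illustrative, not from the patch):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.CRC32;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ChecksumFirst
{
    // Verify the checksum of the *compressed* chunk before decompressing,
    // so a corrupt chunk raises a clean exception instead of being fed to
    // the decompressor (which may read past buffer bounds on bad input).
    static byte[] readChunk(byte[] compressed, long expectedCrc, int rawLength) throws Exception
    {
        CRC32 crc = new CRC32();
        crc.update(compressed, 0, compressed.length);
        if (crc.getValue() != expectedCrc)
            throw new IOException("corrupt chunk: checksum mismatch");

        // Only now is it safe to hand the bytes to the decompressor.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[rawLength];
        int n = inflater.inflate(out);
        inflater.end();
        if (n != rawLength)
            throw new IOException("corrupt chunk: short decompress");
        return out;
    }

    public static void main(String[] args) throws Exception
    {
        byte[] raw = "hello hello hello".getBytes("UTF-8");

        // Compress a sample chunk and record its checksum, as a writer would.
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        byte[] buf = new byte[64];
        int len = deflater.deflate(buf);
        deflater.end();
        byte[] compressed = Arrays.copyOf(buf, len);

        CRC32 crc = new CRC32();
        crc.update(compressed, 0, compressed.length);

        byte[] result = readChunk(compressed, crc.getValue(), raw.length);
        System.out.println(new String(result, "UTF-8"));
    }
}
```

The key point mirrored from the patch is purely the ordering: checksum first, decompress second.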

[12/15] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2018-04-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ca0e1e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ca0e1e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ca0e1e

Branch: refs/heads/cassandra-3.0
Commit: 73ca0e1e131bdf14177c026a60f19e33c379ffd4
Parents: 41f3b96 b3ac793
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:54:27 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:57:43 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 58 ++--
 2 files changed, 30 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73ca0e1e/CHANGES.txt
--
diff --cc CHANGES.txt
index 7917712,5221b1e..1564fa3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,52 -1,12 +1,53 @@@
 -2.2.13
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 - * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 +3.0.17
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.2.12
 +
 +3.0.16
 + * Fix unit test failures in ViewComplexTest (CASSANDRA-14219)
 + * Add MinGW uname check to start scripts (CASSANDRA-12940)
 + * Protect against overflow of local expiration time (CASSANDRA-14092)
 + * Use the correct digest file and reload sstable metadata in nodetool verify 
(CASSANDRA-14217)
 + * Handle failure when mutating repaired status in Verifier (CASSANDRA-13933)
 + * Close socket on error during connect on OutboundTcpConnection 
(CASSANDRA-9630)
 + * Set encoding for javadoc generation (CASSANDRA-14154)
 + * Fix index target computation for dense composite tables with dropped 
compact storage (CASSANDRA-14104)
 + * Improve commit log chain marker updating (CASSANDRA-14108)
 + * Extra range tombstone bound creates double rows (CASSANDRA-14008)
 + * Fix SStable ordering by max timestamp in SinglePartitionReadCommand 
(CASSANDRA-14010)
 + * Accept role names containing forward-slash (CASSANDRA-14088)
 + * Optimize CRC check chance probability calculations (CASSANDRA-14094)
 + * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 + * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
 + * More frequent commitlog chained markers (CASSANDRA-13987)
 + * Fix serialized size of DataLimits (CASSANDRA-14057)
 + * Add flag to allow dropping oversized read repair mutations 
(CASSANDRA-13975)
 + * Fix SSTableLoader logger message (CASSANDRA-14003)
 + * Fix repair race that caused gossip to block (CASSANDRA-13849)
 + * Tracing interferes with digest requests when using RandomPartitioner 
(CASSANDRA-13964)
 + * Add flag to disable materialized views, and warnings on creation 
(CASSANDRA-13959)
 + * Don't let user drop or generally break tables in system_distributed 
(CASSANDRA-13813)
 + * Provide a JMX call to sync schema with local storage (CASSANDRA-13954)
 + * Mishandling of cells for removed/dropped columns when reading legacy files 
(CASSANDRA-13939)
 + * Deserialise sstable metadata in nodetool verify (CASSANDRA-13922)
 +Merged from 2.2:
   * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
   * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
   * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)


[07/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/cassandra-2.2
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/CHANGES.txt
--
diff --cc CHANGES.txt
index 527975c,aeb3009..5221b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,8 +1,17 @@@
 -2.1.21
 +2.2.13
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 + * Backport circleci yaml (CASSANDRA-14240)
 +Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.1.20
 +2.2.12
 + * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
 + * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
 + * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)
 + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873)
 +Merged from 2.1:
   * Protect against overflow of local expiration time (CASSANDRA-14092)
   * More PEP8 compliance for cqlsh (CASSANDRA-14021)
   * RPM package spec: fix permissions for installed jars and config files 
(CASSANDRA-14181)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index ccfa5e7,fe90cc9..0fc96ed
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@@ -99,54 -77,7 +99,54 @@@ public class CompressedRandomAccessRead
  {
  try
  {
 -decompressChunk(metadata.chunkFor(current));
 +long position = current();
 +assert position < metadata.dataLength;
 +
 +CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 +
 +if (compressed.capacity() < chunk.length)
 +compressed = allocateBuffer(chunk.length, 
metadata.compressor().preferredBufferType());
 +else
 +compressed.clear();
 +compressed.limit(chunk.length);
 +
 +if (channel.read(compressed, chunk.offset) != chunk.length)
 +throw new CorruptBlockException(getPath(), chunk);
 +compressed.flip();
 +buffer.clear();
 +
++if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
++{
++FBUtilities.directCheckSum(checksum, compressed);
++
++if (checksum(chunk) != (int) checksum.getValue())
++throw new CorruptBlockException(getPath(), chunk);
++
++// reset checksum object back to the original (blank) state
++checksum.reset();
++compressed.rewind();
++}
++
 +try
 +{
 +metadata.compressor().uncompress(compressed, buffer);
 +}
 +catch (IOException e)
 +{
 +throw new CorruptBlockException(getPath(), chunk);
 +}
 +finally
 +{
 +buffer.flip();
 +}
 +
- if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
- {
- compressed.rewind();
- FBUtilities.directCheckSum(checksum, compressed);
- 
- if (checksum(chunk) != (int) checksum.getValue())
- throw new CorruptBlockException(getPath(), chunk);
- 
- // reset checksum object back to the original (blank) state
- checksum.reset();
- }
- 
 +// buffer offset is always aligned
 +bufferOffset = position & ~(buffer.capacity() - 1);
 +buffer.position((int) (position - 

[10/15] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2018-04-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ca0e1e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ca0e1e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ca0e1e

Branch: refs/heads/trunk
Commit: 73ca0e1e131bdf14177c026a60f19e33c379ffd4
Parents: 41f3b96 b3ac793
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:54:27 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:57:43 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 58 ++--
 2 files changed, 30 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73ca0e1e/CHANGES.txt
--
diff --cc CHANGES.txt
index 7917712,5221b1e..1564fa3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,52 -1,12 +1,53 @@@
 -2.2.13
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 - * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 +3.0.17
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.2.12
 +
 +3.0.16
 + * Fix unit test failures in ViewComplexTest (CASSANDRA-14219)
 + * Add MinGW uname check to start scripts (CASSANDRA-12940)
 + * Protect against overflow of local expiration time (CASSANDRA-14092)
 + * Use the correct digest file and reload sstable metadata in nodetool verify 
(CASSANDRA-14217)
 + * Handle failure when mutating repaired status in Verifier (CASSANDRA-13933)
 + * Close socket on error during connect on OutboundTcpConnection 
(CASSANDRA-9630)
 + * Set encoding for javadoc generation (CASSANDRA-14154)
 + * Fix index target computation for dense composite tables with dropped 
compact storage (CASSANDRA-14104)
 + * Improve commit log chain marker updating (CASSANDRA-14108)
 + * Extra range tombstone bound creates double rows (CASSANDRA-14008)
 + * Fix SStable ordering by max timestamp in SinglePartitionReadCommand 
(CASSANDRA-14010)
 + * Accept role names containing forward-slash (CASSANDRA-14088)
 + * Optimize CRC check chance probability calculations (CASSANDRA-14094)
 + * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 + * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
 + * More frequent commitlog chained markers (CASSANDRA-13987)
 + * Fix serialized size of DataLimits (CASSANDRA-14057)
 + * Add flag to allow dropping oversized read repair mutations 
(CASSANDRA-13975)
 + * Fix SSTableLoader logger message (CASSANDRA-14003)
 + * Fix repair race that caused gossip to block (CASSANDRA-13849)
 + * Tracing interferes with digest requests when using RandomPartitioner 
(CASSANDRA-13964)
 + * Add flag to disable materialized views, and warnings on creation 
(CASSANDRA-13959)
 + * Don't let user drop or generally break tables in system_distributed 
(CASSANDRA-13813)
 + * Provide a JMX call to sync schema with local storage (CASSANDRA-13954)
 + * Mishandling of cells for removed/dropped columns when reading legacy files 
(CASSANDRA-13939)
 + * Deserialise sstable metadata in nodetool verify (CASSANDRA-13922)
 +Merged from 2.2:
   * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
   * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
   * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)


[13/15] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2018-04-10 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1020d62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1020d62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1020d62

Branch: refs/heads/cassandra-3.11
Commit: c1020d62ed05f7fa5735af6f09915cdc6850dbeb
Parents: b3e9908 73ca0e1
Author: Benjamin Lerer 
Authored: Tue Apr 10 10:02:36 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 10:03:32 2018 +0200

--
 CHANGES.txt |  1 +
 .../io/util/CompressedChunkReader.java  | 65 +++-
 2 files changed, 38 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 0919c29,000..177afb0
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -1,229 -1,0 +1,238 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.io.util;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.concurrent.ThreadLocalRandom;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.primitives.Ints;
 +
 +import org.apache.cassandra.io.compress.BufferType;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.compress.CorruptBlockException;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +
 +public abstract class CompressedChunkReader extends AbstractReaderFileProxy 
implements ChunkReader
 +{
 +final CompressionMetadata metadata;
 +
 +protected CompressedChunkReader(ChannelProxy channel, CompressionMetadata 
metadata)
 +{
 +super(channel, metadata.dataLength);
 +this.metadata = metadata;
 +assert Integer.bitCount(metadata.chunkLength()) == 1; //must be a 
power of two
 +}
 +
 +@VisibleForTesting
 +public double getCrcCheckChance()
 +{
 +return metadata.parameters.getCrcCheckChance();
 +}
 +
++protected final boolean shouldCheckCrc()
++{
++return getCrcCheckChance() >= 1d || getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble();
++}
++
 +@Override
 +public String toString()
 +{
 +return String.format("CompressedChunkReader.%s(%s - %s, chunk length 
%d, data length %d)",
 + getClass().getSimpleName(),
 + channel.filePath(),
 + metadata.compressor().getClass().getSimpleName(),
 + metadata.chunkLength(),
 + metadata.dataLength);
 +}
 +
 +@Override
 +public int chunkSize()
 +{
 +return metadata.chunkLength();
 +}
 +
 +@Override
 +public BufferType preferredBufferType()
 +{
 +return metadata.compressor().preferredBufferType();
 +}
 +
 +@Override
 +public Rebufferer instantiateRebufferer()
 +{
 +return new BufferManagingRebufferer.Aligned(this);
 +}
 +
 +public static class Standard extends CompressedChunkReader
 +{
 +// we read the raw compressed bytes into this buffer, then uncompress them into the provided one.
 +private final ThreadLocal<ByteBuffer> compressedHolder;
 +
 +public Standard(ChannelProxy channel, CompressionMetadata metadata)
 +{
 +super(channel, metadata);
 +compressedHolder = ThreadLocal.withInitial(this::allocateBuffer);
 

[02/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-2.2
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + 
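
The 2.1 patch factors the verification into `checkChecksumIfNeeded`, which runs over either the compressed bytes (newer sstables, `hasPostCompressionAdlerChecksums`) or the decompressed bytes (older ones). A sketch of that shape using `java.util.zip.Adler32`; the class and method names here are illustrative, not Cassandra's:

```java
import java.util.zip.Adler32;
import java.util.zip.Checksum;

public class ChunkVerify
{
    // Compute the checksum over the given bytes and compare against the
    // stored value; reset the Checksum object back to its blank state so
    // it can be reused for the next chunk, as the patch does.
    static boolean matches(Checksum checksum, byte[] bytes, int length, long expected)
    {
        checksum.update(bytes, 0, length);
        boolean ok = checksum.getValue() == expected;
        checksum.reset();
        return ok;
    }

    public static void main(String[] args)
    {
        byte[] data = { 1, 2, 3, 4 };

        Adler32 adler = new Adler32();
        adler.update(data, 0, data.length);
        long stored = adler.getValue();
        adler.reset();

        System.out.println(matches(adler, data, data.length, stored)); // true
        data[0] = 9; // simulate on-disk corruption
        System.out.println(matches(adler, data, data.length, stored)); // false
    }
}
```

Because both call sites share this helper, the only behavioral change in the patch is where the compressed-data call site sits: before `uncompress` rather than after it.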

[01/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 19d26bcb8 -> 34a1d5da5
  refs/heads/cassandra-2.2 2e5e11d66 -> b3ac7937e
  refs/heads/cassandra-3.0 41f3b96f8 -> 73ca0e1e1
  refs/heads/cassandra-3.11 b3e99085a -> c1020d62e
  refs/heads/trunk b65b28a9e -> 0b16546f6


Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-2.1
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new 

[05/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-3.11
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + 

[04/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-3.0
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + 
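
The commit above reorders the read path so the checksum over the compressed
bytes is verified before the decompressor ever touches them. A minimal sketch
of that ordering, using the JDK's Deflater/Inflater and Adler32 as stand-ins
for Cassandra's compressor and checksum (the class and method names are
illustrative, not the CompressedRandomAccessReader API):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.Adler32;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class VerifyBeforeDecompress {
    // Verify the Adler32 of the compressed bytes BEFORE handing them to the
    // decompressor, mirroring the ordering this patch introduces.
    static byte[] readChunk(byte[] compressed, long storedChecksum) throws Exception {
        Adler32 checksum = new Adler32();
        checksum.update(compressed, 0, compressed.length);
        if (checksum.getValue() != storedChecksum)
            throw new IOException("corrupt chunk: checksum mismatch");
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[1024];
        int n = inflater.inflate(out);
        inflater.end();
        return Arrays.copyOf(out, n);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "some chunk payload".getBytes("UTF-8");
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[1024];
        int len = deflater.deflate(buf);
        deflater.end();
        byte[] compressed = Arrays.copyOf(buf, len);

        Adler32 adler = new Adler32();
        adler.update(compressed, 0, compressed.length);
        long stored = adler.getValue();

        // An intact chunk round-trips.
        assert Arrays.equals(readChunk(compressed, stored), data);

        // A corrupted chunk is rejected by the checksum and never reaches inflate().
        byte[] corrupted = compressed.clone();
        corrupted[0] ^= 0xFF;
        boolean caught = false;
        try { readChunk(corrupted, stored); } catch (IOException e) { caught = true; }
        assert caught;
        System.out.println("ok");
    }
}
```

Run with assertions enabled (java -ea): the corrupted chunk fails the checksum
before decompression is attempted, which is the window the ticket describes
for LZ4_decompress_fast on corrupt input.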

[14/15] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2018-04-10 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1020d62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1020d62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1020d62

Branch: refs/heads/trunk
Commit: c1020d62ed05f7fa5735af6f09915cdc6850dbeb
Parents: b3e9908 73ca0e1
Author: Benjamin Lerer 
Authored: Tue Apr 10 10:02:36 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 10:03:32 2018 +0200

--
 CHANGES.txt |  1 +
 .../io/util/CompressedChunkReader.java  | 65 +++-
 2 files changed, 38 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 0919c29,000..177afb0
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -1,229 -1,0 +1,238 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.io.util;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.concurrent.ThreadLocalRandom;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.primitives.Ints;
 +
 +import org.apache.cassandra.io.compress.BufferType;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.compress.CorruptBlockException;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +
 +public abstract class CompressedChunkReader extends AbstractReaderFileProxy 
implements ChunkReader
 +{
 +final CompressionMetadata metadata;
 +
 +protected CompressedChunkReader(ChannelProxy channel, CompressionMetadata 
metadata)
 +{
 +super(channel, metadata.dataLength);
 +this.metadata = metadata;
 +assert Integer.bitCount(metadata.chunkLength()) == 1; //must be a 
power of two
 +}
 +
 +@VisibleForTesting
 +public double getCrcCheckChance()
 +{
 +return metadata.parameters.getCrcCheckChance();
 +}
 +
++protected final boolean shouldCheckCrc()
++{
++return getCrcCheckChance() >= 1d || getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble();
++}
++
 +@Override
 +public String toString()
 +{
 +return String.format("CompressedChunkReader.%s(%s - %s, chunk length 
%d, data length %d)",
 + getClass().getSimpleName(),
 + channel.filePath(),
 + metadata.compressor().getClass().getSimpleName(),
 + metadata.chunkLength(),
 + metadata.dataLength);
 +}
 +
 +@Override
 +public int chunkSize()
 +{
 +return metadata.chunkLength();
 +}
 +
 +@Override
 +public BufferType preferredBufferType()
 +{
 +return metadata.compressor().preferredBufferType();
 +}
 +
 +@Override
 +public Rebufferer instantiateRebufferer()
 +{
 +return new BufferManagingRebufferer.Aligned(this);
 +}
 +
 +public static class Standard extends CompressedChunkReader
 +{
  +// we read the raw compressed bytes into this buffer, then uncompress 
them into the provided one.
  +private final ThreadLocal<ByteBuffer> compressedHolder;
 +
 +public Standard(ChannelProxy channel, CompressionMetadata metadata)
 +{
 +super(channel, metadata);
 +compressedHolder = ThreadLocal.withInitial(this::allocateBuffer);
 +
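
The shouldCheckCrc() helper factored out in the 3.11 reader samples reads
probabilistically against the table's crc_check_chance. A small self-contained
sketch of that sampling logic (the crcCheckChance parameter stands in for
metadata.parameters.getCrcCheckChance()):

```java
import java.util.concurrent.ThreadLocalRandom;

public class CrcSampling {
    // Mirrors the factored-out helper: a chance of exactly 1.0 short-circuits,
    // otherwise a uniform draw in [0, 1) decides whether this read is verified.
    static boolean shouldCheckCrc(double crcCheckChance) {
        return crcCheckChance >= 1d || crcCheckChance > ThreadLocalRandom.current().nextDouble();
    }

    public static void main(String[] args) {
        // chance 1.0 -> always verify; chance 0.0 -> never verify.
        for (int i = 0; i < 1000; i++) {
            assert shouldCheckCrc(1.0);
            assert !shouldCheckCrc(0.0);
        }
        // An intermediate chance verifies roughly that fraction of reads.
        int hits = 0;
        for (int i = 0; i < 100_000; i++)
            if (shouldCheckCrc(0.5)) hits++;
        assert hits > 40_000 && hits < 60_000;
        System.out.println("ok");
    }
}
```

Since ThreadLocalRandom.current().nextDouble() is always strictly below 1.0,
the explicit >= 1d test is technically redundant, but it makes the
always-check configuration unconditional and self-documenting.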

[15/15] cassandra git commit: Merge branch cassandra-3.11 into trunk

2018-04-10 Thread blerer
Merge branch cassandra-3.11 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b16546f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b16546f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b16546f

Branch: refs/heads/trunk
Commit: 0b16546f6500f7c33db2f94957d6b5a8e0c108d1
Parents: b65b28a c1020d6
Author: Benjamin Lerer 
Authored: Tue Apr 10 10:09:05 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 10:10:47 2018 +0200

--
 CHANGES.txt |  2 +
 .../io/util/CompressedChunkReader.java  | 83 
 2 files changed, 52 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b16546f/CHANGES.txt
--
diff --cc CHANGES.txt
index d191810,c4f05d5..e68518d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -236,6 -21,11 +236,8 @@@ Merged from 3.0
   * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 -Merged from 2.2:
 - * Backport circleci yaml (CASSANDRA-14240)
 -Merged from 2.1:
++ Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
 - * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
  
  3.11.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b16546f/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 5ae083b,177afb0..daec6c4
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -96,8 -94,7 +96,12 @@@ public abstract class CompressedChunkRe
  
  public ByteBuffer allocateBuffer()
  {
- return allocateBuffer(Math.min(maxCompressedLength,
-
metadata.compressor().initialCompressedBufferLength(metadata.chunkLength(;
 -return 
allocateBuffer(metadata.compressor().initialCompressedBufferLength(metadata.chunkLength()));
++int compressedLength = Math.min(maxCompressedLength,
++
metadata.compressor().initialCompressedBufferLength(metadata.chunkLength()));
++
++int checksumLength = Integer.BYTES;
++
++return allocateBuffer(compressedLength + checksumLength);
  }
  
  public ByteBuffer allocateBuffer(int size)
@@@ -115,35 -112,54 +119,63 @@@
  assert position <= fileLength;
  
  CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 -ByteBuffer compressed = compressedHolder.get();
 -
+ boolean shouldCheckCrc = shouldCheckCrc();
++int length = shouldCheckCrc ? chunk.length + Integer.BYTES // 
compressed length + checksum length
++: chunk.length;
+ 
 -int length = shouldCheckCrc ? chunk.length + Integer.BYTES : 
chunk.length;
 -
 -if (compressed.capacity() < length)
 +if (chunk.length < maxCompressedLength)
  {
 -compressed = allocateBuffer(length);
 -compressedHolder.set(compressed);
 -}
 -else
 -{
 -compressed.clear();
 -}
 +ByteBuffer compressed = compressedHolder.get();
- assert compressed.capacity() >= chunk.length;
- compressed.clear().limit(chunk.length);
- if (channel.read(compressed, chunk.offset) != 
chunk.length)
+ 
 -compressed.limit(length);
 -if (channel.read(compressed, chunk.offset) != length)
 -throw new CorruptBlockException(channel.filePath(), 
chunk);
 -
 -compressed.flip();
 -uncompressed.clear();
 -
 -compressed.position(0).limit(chunk.length);
++assert compressed.capacity() >= length;
++compressed.clear().limit(length);
++if (channel.read(compressed, chunk.offset) != length)
 +throw new CorruptBlockException(channel.filePath(), 
chunk);
  
 -if (shouldCheckCrc)
 +compressed.flip();
++compressed.limit(chunk.length);
 +   
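
The trunk merge above grows the read buffer to compressedLength + Integer.BYTES
because each on-disk chunk is followed by a 4-byte checksum that the reader
pulls in with the same read when verification is enabled. A hedged sketch of
that layout; CRC32 and the writeChunk/readChunk helper names are illustrative,
not the actual CompressedChunkReader code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.zip.CRC32;

public class ChecksumTrailer {
    // On "disk", a chunk is stored as [compressed bytes][4-byte checksum],
    // so the buffer must hold chunk.length + Integer.BYTES when verifying.
    static ByteBuffer writeChunk(byte[] compressed) {
        CRC32 crc = new CRC32();
        crc.update(compressed, 0, compressed.length);
        ByteBuffer onDisk = ByteBuffer.allocate(compressed.length + Integer.BYTES);
        onDisk.put(compressed);
        onDisk.putInt((int) crc.getValue());
        onDisk.flip();
        return onDisk;
    }

    static byte[] readChunk(ByteBuffer onDisk, int chunkLength) throws IOException {
        byte[] compressed = new byte[chunkLength];
        onDisk.get(compressed);
        int stored = onDisk.getInt();   // trailing 4-byte checksum
        CRC32 crc = new CRC32();
        crc.update(compressed, 0, chunkLength);
        if ((int) crc.getValue() != stored)
            throw new IOException("corrupt chunk");
        return compressed;              // would now be handed to the decompressor
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = {1, 2, 3, 4, 5};
        ByteBuffer disk = writeChunk(payload);
        assert Arrays.equals(readChunk(disk, payload.length), payload);
        System.out.println("ok");
    }
}
```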

[jira] [Created] (CASSANDRA-14373) Allow using custom script for chronicle queue BinLog archival

2018-04-10 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-14373:
--

 Summary: Allow using custom script for chronicle queue BinLog 
archival
 Key: CASSANDRA-14373
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14373
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stefan Podkowinski
 Fix For: 4.x


It would be nice to allow the user to configure an archival script that will be 
executed in {{BinLog.onReleased(cycle, file)}} for every deleted bin log, just 
as we do in {{CommitLogArchiver}}. The script should be able to copy the 
released file to an external location or do whatever the author has in mind. 
Deleting the log file should be delegated to the script as well.

See CASSANDRA-13983, CASSANDRA-12151 for use cases.
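
The behaviour the ticket describes could be sketched as follows. The
archiveCommand template and the %path substitution are assumptions modeled on
commitlog_archiving.properties, not an existing Cassandra option; the sketch
assumes a Unix shell and, per the ticket, leaves deletion to the script:

```java
import java.io.File;
import java.io.IOException;

public class BinLogArchiver {
    // Substitute the released file's path into the user-configured command,
    // e.g. "cp %path /backup/ && rm -f %path" (hypothetical example).
    static String[] buildCommand(String archiveCommand, File released) {
        return new String[] { "/bin/sh", "-c",
                              archiveCommand.replace("%path", released.getAbsolutePath()) };
    }

    // The shape BinLog.onReleased(cycle, file) could take: run the archival
    // script for the rolled file and surface a non-zero exit as an error.
    static void onReleased(String archiveCommand, File file) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(buildCommand(archiveCommand, file)).inheritIO().start();
        if (p.waitFor() != 0)
            throw new IOException("archival script failed for " + file);
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("binlog", ".cq4");
        // Deletion is delegated to the script, as the ticket proposes.
        onReleased("echo archiving %path && rm -f %path", f);
        assert !f.exists();
        System.out.println("ok");
    }
}
```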

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[06/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/trunk
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/CHANGES.txt
--
diff --cc CHANGES.txt
index 527975c,aeb3009..5221b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,8 +1,17 @@@
 -2.1.21
 +2.2.13
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 + * Backport circleci yaml (CASSANDRA-14240)
 +Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.1.20
 +2.2.12
 + * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
 + * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
 + * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)
 + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873)
 +Merged from 2.1:
   * Protect against overflow of local expiration time (CASSANDRA-14092)
   * More PEP8 compliance for cqlsh (CASSANDRA-14021)
   * RPM package spec: fix permissions for installed jars and config files 
(CASSANDRA-14181)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index ccfa5e7,fe90cc9..0fc96ed
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@@ -99,54 -77,7 +99,54 @@@ public class CompressedRandomAccessRead
  {
  try
  {
 -decompressChunk(metadata.chunkFor(current));
 +long position = current();
 +assert position < metadata.dataLength;
 +
 +CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 +
 +if (compressed.capacity() < chunk.length)
 +compressed = allocateBuffer(chunk.length, 
metadata.compressor().preferredBufferType());
 +else
 +compressed.clear();
 +compressed.limit(chunk.length);
 +
 +if (channel.read(compressed, chunk.offset) != chunk.length)
 +throw new CorruptBlockException(getPath(), chunk);
 +compressed.flip();
 +buffer.clear();
 +
++if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
++{
++FBUtilities.directCheckSum(checksum, compressed);
++
++if (checksum(chunk) != (int) checksum.getValue())
++throw new CorruptBlockException(getPath(), chunk);
++
++// reset checksum object back to the original (blank) state
++checksum.reset();
++compressed.rewind();
++}
++
 +try
 +{
 +metadata.compressor().uncompress(compressed, buffer);
 +}
 +catch (IOException e)
 +{
 +throw new CorruptBlockException(getPath(), chunk);
 +}
 +finally
 +{
 +buffer.flip();
 +}
 +
- if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
- {
- compressed.rewind();
- FBUtilities.directCheckSum(checksum, compressed);
- 
- if (checksum(chunk) != (int) checksum.getValue())
- throw new CorruptBlockException(getPath(), chunk);
- 
- // reset checksum object back to the original (blank) state
- checksum.reset();
- }
- 
 +// buffer offset is always aligned
 +bufferOffset = position & ~(buffer.capacity() - 1);
 +buffer.position((int) (position - 
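
In the 2.1 variant, the relocated block checksums the ByteBuffer via
FBUtilities.directCheckSum and then calls compressed.rewind() before handing
the buffer to uncompress(). The rewind matters because checksumming a buffer
advances its position to the limit; a minimal JDK-only illustration, with
Adler32.update(ByteBuffer) standing in for directCheckSum:

```java
import java.nio.ByteBuffer;
import java.util.zip.Adler32;

public class ChecksumThenRewind {
    public static void main(String[] args) {
        byte[] raw = "compressed chunk bytes".getBytes();
        ByteBuffer compressed = ByteBuffer.wrap(raw);

        // Checksumming consumes the buffer: position ends up at the limit...
        Adler32 checksum = new Adler32();
        checksum.update(compressed);          // Java 8+: update(ByteBuffer)
        assert compressed.remaining() == 0;

        // ...so it must be rewound before being passed to uncompress(),
        // which is why the moved block ends with compressed.rewind().
        compressed.rewind();
        assert compressed.remaining() == raw.length;
        System.out.println("ok");
    }
}
```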


[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-04-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431899#comment-16431899
 ] 

Stefan Podkowinski commented on CASSANDRA-12151:


The native transport integration for diagnostic events is basically just 
pushing events one by one over the control connection to the subscribed client. 
It's not really designed as a generic, fully scalable server-side data push 
solution. There are also no delivery guarantees, as it's really just intended 
for debugging and analysis and not for implementing any control instances on 
top of it. The use case I have in mind is to have 1-2 clients subscribing to 
some kind of event, either ad-hoc or constantly running in the background. But 
I don't really see any use case for having a large fanout of e.g. compaction 
events. For that, the solution proposed in CASSANDRA-13459 should be 
sufficient. But we should probably discuss further details there, as it's 
slightly off-topic for this ticket.
{quote}I think specifying a shell script is probably OK although if someone 
specifies the script we should run it immediately once Chronicle rolls the 
file. Also if the script is specified we probably shouldn't delete artifacts.
{quote}
I've created a new ticket (CASSANDRA-14373) for this, as it's not strictly an 
auditing feature.

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> we would like a way to enable cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> it should also be able to log connection attempt and changes to the 
> user/roles.
> I was thinking of making a new keyspace and insert an entry for every 
> activity that occurs.
> Then It would be possible to query for specific activity or a query targeting 
> a specific keyspace and column family.






[jira] [Commented] (CASSANDRA-13460) Diag. Events: Add local persistency

2018-04-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431911#comment-16431911
 ] 

Stefan Podkowinski commented on CASSANDRA-13460:


The proposed solution should be reconsidered using the chronicle queue based 
BinLog, instead of writing to a local keyspace. This should be a better 
solution for storing temporary, time based and sequentially retrieved events. 
We also get better portability by being able to simply copy already rolled over 
log files and read them on external systems. E.g. you could ask a user to 
enable diag event logging for compactions and have him send you an archive with 
all bin logs the next day, just by working with files.

> Diag. Events: Add local persistency
> ---
>
> Key: CASSANDRA-13460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13460
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>
> Some generated events will be rather less frequent but very useful for 
> retroactive troubleshooting. E.g. all events related to bootstrapping and 
> gossip would probably be worth saving, as they might provide valuable 
> insights and will consume very few resources in low quantities. Imagine if 
> we could e.g. in case of CASSANDRA-13348 just ask the user to -run a tool 
> like {{./bin/diagdump BootstrapEvent}} on each host, to get us a detailed log 
> of all relevant events-  provide a dump of all events as described in the 
> [documentation|https://github.com/spodkowinski/cassandra/blob/WIP-13460/doc/source/operating/diag_events.rst].
>  
> This could be done by saving events white-listed in cassandra.yaml to a local 
> table. Maybe using a TTL.





