[jira] [Updated] (CASSANDRA-14293) Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE and FIXED Policies

2018-03-06 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-14293:
-
Reviewer: Aleksey Yeschenko

> Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE 
> and FIXED Policies
> -
>
> Key: CASSANDRA-14293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14293
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>Priority: Major
>
> Currently the Speculative Retry Policy takes a single string as a parameter; 
> this can be NONE, ALWAYS, 99PERCENTILE (PERCENTILE), or 50MS (CUSTOM).
> The problem we have is that when a single host goes into a bad state, it drags 
> up the percentiles. This means that if we are set to use p99 alone, we might 
> not speculate when we intended to, because the value at the specified 
> percentile has risen so high.
> As a fix, we need support for something like min(99percentile,50ms): 
> if the table's normal p99 is <50ms, we will still speculate at that value 
> and not drag the happy-path tail latencies up... but if the p99 rises above 
> what we know we should never exceed, we use the fixed bound instead.
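The min()/max() semantics described here can be sketched in a few lines of Python. This is an illustrative model only; the function name, argument shapes, and millisecond units are invented for the example and are not Cassandra's actual API.

```python
# Illustrative model of the proposed MIN/MAX speculative-retry semantics.
# Function name and units are invented for this sketch, not Cassandra's API.
def effective_threshold_ms(kind, percentile_latency_ms, fixed_ms):
    """Combine a measured percentile latency with a fixed bound."""
    if kind == 'MIN':
        return min(percentile_latency_ms, fixed_ms)
    if kind == 'MAX':
        return max(percentile_latency_ms, fixed_ms)
    raise ValueError('unknown kind: %r' % kind)

# Happy path: the table's p99 is 20ms, so MIN(p99, 50ms) speculates at 20ms.
assert effective_threshold_ms('MIN', 20.0, 50.0) == 20.0
# A bad host drags p99 up to 800ms; the 50ms bound caps the threshold.
assert effective_threshold_ms('MIN', 800.0, 50.0) == 50.0
```

Under min(), the fixed value acts as a ceiling on the speculation threshold; under max(), as a floor.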



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14293) Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE and FIXED Policies

2018-03-06 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389070#comment-16389070
 ] 

Michael Kjellman commented on CASSANDRA-14293:
--

Trunk-based branch with a commit for this: 
[https://github.com/mkjellman/cassandra/tree/37818459_trunk]







[jira] [Updated] (CASSANDRA-14293) Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE and FIXED Policies

2018-03-06 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-14293:
-
Status: Patch Available  (was: Open)







[jira] [Commented] (CASSANDRA-14293) Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE and FIXED Policies

2018-03-06 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389067#comment-16389067
 ] 

Michael Kjellman commented on CASSANDRA-14293:
--

Ideally we'd express this a bit more cleanly by making the speculative_retry 
table config option a map, but we're bound by legacy here and have no way to 
migrate and change this given the current state of schema. So ultimately we'll 
still need to do a bunch of string parsing, but we can at least get the 
functionality we need with the following function-esque syntax (examples):

max(99.9p,50ms)

MIN(50MS,90PERCENTILE)
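As a rough illustration of what parsing this function-esque syntax involves, here is a Python sketch; the regex, function name, and uppercase normalization are assumptions made for the example, not the grammar the actual patch uses.

```python
import re

# Hypothetical parser for strings like "max(99.9p,50ms)"; not the patch's code.
PATTERN = re.compile(r'^(MIN|MAX)\((.+),(.+)\)$', re.IGNORECASE)

def parse_speculative_retry(value):
    """Split a MIN()/MAX() policy string into its kind and two sub-policies."""
    m = PATTERN.match(value.strip())
    if not m:
        raise ValueError('not a MIN()/MAX() policy: %r' % value)
    # Normalize case so MIN(...) and min(...) behave identically.
    return m.group(1).upper(), m.group(2).strip().upper(), m.group(3).strip().upper()

assert parse_speculative_retry('max(99.9p,50ms)') == ('MAX', '99.9P', '50MS')
assert parse_speculative_retry('MIN(50MS,90PERCENTILE)') == ('MIN', '50MS', '90PERCENTILE')
```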







[jira] [Created] (CASSANDRA-14293) Speculative Retry Policy Should Support Specifying MIN/MAX of 2 PERCENTILE and FIXED Policies

2018-03-06 Thread Michael Kjellman (JIRA)
Michael Kjellman created CASSANDRA-14293:


 Summary: Speculative Retry Policy Should Support Specifying 
MIN/MAX of 2 PERCENTILE and FIXED Policies
 Key: CASSANDRA-14293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14293
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michael Kjellman
Assignee: Michael Kjellman


Currently the Speculative Retry Policy takes a single string as a parameter; 
this can be NONE, ALWAYS, 99PERCENTILE (PERCENTILE), or 50MS (CUSTOM).

The problem we have is that when a single host goes into a bad state, it drags 
up the percentiles. This means that if we are set to use p99 alone, we might 
not speculate when we intended to, because the value at the specified 
percentile has risen so high.

As a fix, we need support for something like min(99percentile,50ms):

if the table's normal p99 is <50ms, we will still speculate at that value and 
not drag the happy-path tail latencies up... but if the p99 rises above what 
we know we should never exceed, we use the fixed bound instead.






[jira] [Commented] (CASSANDRA-5836) Seed nodes should be able to bootstrap without manual intervention

2018-03-06 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388935#comment-16388935
 ] 

Kurt Greaves commented on CASSANDRA-5836:
-

bq. But the above only holds in the context of the default auto_bootstrap=true 
setting. If we require that it is set to false when deploying new clusters/DCs, 
the problem goes away and we don't need a special case for the very first node.
I don't think this is necessary. My patch above only makes the very first 
node in a cluster a special case; all new seeds, regardless of DC, would not 
have the bootstrap problem. We wouldn't want to be overriding the behaviour of 
{{auto_bootstrap}} for these cases anyway. 

bq. The only case I see when SimpleStrategy can actually work with multiple DCs 
is when you start multi-DC from scratch. The auth keyspace you will want to 
change to use NTS and replicate to all DCs, but you might not care about the 
other two non-local system keyspaces.

SimpleStrategy does not care about DCs; it only cares about token order in the 
ring. Whether or not it makes sense when adding DCs, we currently create three 
keyspaces using SimpleStrategy, and I don't think it's acceptable to say you 
have to change these to NTS prior to adding a new DC. It's perfectly acceptable 
to use SimpleStrategy for {{system_traces}} and {{system_distributed}}; 
{{system_auth}} also works, but it's a bad idea.

bq. But are you referring here to a case where you would add a new DC to a 
cluster with data already in the original DC and still using SimpleStrategy, 
Kurt Greaves? To me that doesn't seem to be practical. 
bq. Any reason why would you want to go this way instead of the proper nodetool 
rebuild?
Yes. Not that the data in those keyspaces is terribly important to me, but it 
might be to some people. Using nodetool rebuild when you've used SimpleStrategy 
will end in data loss, or simply won't work.

bq.  how do you make new seeds bootstrap?
If only the first seed is special this is not a problem. Other seeds can 
bootstrap if they so desire (auto_bootstrap: true).
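The point above, that SimpleStrategy places replicas purely by token order and never looks at DCs, can be sketched as follows; the ring layout, token values, and node names are made up for illustration.

```python
# Minimal sketch of SimpleStrategy's placement rule: walk the ring clockwise
# from the key's token and take the next RF distinct nodes, ignoring DCs.
# The ring below is invented; real tokens come from the partitioner.
def simple_strategy_replicas(ring, key_token, rf):
    """ring: sorted list of (token, node) pairs. Returns rf replica nodes."""
    tokens = [t for t, _ in ring]
    # First position whose token is >= key_token, wrapping around the ring.
    start = next((i for i, t in enumerate(tokens) if t >= key_token), 0)
    replicas = []
    for i in range(len(ring)):
        node = ring[(start + i) % len(ring)][1]
        if node not in replicas:
            replicas.append(node)
        if len(replicas) == rf:
            break
    return replicas

# Two "DCs" interleaved on the ring; replicas are chosen purely by token order,
# so a second DC's nodes absorb replicas whether you wanted that or not.
ring = [(0, 'dc1-a'), (25, 'dc2-a'), (50, 'dc1-b'), (75, 'dc2-b')]
assert simple_strategy_replicas(ring, 30, 2) == ['dc1-b', 'dc2-b']
```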

> Seed nodes should be able to bootstrap without manual intervention
> --
>
> Key: CASSANDRA-5836
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5836
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bill Hathaway
>Priority: Minor
>
> The current logic doesn't allow a seed node to be bootstrapped.  If a user 
> wants to bootstrap a node configured as a seed (for example to replace a seed 
> node via replace_token), they first need to remove the node's own IP from the 
> seed list, and then start the bootstrap process.  This seems like an 
> unnecessary step since a node never uses itself as a seed.
> I think it would be a better experience if the logic was changed to allow a 
> seed node to bootstrap without manual intervention when there are other seed 
> nodes up in a ring.






[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-03-06 Thread Vinay Chella (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388931#comment-16388931
 ] 

Vinay Chella commented on CASSANDRA-12151:
--

Thanks, everyone, for your input. I am summarizing the discussion here so that 
I can take the action items and either incorporate them into this patch or 
create separate JIRAs to track them.

1. Simple and incremental approach
2. Reuse the BinLog/Chronicle work from CASSANDRA-13983 so that we don't 
duplicate effort, and get asynchronous, efficient logging
3. Provide the context needed in the AuditLogger interfaces so that we can 
have various implementations based on logging needs (e.g., more information 
to log)
4. Filter/whitelist users for AuditLog events
5. A pluggable component that we can fit into the client-facing Netty pipeline
6. Code review comments from Dinesh and Jaydeepkumar
7. Have AuditLog log a percentage of queries instead of logging every query

I am currently working on #2, #3, #4, and #6. I will gather more data on #5 
and #7 and create separate JIRAs as needed. To satisfy #1, we are trying to 
get a simple version (simple/basic configs) out first, take feedback, and 
improve on it in later versions/patches.

I am also planning to stress test this patch and publish numbers with and 
without AuditLog, so that users know the cost of this feature.

Short story: everyone wants C* audit logging, but we also know it has a 
non-trivial performance cost (it's not a free lunch). The debate is around 
what level of detail should go in the log and what should be configurable, 
which can be addressed with the incremental approach (#1). Design 
(interface-level) feedback and the redundant BinLog/Chronicle effort are 
being addressed in this patch.

I hope this summary helps. Please let me know if I am missing anything important.
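Item #7 (logging a percentage of queries) amounts to probabilistic sampling of audit events. A minimal sketch, assuming a simple uniform sample rate rather than whatever configuration knob the final patch exposes:

```python
import random

# Illustrative sketch of sampling audit events: log roughly sample_rate of
# queries. The knob name and mechanism are assumptions, not the patch's API.
def should_audit(sample_rate, rng=random.random):
    """Return True for roughly sample_rate of calls (sample_rate in 0.0..1.0)."""
    return rng() < sample_rate

# With a fixed seed, about 10% of 10,000 decisions come back True.
rng = random.Random(42)
hits = sum(should_audit(0.1, rng.random) for _ in range(10_000))
assert 800 < hits < 1200  # ~1000 expected, with slack for randomness
```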

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> We would like a way to enable Cassandra to log database activity on 
> our server.
> It should show the username, remote address, timestamp, action type, 
> keyspace, column family, and the query statement.
> It should also be able to log connection attempts and changes to 
> users/roles.
> I was thinking of making a new keyspace and inserting an entry for every 
> activity that occurs.
> Then it would be possible to query for specific activity, or for queries 
> targeting a specific keyspace and column family.






[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388905#comment-16388905
 ] 

ASF GitHub Bot commented on CASSANDRA-11381:


Github user asfgit closed the pull request at:

https://github.com/apache/cassandra-dtest/pull/19


> Node running with join_ring=false and authentication can not serve requests
> ---
>
> Key: CASSANDRA-11381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
>Priority: Major
> Fix For: 2.2.10, 3.0.14, 3.11.0, 4.0
>
>
> A node started with {{-Dcassandra.join_ring=false}} in a cluster that has 
> authentication configured, e.g. PasswordAuthenticator, won't be able to serve 
> requests. This is because {{Auth.setup()}} never gets called during 
> startup.
> Without {{Auth.setup()}} having been called in {{StorageService}}, clients 
> connecting to the node fail, with the node throwing
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
> at 
> org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception thrown from the 
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]
> {code}
> ResultMessage.Rows rows = authenticateStatement.execute(QueryState.forInternalCalls(),
>                                                         new QueryOptions(consistencyForUser(username),
>                                                                          Lists.newArrayList(ByteBufferUtil.bytes(username))));
> {code}






cassandra-dtest git commit: New test for CASSANDRA-11381: Node running with join_ring=false and authentication can not serve requests

2018-03-06 Thread mck
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 8fa87f63d -> 7f5d9c0f3


New test for CASSANDRA-11381: Node running with join_ring=false and 
authentication can not serve requests

Patch by Mick Semb Wever; Reviewed by Philip Thompson; for CASSANDRA-11381


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/7f5d9c0f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/7f5d9c0f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/7f5d9c0f

Branch: refs/heads/master
Commit: 7f5d9c0f34f782aa8fa041e6408400152d64e533
Parents: 8fa87f6
Author: Mick Semb Wever 
Authored: Wed Mar 7 13:07:21 2018 +1100
Committer: Mick Semb Wever 
Committed: Wed Mar 7 13:07:21 2018 +1100

--
 auth_join_ring_false_test.py | 212 ++
 1 file changed, 212 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/7f5d9c0f/auth_join_ring_false_test.py
--
diff --git a/auth_join_ring_false_test.py b/auth_join_ring_false_test.py
new file mode 100644
index 000..34e2b4b
--- /dev/null
+++ b/auth_join_ring_false_test.py
@@ -0,0 +1,212 @@
+import pytest
+
+from cassandra import AuthenticationFailed, Unauthorized
+from cassandra.cluster import NoHostAvailable
+
+from dtest import Tester
+
+
+class TestAuth(Tester):
+
+    def test_login_existing_node(self):
+        """
+        * Launch a three node cluster
+        * Restart the third node in `join_ring=false` mode
+        * Connect as the default user/password
+        * Verify that default user w/ bad password gives AuthenticationFailed exception
+        * Verify that bad user gives AuthenticationFailed exception
+        """
+        # also tests default user creation (cassandra/cassandra)
+        self.prepare(nodes=3)
+        node1, node2, node3 = self.cluster.nodelist()
+        node3.stop(wait_other_notice=True)
+        node3.start(join_ring=False, wait_other_notice=False, wait_for_binary_proto=True)
+
+        self.patient_exclusive_cql_connection(node=node3, user='cassandra', password='cassandra')
+        try:
+            self.patient_exclusive_cql_connection(node=node3, user='cassandra', password='badpassword')
+        except NoHostAvailable as e:
+            assert isinstance(list(e.errors.values())[0], AuthenticationFailed)
+        try:
+            self.patient_exclusive_cql_connection(node=node3, user='doesntexist', password='doesntmatter')
+        except NoHostAvailable as e:
+            assert isinstance(list(e.errors.values())[0], AuthenticationFailed)
+
+    def test_login_new_node(self):
+        """
+        * Launch a two node cluster
+        * Add a third node in `join_ring=false` mode
+        * Connect as the default user/password
+        * Verify that default user w/ bad password gives AuthenticationFailed exception
+        * Verify that bad user gives AuthenticationFailed exception
+        """
+        # also tests default user creation (cassandra/cassandra)
+        self.prepare(nodes=2)
+
+        node3 = self.cluster.create_node('node3', False,
+                                         ('127.0.0.3', 9160),
+                                         ('127.0.0.3', 7000),
+                                         '7300', '2002', None,
+                                         binary_interface=('127.0.0.3', 9042))
+
+        self.cluster.add(node3, False)
+        node3.start(join_ring=False, wait_other_notice=False, wait_for_binary_proto=True)
+
+        self.patient_exclusive_cql_connection(node=node3, user='cassandra', password='cassandra')
+        try:
+            self.patient_exclusive_cql_connection(node=node3, user='cassandra', password='badpassword')
+        except NoHostAvailable as e:
+            assert isinstance(list(e.errors.values())[0], AuthenticationFailed)
+        try:
+            self.patient_exclusive_cql_connection(node=node3, user='doesntexist', password='doesntmatter')
+        except NoHostAvailable as e:
+            assert isinstance(list(e.errors.values())[0], AuthenticationFailed)
+
+    def test_list_users(self):
+        """
+        * Launch a one node cluster
+        * Connect as the default superuser
+        * Create two new users, and two new superusers.
+        * Verify that LIST USERS shows all five users.
+        * Verify that the correct users are listed as super users.
+        * Add a second node in `join_ring=false` mode
+        * Connect (through the non-ring node) as one of the new users, and check that the LIST USERS behavior is also correct there.
+        """
+        self.prepare()
+
+        session = self.get_session(user='cassandra', password='cassandra')
+

[jira] [Updated] (CASSANDRA-14292) Batch commitlog performance regression in 3.0.16

2018-03-06 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-14292:
-
Attachment: 14292-3.0-unittest.png
14292-3.0-dtest.png

> Batch commitlog performance regression in 3.0.16
> 
>
> Key: CASSANDRA-14292
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14292
> Project: Cassandra
>  Issue Type: Bug
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Major
> Fix For: 3.0.x
>
> Attachments: 14292-3.0-dtest.png, 14292-3.0-unittest.png
>
>
> Prior to CASSANDRA-13987, in batch commitlog mode the commitlog was synced 
> to disk right after a mutation arrived:
>  * the haveWork semaphore is released in BatchCommitLogService.maybeWaitForSync
>  * AbstractCommitLogService then continues and syncs to disk
> CASSANDRA-13987 added a branch so that chained markers are flushed more 
> frequently in periodic mode. To make sure the CL still flushes immediately in 
> batch mode, it added a {{syncRequested}} flag.
>  Unfortunately, in the 3.0 branch this flag is not set to true when a 
> mutation is waiting, so AbstractCommitLogService will not take the CL sync 
> branch until it reaches the sync window (2ms):
> {code:java|title=AbstractCommitLogService.java}
> if (lastSyncedAt + syncIntervalMillis <= pollStarted || shutdown || 
> syncRequested)
> {
> // in this branch, we want to flush the commit log to disk
> syncRequested = false;
> commitLog.sync(shutdown, true);
> lastSyncedAt = pollStarted;
> syncComplete.signalAll();
> }
> else
> {
> // in this branch, just update the commit log sync headers
> commitLog.sync(false, false);
> }
> {code}
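To make the regression concrete, the decision in the quoted Java can be modeled in a few lines of Python; the names loosely mirror the Java, and this is a sketch of the condition only, not the real service loop.

```python
# Sketch of the quoted sync decision; SYNC_INTERVAL_MS mirrors the 2ms window.
SYNC_INTERVAL_MS = 2

def takes_flush_branch(last_synced_at, poll_started, shutdown, sync_requested):
    """True when the full flush-to-disk branch of the quoted code runs."""
    return (last_synced_at + SYNC_INTERVAL_MS <= poll_started
            or shutdown
            or sync_requested)

# The 3.0 bug: a mutation is waiting 1ms into the window, but sync_requested
# was never set, so the batch-mode flush is delayed until the window elapses...
assert not takes_flush_branch(0, 1, False, False)
# ...whereas setting sync_requested (as the fix does) flushes immediately.
assert takes_flush_branch(0, 1, False, True)
```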






[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388839#comment-16388839
 ] 

ASF GitHub Bot commented on CASSANDRA-11381:


Github user michaelsembwever commented on the issue:

https://github.com/apache/cassandra-dtest/pull/19
  
>  I believe pep8 will complain about the lack of a new line at the bottom

I did not know about that! Fixed.

Will merge.








[jira] [Commented] (CASSANDRA-14292) Batch commitlog performance regression in 3.0.16

2018-03-06 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388763#comment-16388763
 ] 

Jason Brown commented on CASSANDRA-14292:
-

[~jasonstack] Yeah, you are correct; I missed this case. This is the problem 
when working on the same patch across four branches; I'm sure I had something 
like this at one point in a 3.0 branch  

I'm +1 on the patch, as well. Thanks for the unit test - I like it enough that 
I'll forward port it to 3.11 and trunk when I commit.







[jira] [Updated] (CASSANDRA-14070) Add new method for returning list of primary/clustering key values

2018-03-06 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14070:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Going to be the third "no" vote here; I'd encourage you to do this within your 
trigger code.

> Add new method for returning list of primary/clustering key values
> --
>
> Key: CASSANDRA-14070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14070
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Himani Arora
>Assignee: Himani Arora
>Priority: Minor
> Fix For: 4.x
>
>
> Add a method to return a list of primary/clustering key values so that it 
> will be easier to process data. Currently, we get a single string 
> concatenated with either a colon (:) or a comma (,), which makes it quite 
> difficult to fetch one single key value.
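Given the suggestion above to handle this within trigger code, a minimal sketch of splitting the concatenated key string follows; the delimiters (colon and comma) are assumed from the description, and the exact format Cassandra emits is not verified here.

```python
import re

# Hypothetical workaround in trigger code: split the concatenated key string
# on the colon/comma delimiters the report describes. The input format is an
# assumption based on the issue description, not a documented contract.
def split_key_values(concatenated):
    """Split 'pk1:pk2,ck1,ck2'-style strings into individual key values."""
    return [part for part in re.split(r'[:,]\s*', concatenated) if part]

assert split_key_values('pk1:pk2,ck1,ck2') == ['pk1', 'pk2', 'ck1', 'ck2']
```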






[jira] [Commented] (CASSANDRA-14070) Add new method for returning list of primary/clustering key values

2018-03-06 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388568#comment-16388568
 ] 

Blake Eggleston commented on CASSANDRA-14070:
-

Agreed that C* probably isn't the best place for this.







[jira] [Assigned] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently

2018-03-06 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-12526:
--

Assignee: Ariel Weisberg

> For LCS, single SSTable up-level is handled inefficiently
> -
>
> Key: CASSANDRA-12526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Ariel Weisberg
>Priority: Major
>  Labels: compaction, lcs, performance
> Fix For: 4.x
>
>
> I'm using the latest trunk (as of August 2016, which probably is going to be 
> 3.10) to run some experiments on LeveledCompactionStrategy and noticed this 
> inefficiency.
> The test data is generated using cassandra-stress default parameters 
> (keyspace1.standard1), so as you can imagine, it consists of a ton of newly 
> inserted partitions that will never merge in compactions, which is probably 
> the worst kind of workload for LCS (however, I'll detail later why this 
> scenario should not be ignored as a corner case; for now, let's just assume 
> we still want to handle this scenario efficiently).
> After the compaction test is done, I scrubbed debug.log for patterns that 
> match the "Compacted" summary so that I can see how long each individual 
> compaction took and how many bytes they processed. The search pattern is like 
> the following:
> {noformat}
> grep 'Compacted.*standard1' debug.log
> {noformat}
> Interestingly, I noticed a lot of the finished compactions are marked as 
> having *only one* SSTable involved. With the workload mentioned above, the 
> "single SSTable" compactions actually consist of the majority of all 
> compactions (as shown below), so its efficiency can affect the overall 
> compaction throughput quite a bit.
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | wc -l
> 243
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | grep ") 1 sstable" | wc -l
> 218
> {noformat}
> By looking at the code, it appears that there's a way to directly edit the 
> level of a particular SSTable like the following:
> {code}
> sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, 
> targetLevel);
> sstable.reloadSSTableMetadata();
> {code}
> To be exact, I summed up the time spent for these single-SSTable compactions 
> (the total data size is 60GB) and found that if each compaction only needs to 
> spend 100ms for only the metadata change (instead of the 10+ seconds they're 
> doing now), it can already achieve 22.75% saving on total compaction time.
> Compared to what we have now (reading the whole single-SSTable from old level 
> and writing out the same single-SSTable at the new level), the only 
> difference I could think of by using this approach is that the new SSTable 
> will have the same file name (sequence number) as the old one's, which could 
> break some assumptions on some other part of the code. However, not having to 
> go through the full read/write IO, and not having to bear the overhead of 
> cleaning up the old file, creating the new file, creating more churns in heap 
> and file buffer, it seems the benefits outweigh the inconvenience. So I'd 
> argue this JIRA belongs to LHF and should be made available in 3.0.x as well.
> As mentioned in the 2nd paragraph, I'm also going to address why this kind of 
> all-new-partition workload should not be ignored as a corner case. Basically, 
> for the main use case of LCS where you need to frequently merge partitions to 
> optimize read and eliminate tombstones and expired data sooner, LCS can be 
> perfectly happy and efficiently perform the partition merge and tombstone 
> elimination for a long time. However, as soon as the node becomes a bit 
> unhealthy for various reasons (could be a bad disk so it's missing a whole 
> bunch of mutations and needs repair, could be the user choosing to ingest way 
> more data than the node usually takes and exceeding its capacity, or, God 
> forbid, some DBA running offline sstablelevelreset), you will have to handle 
> this kind of "all-new-partition with a lot of SSTables in L0" scenario, and 
> once all L0 SSTables finally get up-leveled to L1, you will likely see a lot 
> of such single-SSTable compactions, which is the situation this JIRA is 
> intended to address.
> Actually, when I think more about this, making this kind of single-SSTable 
> up-level more efficient will not only help the all-new-partition scenario, 
> but also help in general any time when there is a big backlog of L0 SSTables 
> due to too many flushes or excessive repair streaming with vnode. In those 
> situations, by default 
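The metadata-only up-level the reporter proposes can be modeled with a toy sketch. The classes below are illustrative stand-ins, not Cassandra's actual SSTable or metadata types:

```java
// Toy model of the two up-level strategies described in the ticket:
// rewriting the whole SSTable at the new level versus mutating only its
// level field (the mutateLevel(...) approach quoted above). Class and
// field names are illustrative, not Cassandra's.
public class UpLevelSketch
{
    static class SSTable
    {
        int level;
        final byte[] data;
        SSTable(int level, byte[] data) { this.level = level; this.data = data; }
    }

    // Current behavior: read and rewrite all the data at the target level.
    static SSTable rewriteAtLevel(SSTable s, int target)
    {
        return new SSTable(target, s.data.clone()); // full read/write I/O cost
    }

    // Proposed behavior: edit only the metadata, leave the data in place.
    static void mutateLevel(SSTable s, int target)
    {
        s.level = target;                           // metadata-only cost
    }

    public static void main(String[] args)
    {
        SSTable s = new SSTable(0, new byte[1024]);
        mutateLevel(s, 1);
        System.out.println(s.level); // 1
    }
}
```

The comparison is the point: rewriteAtLevel pays the full I/O cost of the data, while mutateLevel touches only the level field, mirroring the mutateLevel(...) call quoted in the ticket.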

[jira] [Commented] (CASSANDRA-14087) NPE when CAS encounters empty frozen collection

2018-03-06 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388179#comment-16388179
 ] 

Blake Eggleston commented on CASSANDRA-14087:
-

started another round of tests:
|3.0|3.11|trunk|
|[tests|https://circleci.com/workflow-run/79cca9f9-fb55-4a6e-b53d-862018222bc8]|[tests|https://circleci.com/workflow-run/eac51382-1994-4218-ab7b-ea6e466580f2]|[tests|https://circleci.com/workflow-run/c932587d-a6e8-463e-881b-c28bb76bc81b]|

> NPE when CAS encounters empty frozen collection
> ---
>
> Key: CASSANDRA-14087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14087
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jens Bannmann
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> When a compare-and-set operation specifying an equality criterion with a 
> non-{{null}} value encounters an empty collection ({{null}} cell), the server 
> throws a {{NullPointerException}} and the query fails.
> This does not happen for non-frozen collections.
> There's a self-contained test case at 
> [github|https://github.com/incub8/cassandra-npe-in-cas].
> The stack trace for 3.11.0 is:
> {code}
> ERROR [Native-Transport-Requests-1] 2017-11-27 12:59:26,924 
> QueryMessage.java:129 - Unexpected error during query
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.cql3.ColumnCondition$CollectionBound.appliesTo(ColumnCondition.java:546)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.CQL3CasRequest$ColumnsConditions.appliesTo(CQL3CasRequest.java:324)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.CQL3CasRequest.appliesTo(CQL3CasRequest.java:210)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.service.StorageProxy.cas(StorageProxy.java:265) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeWithCondition(ModificationStatement.java:441)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:416)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:217)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:248) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:233) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_151]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
> {code}
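As a rough illustration (not Cassandra's actual ColumnCondition logic), the NPE pattern here is a condition comparison that dereferences a missing (null) collection cell. A null-safe variant would treat the missing cell as a non-match instead:

```java
import java.util.List;

// Illustrative only -- not Cassandra's actual ColumnCondition code.
// The reported NPE occurs when a CAS condition compares a non-null
// expected value against a collection cell that is absent (null for an
// empty frozen collection). A null-safe comparison treats the missing
// cell as "condition not met" rather than dereferencing it.
public class NullSafeCondition
{
    static boolean appliesTo(List<String> expected, List<String> actualCell)
    {
        if (actualCell == null)      // empty frozen collection -> null cell
            return expected == null; // only a null expectation matches
        return actualCell.equals(expected);
    }

    public static void main(String[] args)
    {
        // A non-null expectation against a missing cell: no NPE, just false.
        System.out.println(appliesTo(List.of("x"), null)); // false
    }
}
```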






[jira] [Updated] (CASSANDRA-13474) Cassandra pluggable storage engine

2018-03-06 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-13474:
--
Description: 
Instagram is working on a project to significantly reduce Cassandra's tail 
latency, by implementing a new storage engine on top of RocksDB, named 
Rocksandra.

We started with a prototype for a single-column (key-value) use case, and then 
implemented a full design to support most of the data types and data models in 
Cassandra, as well as streaming.

After a year of development and testing, we have rolled out the Rocksandra 
project to our internal deployments, and observed 3-4X reduction on P99 read 
latency in general, even more than 10 times reduction for some use cases.

We published a blog post about the wins and the benchmark metrics on AWS 
environment. 
https://engineering.instagram.com/open-sourcing-a-10x-reduction-in-apache-cassandra-tail-latency-d64f86b43589

I think the biggest performance win comes from getting rid of most of the Java 
garbage created by the current read/write path and compactions, which reduces 
JVM overhead and makes latency more predictable.

We are very excited about the potential performance gain. As the next step, I 
propose making the Cassandra storage engine pluggable (like MySQL and MongoDB), 
and we are very interested in working with the community to provide RocksDB as 
one storage option with more predictable performance.

Design doc for pluggable storage engine: 
https://docs.google.com/document/d/1suZlvhzgB6NIyBNpM9nxoHxz_Ri7qAm-UEO8v8AIFsc/edit

  was:
We did some experiments switching Cassandra's storage engine to RocksDB.

In the experiment, I built a prototype integrating Cassandra 3.0.12 and RocksDB 
for a single-column (key-value) use case, shadowed one of our production use 
cases, and saw about a 4-6X P99 read latency drop during peak time compared to 
3.0.12. The P99 latency also became more predictable.

Here is detailed note with more metrics:

[https://docs.google.com/document/d/1Ztqcu8Jzh4USKoWBgDJQw82DBurQmsV-PmfiJYvu_Dc/edit?usp=sharing]

I think the biggest latency win comes from getting rid of most of the Java 
garbage created by the current read/write path and compactions, which reduces 
JVM overhead and makes latency more predictable.

We are very excited about the potential performance gain. As the next step, I 
propose making the Cassandra storage engine pluggable (like MySQL and MongoDB), 
and we are very interested in working with the community to provide RocksDB as 
one storage option with more predictable performance.

Design doc for pluggable storage engine: 
https://docs.google.com/document/d/1suZlvhzgB6NIyBNpM9nxoHxz_Ri7qAm-UEO8v8AIFsc/edit


> Cassandra pluggable storage engine
> --
>
> Key: CASSANDRA-13474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13474
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Dikang Gu
>Priority: Major
>
> Instagram is working on a project to significantly reduce Cassandra's tail 
> latency, by implementing a new storage engine on top of RocksDB, named 
> Rocksandra.
> We started with a prototype for a single-column (key-value) use case, and 
> then implemented a full design to support most of the data types and data 
> models in Cassandra, as well as streaming.
> After a year of development and testing, we have rolled out the Rocksandra 
> project to our internal deployments, and observed 3-4X reduction on P99 read 
> latency in general, even more than 10 times reduction for some use cases.
> We published a blog post about the wins and the benchmark metrics on AWS 
> environment. 
> https://engineering.instagram.com/open-sourcing-a-10x-reduction-in-apache-cassandra-tail-latency-d64f86b43589
> I think the biggest performance win comes from getting rid of most of the 
> Java garbage created by the current read/write path and compactions, which 
> reduces JVM overhead and makes latency more predictable.
> We are very excited about the potential performance gain. As the next step, I 
> propose making the Cassandra storage engine pluggable (like MySQL and 
> MongoDB), and we are very interested in working with the community to provide 
> RocksDB as one storage option with more predictable performance.
> Design doc for pluggable storage engine: 
> https://docs.google.com/document/d/1suZlvhzgB6NIyBNpM9nxoHxz_Ri7qAm-UEO8v8AIFsc/edit
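To make the proposal concrete, a pluggable-engine boundary might look roughly like the sketch below. The interface and method names are assumptions made for illustration, not taken from the linked design doc:

```java
import java.util.HashMap;
import java.util.Map;

// Purely illustrative: interface and method names are assumptions, not
// from the linked design doc. The idea is that the engine owns the
// local read/write path while Cassandra keeps coordination, so a
// RocksDB-backed implementation could replace the SSTable engine.
interface StorageEngine
{
    void apply(String partitionKey, byte[] value); // local write path
    byte[] read(String partitionKey);              // local read path
}

// Trivial in-memory engine standing in for a RocksDB-backed one.
class InMemoryEngine implements StorageEngine
{
    private final Map<String, byte[]> store = new HashMap<>();
    public void apply(String k, byte[] v) { store.put(k, v); }
    public byte[] read(String k)          { return store.get(k); }
}

public class PluggableEngineSketch
{
    public static void main(String[] args)
    {
        StorageEngine engine = new InMemoryEngine();
        engine.apply("pk1", "v1".getBytes());
        System.out.println(new String(engine.read("pk1"))); // v1
    }
}
```

The design choice being debated is exactly this seam: what the engine interface exposes (per-partition reads/writes here, plus streaming and maintenance in the real design) determines how cleanly an alternative engine can plug in.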






[jira] [Commented] (CASSANDRA-14292) Batch commitlog performance regression in 3.0.16

2018-03-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388039#comment-16388039
 ] 

ZhaoYang commented on CASSANDRA-14292:
--

It's 3.0.16 only.

> Batch commitlog performance regression in 3.0.16
> 
>
> Key: CASSANDRA-14292
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14292
> Project: Cassandra
>  Issue Type: Bug
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Major
> Fix For: 3.0.x
>
>
> Prior to CASSANDRA-13987, in batch commitlog mode, the commitlog was synced 
> to disk as soon as a mutation arrived:
>  * the haveWork semaphore is released in BatchCommitLogService.maybeWaitForSync
>  * AbstractCommitLogService will continue and sync to disk
> After CASSANDRA-13987, a branch was added so that chained markers are flushed 
> more frequently in periodic mode. To make sure the CL still flushes 
> immediately in batch mode, a {{syncRequested}} flag was added.
>  Unfortunately, in the 3.0 branch, this flag is not set to true while a 
> mutation is waiting.
> So AbstractCommitLogService will not execute the CL sync branch until it 
> reaches the sync window (2ms):
> {code:java|title=AbstractCommitLogService.java}
> if (lastSyncedAt + syncIntervalMillis <= pollStarted || shutdown || 
> syncRequested)
> {
> // in this branch, we want to flush the commit log to disk
> syncRequested = false;
> commitLog.sync(shutdown, true);
> lastSyncedAt = pollStarted;
> syncComplete.signalAll();
> }
> else
> {
> // in this branch, just update the commit log sync headers
> commitLog.sync(false, false);
> }
> {code}
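A minimal, self-contained simulation of the flag logic above. Field names follow the quoted AbstractCommitLogService code; the waiting-side fix of setting {{syncRequested}} before blocking is the reporter's described remedy, sketched here as an assumption rather than a verbatim patch:

```java
// Toy simulation of the quoted sync-loop branch. Not Cassandra code:
// names mirror the quoted AbstractCommitLogService snippet, and the
// "missing line" marked below is the behavior the ticket says the 3.0
// branch lacks.
public class SyncLoopSketch
{
    volatile boolean syncRequested = false;
    long lastSyncedAt = 0;
    static final long SYNC_INTERVAL_MILLIS = 2;

    // Batch-mode waiter: must request an immediate sync before blocking.
    void maybeWaitForSync()
    {
        syncRequested = true; // the assignment missing in the 3.0 branch
    }

    // Returns true when the loop takes the "flush to disk" branch.
    boolean syncLoopIteration(long pollStarted)
    {
        if (lastSyncedAt + SYNC_INTERVAL_MILLIS <= pollStarted || syncRequested)
        {
            syncRequested = false;
            lastSyncedAt = pollStarted;
            return true;  // commitLog.sync(..., true): flush to disk
        }
        return false;     // header-only sync, no flush
    }

    public static void main(String[] args)
    {
        SyncLoopSketch s = new SyncLoopSketch();
        // Without the flag, a poll inside the 2ms window skips the flush...
        System.out.println(s.syncLoopIteration(1)); // false
        // ...with it, the flush happens immediately.
        s.maybeWaitForSync();
        System.out.println(s.syncLoopIteration(1)); // true
    }
}
```

This makes the regression visible: a batch-mode mutation that arrives inside the 2ms window waits out the window instead of being flushed at once.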






[jira] [Commented] (CASSANDRA-14292) Batch commitlog performance regression in 3.0.16

2018-03-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388033#comment-16388033
 ] 

Ariel Weisberg commented on CASSANDRA-14292:


[~jasobrown] looks like it wasn't registering to receive notification of the 
sync before requesting the sync. It's another way for {{syncRequested}} to get 
clobbered.

I'm +1 on this change. Is it a 3.11 bug as well?

> Batch commitlog performance regression in 3.0.16
> 
>
> Key: CASSANDRA-14292
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14292
> Project: Cassandra
>  Issue Type: Bug
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Major
> Fix For: 3.0.x
>
>
> Prior to CASSANDRA-13987, in batch commitlog mode, the commitlog was synced 
> to disk as soon as a mutation arrived:
>  * the haveWork semaphore is released in BatchCommitLogService.maybeWaitForSync
>  * AbstractCommitLogService will continue and sync to disk
> After CASSANDRA-13987, a branch was added so that chained markers are flushed 
> more frequently in periodic mode. To make sure the CL still flushes 
> immediately in batch mode, a {{syncRequested}} flag was added.
>  Unfortunately, in the 3.0 branch, this flag is not set to true while a 
> mutation is waiting.
> So AbstractCommitLogService will not execute the CL sync branch until it 
> reaches the sync window (2ms):
> {code:java|title=AbstractCommitLogService.java}
> if (lastSyncedAt + syncIntervalMillis <= pollStarted || shutdown || 
> syncRequested)
> {
> // in this branch, we want to flush the commit log to disk
> syncRequested = false;
> commitLog.sync(shutdown, true);
> lastSyncedAt = pollStarted;
> syncComplete.signalAll();
> }
> else
> {
> // in this branch, just update the commit log sync headers
> commitLog.sync(false, false);
> }
> {code}






[jira] [Updated] (CASSANDRA-14251) View replica is not written to pending endpoint when base replica is also view replica

2018-03-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14251:

   Resolution: Fixed
 Reviewer: ZhaoYang
Reproduced In: 3.11.1, 3.0.15  (was: 3.0.15, 3.11.1)
   Status: Resolved  (was: Patch Available)

Updated {{NEWS.TXT}} wording to state that potentially affected users must run 
repair in the base table and subsequently on the views, after [mailing list 
discussion|https://www.mail-archive.com/dev@cassandra.apache.org/msg12128.html].

Committed fix as {{c67338989f17257d3be95212ca6ecb4b83009326}} to cassandra-3.0 
and merged up to cassandra-3.11 and trunk and dtest as 
{{8fa87f63dec7dd636473b620071d264893a19df8}}. Thanks for the review 
[~jasonstack].

> View replica is not written to pending endpoint when base replica is also 
> view replica
> --
>
> Key: CASSANDRA-14251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14251
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Major
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> From the [dev 
> list|https://www.mail-archive.com/dev@cassandra.apache.org/msg12084.html]:
> bq. There's an optimization that when we're lucky enough that the paired view 
> replica is the same as this base replica, mutateMV doesn't use the normal 
> view-mutation-sending code (wrapViewBatchResponseHandler) and just writes the 
> mutation locally. In particular, in this case we do NOT write to the pending 
> node (unless I'm missing something). But, sometimes all replicas will be 
> paired with themselves - this can happen for example when number of nodes is 
> equal to RF, or when the base and view table have the same partition keys 
> (but different clustering keys). In this case, it seems the pending node will 
> not be written at all...
> This was a regression from CASSANDRA-13069 and the original behavior should 
> be restored.






[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-03-06 Thread paulo
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4b4c05e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4b4c05e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4b4c05e0

Branch: refs/heads/trunk
Commit: 4b4c05e0de3f49a29aae80b6b8f9537626d996b5
Parents: a7a36e7 a060698
Author: Paulo Motta 
Authored: Tue Mar 6 11:21:38 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:24:14 2018 -0300

--
 CHANGES.txt | 1 +
 NEWS.txt| 8 
 src/java/org/apache/cassandra/service/StorageProxy.java | 7 +--
 3 files changed, 14 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4b4c05e0/CHANGES.txt
--
diff --cc CHANGES.txt
index 5fefc02,ff726a9..2b2baea
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,208 -1,8 +1,209 @@@
 +4.0
 + * Use Murmur3 for validation compactions (CASSANDRA-14002)
 + * Comma at the end of the seed list is interpretated as localhost 
(CASSANDRA-14285)
 + * Refactor read executor and response resolver, abstract read repair 
(CASSANDRA-14058)
 + * Add optional startup delay to wait until peers are ready (CASSANDRA-13993)
 + * Add a few options to nodetool verify (CASSANDRA-14201)
 + * CVE-2017-5929 Security vulnerability and redefine default log rotation 
policy (CASSANDRA-14183)
 + * Use JVM default SSL validation algorithm instead of custom default 
(CASSANDRA-13259)
 + * Better document in code InetAddressAndPort usage post 7544, incorporate 
port into UUIDGen node (CASSANDRA-14226)
 + * Fix sstablemetadata date string for minLocalDeletionTime (CASSANDRA-14132)
 + * Make it possible to change neverPurgeTombstones during runtime 
(CASSANDRA-14214)
 + * Remove GossipDigestSynVerbHandler#doSort() (CASSANDRA-14174)
 + * Add nodetool clientlist (CASSANDRA-13665)
 + * Revert ProtocolVersion changes from CASSANDRA-7544 (CASSANDRA-14211)
 + * Non-disruptive seed node list reload (CASSANDRA-14190)
 + * Nodetool tablehistograms to print statics for all the tables 
(CASSANDRA-14185)
 + * Migrate dtests to use pytest and python3 (CASSANDRA-14134)
 + * Allow storage port to be configurable per node (CASSANDRA-7544)
 + * Make sub-range selection for non-frozen collections return null instead of 
empty (CASSANDRA-14182)
 + * BloomFilter serialization format should not change byte ordering 
(CASSANDRA-9067)
 + * Remove unused on-heap BloomFilter implementation (CASSANDRA-14152)
 + * Delete temp test files on exit (CASSANDRA-14153)
 + * Make PartitionUpdate and Mutation immutable (CASSANDRA-13867)
 + * Fix CommitLogReplayer exception for CDC data (CASSANDRA-14066)
 + * Fix cassandra-stress startup failure (CASSANDRA-14106)
 + * Remove initialDirectories from CFS (CASSANDRA-13928)
 + * Fix trivial log format error (CASSANDRA-14015)
 + * Allow sstabledump to do a json object per partition (CASSANDRA-13848)
 + * Add option to optimise merkle tree comparison across replicas 
(CASSANDRA-3200)
 + * Remove unused and deprecated methods from AbstractCompactionStrategy 
(CASSANDRA-14081)
 + * Fix Distribution.average in cassandra-stress (CASSANDRA-14090)
 + * Support a means of logging all queries as they were invoked 
(CASSANDRA-13983)
 + * Presize collections (CASSANDRA-13760)
 + * Add GroupCommitLogService (CASSANDRA-13530)
 + * Parallelize initial materialized view build (CASSANDRA-12245)
 + * Fix flaky SecondaryIndexManagerTest.assert[Not]MarkedAsBuilt 
(CASSANDRA-13965)
 + * Make LWTs send resultset metadata on every request (CASSANDRA-13992)
 + * Fix flaky indexWithFailedInitializationIsNotQueryableAfterPartialRebuild 
(CASSANDRA-13963)
 + * Introduce leaf-only iterator (CASSANDRA-9988)
 + * Upgrade Guava to 23.3 and Airline to 0.8 (CASSANDRA-13997)
 + * Allow only one concurrent call to StatusLogger (CASSANDRA-12182)
 + * Refactoring to specialised functional interfaces (CASSANDRA-13982)
 + * Speculative retry should allow more friendly params (CASSANDRA-13876)
 + * Throw exception if we send/receive repair messages to incompatible nodes 
(CASSANDRA-13944)
 + * Replace usages of MessageDigest with Guava's Hasher (CASSANDRA-13291)
 + * Add nodetool cmd to print hinted handoff window (CASSANDRA-13728)
 + * Fix some alerts raised by static analysis (CASSANDRA-13799)
 + * Checksum sstable metadata (CASSANDRA-13321, CASSANDRA-13593)
 + * Add result set metadata to prepared statement MD5 hash calculation 
(CASSANDRA-10786)
 + * Refactor GcCompactionTest to avoid boxing (CASSANDRA-13941)
 + * Expose recent histograms in JmxHistograms (CASSANDRA-13642)
 + * Fix buffer length comparison when decompressing in 

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-03-06 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a060698c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a060698c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a060698c

Branch: refs/heads/cassandra-3.11
Commit: a060698c55797d8f92db290ea2fe209433dc7b3f
Parents: 7f02348 c673389
Author: Paulo Motta 
Authored: Tue Mar 6 11:14:49 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:15:15 2018 -0300

--
 CHANGES.txt | 1 +
 NEWS.txt| 8 
 src/java/org/apache/cassandra/service/StorageProxy.java | 7 +--
 3 files changed, 14 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a060698c/CHANGES.txt
--
diff --cc CHANGES.txt
index a4be758,ad558de..ff726a9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,7 -1,5 +1,8 @@@
 -3.0.17
 +3.11.3
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
   * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a060698c/NEWS.txt
--
diff --cc NEWS.txt
index 445623e,64de28a..745dad2
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -42,8 -42,17 +42,16 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 -3.0.17
++3.11.3
+ =
+ 
+ Upgrading
+ -
 -- Materialized view users upgrading from 3.0.15 or later that have 
performed range movements (join, decommission, move, etc),
 -  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly
 -  propagated to all replicas.
++- Materialized view users upgrading from 3.0.15 (3.0.X series) or 3.11.1 
(3.11.X series) and  later that have performed range movements (join, 
decommission, move, etc),
++  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly propagated to all replicas.
+ 
 -3.0.16
 -=
 +3.11.2
 +==
  
  Upgrading
  -

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a060698c/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index e67d46e,7a6bed4..4dc05e3
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -795,9 -759,11 +795,12 @@@ public class StorageProxy implements St
  continue;
  }
  
- // When local node is the paired endpoint just apply the 
mutation locally.
- if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined())
+ // When local node is the endpoint we can just apply the 
mutation locally,
+ // unless there are pending endpoints, in which case we 
want to do an ordinary
+ // write so the view mutation is sent to the pending 
endpoint
+ if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined()
+ && pendingEndpoints.isEmpty())
 +{
  try
  {
  mutation.apply(writeCommitLog);





[3/6] cassandra git commit: Write to pending endpoint when view replica is also base replica

2018-03-06 Thread paulo
Write to pending endpoint when view replica is also base replica

Patch by Paulo Motta; Reviewed by Zhao Yang for CASSANDRA-14251


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6733898
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6733898
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6733898

Branch: refs/heads/trunk
Commit: c67338989f17257d3be95212ca6ecb4b83009326
Parents: 85fafd0
Author: Paulo Motta 
Authored: Wed Feb 21 19:55:41 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:14:18 2018 -0300

--
 CHANGES.txt | 1 +
 NEWS.txt| 9 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 7 +--
 3 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9734507..ad558de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
  * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
  * Fully utilise specified compaction threads (CASSANDRA-14210)
  * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d282b22..64de28a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -42,6 +42,15 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.17
+=
+
+Upgrading
+-
+- Materialized view users upgrading from 3.0.15 or later that have 
performed range movements (join, decommission, move, etc),
+  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly
+  propagated to all replicas.
+
 3.0.16
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index e380a3f..7a6bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -759,8 +759,11 @@ public class StorageProxy implements StorageProxyMBean
 continue;
 }
 
-// When local node is the paired endpoint just apply the 
mutation locally.
-if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined())
+// When local node is the endpoint we can just apply the 
mutation locally,
+// unless there are pending endpoints, in which case we 
want to do an ordinary
+// write so the view mutation is sent to the pending 
endpoint
+if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined()
+&& pendingEndpoints.isEmpty())
 try
 {
 mutation.apply(writeCommitLog);





[1/6] cassandra git commit: Write to pending endpoint when view replica is also base replica

2018-03-06 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 85fafd0c1 -> c67338989
  refs/heads/cassandra-3.11 7f02348ec -> a060698c5
  refs/heads/trunk a7a36e703 -> 4b4c05e0d


Write to pending endpoint when view replica is also base replica

Patch by Paulo Motta; Reviewed by Zhao Yang for CASSANDRA-14251


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6733898
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6733898
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6733898

Branch: refs/heads/cassandra-3.0
Commit: c67338989f17257d3be95212ca6ecb4b83009326
Parents: 85fafd0
Author: Paulo Motta 
Authored: Wed Feb 21 19:55:41 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:14:18 2018 -0300

--
 CHANGES.txt | 1 +
 NEWS.txt| 9 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 7 +--
 3 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9734507..ad558de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
  * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
  * Fully utilise specified compaction threads (CASSANDRA-14210)
  * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d282b22..64de28a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -42,6 +42,15 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.17
+=
+
+Upgrading
+-
+- Materialized view users upgrading from 3.0.15 or later that have 
performed range movements (join, decommission, move, etc),
+  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly
+  propagated to all replicas.
+
 3.0.16
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index e380a3f..7a6bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -759,8 +759,11 @@ public class StorageProxy implements StorageProxyMBean
 continue;
 }
 
-// When local node is the paired endpoint just apply the 
mutation locally.
-if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined())
+// When local node is the endpoint we can just apply the 
mutation locally,
+// unless there are pending endpoints, in which case we 
want to do an ordinary
+// write so the view mutation is sent to the pending 
endpoint
+if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined()
+&& pendingEndpoints.isEmpty())
 try
 {
 mutation.apply(writeCommitLog);





[2/6] cassandra git commit: Write to pending endpoint when view replica is also base replica

2018-03-06 Thread paulo
Write to pending endpoint when view replica is also base replica

Patch by Paulo Motta; Reviewed by Zhao Yang for CASSANDRA-14251


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6733898
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6733898
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6733898

Branch: refs/heads/cassandra-3.11
Commit: c67338989f17257d3be95212ca6ecb4b83009326
Parents: 85fafd0
Author: Paulo Motta 
Authored: Wed Feb 21 19:55:41 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:14:18 2018 -0300

--
 CHANGES.txt | 1 +
 NEWS.txt| 9 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 7 +--
 3 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9734507..ad558de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
  * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
  * Fully utilise specified compaction threads (CASSANDRA-14210)
  * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index d282b22..64de28a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -42,6 +42,15 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+3.0.17
+=
+
+Upgrading
+-
+- Materialized view users upgrading from 3.0.15 or later that have 
performed range movements (join, decommission, move, etc),
+  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly
+  propagated to all replicas.
+
 3.0.16
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6733898/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index e380a3f..7a6bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -759,8 +759,11 @@ public class StorageProxy implements StorageProxyMBean
 continue;
 }
 
-// When local node is the paired endpoint just apply the 
mutation locally.
-if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined())
+// When local node is the endpoint we can just apply the 
mutation locally,
+// unless there are pending endpoints, in which case we 
want to do an ordinary
+// write so the view mutation is sent to the pending 
endpoint
+if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined()
+&& pendingEndpoints.isEmpty())
 try
 {
 mutation.apply(writeCommitLog);





[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-03-06 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a060698c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a060698c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a060698c

Branch: refs/heads/trunk
Commit: a060698c55797d8f92db290ea2fe209433dc7b3f
Parents: 7f02348 c673389
Author: Paulo Motta 
Authored: Tue Mar 6 11:14:49 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:15:15 2018 -0300

--
 CHANGES.txt | 1 +
 NEWS.txt| 8 
 src/java/org/apache/cassandra/service/StorageProxy.java | 7 +--
 3 files changed, 14 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a060698c/CHANGES.txt
--
diff --cc CHANGES.txt
index a4be758,ad558de..ff726a9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,7 -1,5 +1,8 @@@
 -3.0.17
 +3.11.3
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
   * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a060698c/NEWS.txt
--
diff --cc NEWS.txt
index 445623e,64de28a..745dad2
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -42,8 -42,17 +42,16 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 -3.0.17
++3.11.3
+ =
+ 
+ Upgrading
+ -
 -- Materialized view users upgrading from 3.0.15 or later that have 
performed range movements (join, decommission, move, etc),
 -  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly
 -  propagated to all replicas.
++- Materialized view users upgrading from 3.0.15 (3.0.X series) or 3.11.1 
(3.11.X series) and  later that have performed range movements (join, 
decommission, move, etc),
++  should run repair on the base tables, and subsequently on the views to 
ensure data affected by CASSANDRA-14251 is correctly propagated to all replicas.
+ 
 -3.0.16
 -=
 +3.11.2
 +==
  
  Upgrading
  -

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a060698c/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index e67d46e,7a6bed4..4dc05e3
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -795,9 -759,11 +795,12 @@@ public class StorageProxy implements St
  continue;
  }
  
- // When local node is the paired endpoint just apply the 
mutation locally.
- if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined())
+ // When local node is the endpoint we can just apply the 
mutation locally,
+ // unless there are pending endpoints, in which case we 
want to do an ordinary
+ // write so the view mutation is sent to the pending 
endpoint
+ if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined()
+ && pendingEndpoints.isEmpty())
 +{
  try
  {
  mutation.apply(writeCommitLog);





cassandra-dtest git commit: Add test for CASSANDRA-14251

2018-03-06 Thread paulo
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 2e0344c03 -> 8fa87f63d


Add test for CASSANDRA-14251


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/8fa87f63
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/8fa87f63
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/8fa87f63

Branch: refs/heads/master
Commit: 8fa87f63dec7dd636473b620071d264893a19df8
Parents: 2e0344c
Author: Paulo Motta 
Authored: Wed Feb 21 20:16:41 2018 -0300
Committer: Paulo Motta 
Committed: Tue Mar 6 11:22:33 2018 -0300

--
 materialized_views_test.py | 47 +
 1 file changed, 47 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/8fa87f63/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 8d38ee8..a723c4f 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -533,6 +533,53 @@ class TestMaterializedViews(Tester):
 for i in range(1000, 1100):
 assert_one(session, "SELECT * FROM t_by_v WHERE v = 
{}".format(-i), [-i, i])
 
+def test_insert_during_range_movement_rf1(self):
+self._base_test_insert_during_range_movement(rf=1)
+
+def test_insert_during_range_movement_rf2(self):
+self._base_test_insert_during_range_movement(rf=2)
+
+def test_insert_during_range_movement_rf3(self):
+self._base_test_insert_during_range_movement(rf=3)
+
+def _base_test_insert_during_range_movement(self, rf):
+"""
+@jira_ticket CASSANDRA-14251
+
+Test that materialized views replication work in the middle of a join
+for different replication factors.
+"""
+
+session = self.prepare(rf=rf)
+
+logger.debug("Creating table and view")
+
+session.execute("CREATE TABLE t (id int PRIMARY KEY, v int)")
+session.execute(("CREATE MATERIALIZED VIEW t_by_v AS SELECT * FROM t "
+ "WHERE v IS NOT NULL AND id IS NOT NULL PRIMARY KEY 
(v, id)"))
+
+logger.debug("Starting new node4 in write survey mode")
+node4 = new_node(self.cluster)
+# Set batchlog.replay_timeout_in_ms=1 so we can ensure batchlog will 
be replayed below
+node4.start(wait_for_binary_proto=True, 
jvm_args=["-Dcassandra.write_survey=true",
+  
"-Dcassandra.batchlog.replay_timeout_in_ms=1"])
+
+logger.debug("Insert data while node4 is joining")
+
+for i in range(1000):
+session.execute("INSERT INTO t (id, v) VALUES ({id}, 
{v})".format(id=i, v=-i))
+
+logger.debug("Finish joining node4")
+node4.nodetool("join")
+
+logger.debug('Replay batchlogs')
+time.sleep(0.001)  # Wait batchlog.replay_timeout_in_ms=1 (ms)
+self._replay_batchlogs()
+
+logger.debug("Verify data")
+for i in range(1000):
+assert_one(session, "SELECT * FROM t_by_v WHERE v = 
{}".format(-i), [-i, i])
+
 @pytest.mark.resource_intensive
 def test_add_node_after_wide_mv_with_range_deletions(self):
 """





[jira] [Updated] (CASSANDRA-14244) Some tests in read_repair_test are flakey

2018-03-06 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-14244:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Thanks, committed in 2e0344c0387310db7ed086743c3932c4a193d4bb

> Some tests in read_repair_test are flakey
> -
>
> Key: CASSANDRA-14244
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14244
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Major
>  Labels: dtest
>
> Since being refactored for CASSANDRA-14134, 
> {{test_alter_rf_and_run_read_repair}} and {{test_read_repair_chance}} in 
> {{read_repair_test.TestReadRepair}} are flakey and regularly fail on all 
> branches. The problem is that the initial setup of these two tests doesn't 
> explicitly set the {{read_repair_chance}} or {{dclocal_read_repair_chance}} 
> properties on the test table. As a consequence, read repairs are sometimes 
> probabilistically triggered and query results don't match the expectations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




cassandra-dtest git commit: Make replica selection deterministic in read_repair_test

2018-03-06 Thread samt
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 7fd89e146 -> 2e0344c03


Make replica selection deterministic in read_repair_test

Patch by Sam Tunnicliffe; reviewed by Marcus Eriksson for CASSANDRA-14244

Closes #20


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/2e0344c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/2e0344c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/2e0344c0

Branch: refs/heads/master
Commit: 2e0344c0387310db7ed086743c3932c4a193d4bb
Parents: 7fd89e1
Author: Sam Tunnicliffe 
Authored: Tue Feb 20 16:20:23 2018 -0800
Committer: Sam Tunnicliffe 
Committed: Tue Mar 6 13:47:54 2018 +

--
 read_repair_test.py | 25 -
 1 file changed, 20 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/2e0344c0/read_repair_test.py
--
diff --git a/read_repair_test.py b/read_repair_test.py
index 7e8d405..5fbe1ba 100644
--- a/read_repair_test.py
+++ b/read_repair_test.py
@@ -1,3 +1,4 @@
+import os
 import time
 import pytest
 import logging
@@ -18,8 +19,20 @@ class TestReadRepair(Tester):
 
 @pytest.fixture(scope='function', autouse=True)
 def fixture_set_cluster_settings(self, fixture_dtest_setup):
-
fixture_dtest_setup.cluster.set_configuration_options(values={'hinted_handoff_enabled':
 False})
-
fixture_dtest_setup.cluster.populate(3).start(wait_for_binary_proto=True)
+cluster = fixture_dtest_setup.cluster
+cluster.populate(3)
+# disable dynamic snitch to make replica selection deterministic
+# when we use patient_exclusive_cql_connection, CL=1 and RF=n
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False,
+  'endpoint_snitch': 
'GossipingPropertyFileSnitch',
+  'dynamic_snitch': False})
+for node in cluster.nodelist():
+with open(os.path.join(node.get_conf_dir(), 
'cassandra-rackdc.properties'), 'w') as snitch_file:
+snitch_file.write("dc=datacenter1" + os.linesep)
+snitch_file.write("rack=rack1" + os.linesep)
+snitch_file.write("prefer_local=true" + os.linesep)
+
+cluster.start(wait_for_binary_proto=True)
 
 @since('3.0')
 def test_alter_rf_and_run_read_repair(self):
@@ -118,10 +131,13 @@ class TestReadRepair(Tester):
 :param session: Used to perform the schema setup & insert the data
 :return: a tuple containing the node which initially acts as the 
replica, and a list of the other two nodes
 """
-# Disable speculative retry to make it clear that we only query 
additional nodes because of read_repair_chance
+# Disable speculative retry and [dclocal]read_repair in initial setup.
 session.execute("""CREATE KEYSPACE alter_rf_test
WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};""")
-session.execute("CREATE TABLE alter_rf_test.t1 (k int PRIMARY KEY, a 
int, b int) WITH speculative_retry='NONE';")
+session.execute("""CREATE TABLE alter_rf_test.t1 (k int PRIMARY KEY, a 
int, b int)
+   WITH speculative_retry='NONE'
+   AND read_repair_chance=0
+   AND dclocal_read_repair_chance=0;""")
 session.execute("INSERT INTO alter_rf_test.t1 (k, a, b) VALUES (1, 1, 
1);")
 
 # identify the initial replica and trigger a flush to ensure reads 
come from sstables
@@ -184,7 +200,6 @@ class TestReadRepair(Tester):
 if res != expected:
 raise NotRepairedException()
 
-
 @since('2.0')
 def test_range_slice_query_with_tombstones(self):
 """


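The flakiness fixed above is easy to quantify: with any non-zero read repair chance, each read independently has some probability of triggering a background read repair, so the probability that a whole test run stays repair-free decays geometrically. A back-of-envelope sketch (the 0.1 value is assumed for illustration only; the patch instead pins the chance to 0 in the table schema):

```python
def p_no_read_repair(n_reads, chance=0.1):
    """Probability that n independent reads all skip probabilistic read repair,
    given a per-read trigger probability of `chance`."""
    return (1 - chance) ** n_reads

# Even a modest number of reads makes an unwanted repair likely, which is why
# the fixed tests set read_repair_chance=0 and dclocal_read_repair_chance=0.
for n in (1, 10, 50):
    print(n, round(p_no_read_repair(n), 4))
```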



[jira] [Commented] (CASSANDRA-14244) Some tests in read_repair_test are flakey

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387807#comment-16387807
 ] 

ASF GitHub Bot commented on CASSANDRA-14244:


Github user asfgit closed the pull request at:

https://github.com/apache/cassandra-dtest/pull/20


> Some tests in read_repair_test are flakey
> -
>
> Key: CASSANDRA-14244
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14244
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Major
>  Labels: dtest
>
> Since being refactored for CASSANDRA-14134, 
> {{test_alter_rf_and_run_read_repair}} and {{test_read_repair_chance}} in 
> {{read_repair_test.TestReadRepair}} are flakey and regularly fail on all 
> branches. The problem is that the initial setup of these two tests doesn't 
> explicitly set the {{read_repair_chance}} or {{dclocal_read_repair_chance}} 
> properties on the test table. As a consequence, read repairs are sometimes 
> probabilistically triggered and query results don't match the expectations.







[1/3] cassandra git commit: Fix javadoc comment

2018-03-06 Thread spod
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5f31bb415 -> a7a36e703


Fix javadoc comment

From my reading of the code, the double-negative appeared incorrect.

Closes #167


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/614aff11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/614aff11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/614aff11

Branch: refs/heads/trunk
Commit: 614aff11173b4a52ae2b0f7f7a7f2db84a8f7b75
Parents: 5f31bb4
Author: Ryan Scheidter 
Authored: Fri Oct 27 09:59:44 2017 -0500
Committer: Stefan Podkowinski 
Committed: Tue Mar 6 10:49:15 2018 +0100

--
 src/java/org/apache/cassandra/repair/StreamingRepairTask.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/614aff11/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
--
diff --git a/src/java/org/apache/cassandra/repair/StreamingRepairTask.java 
b/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
index 59fee0b..0122b31 100644
--- a/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
+++ b/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
@@ -40,7 +40,7 @@ import org.apache.cassandra.streaming.StreamState;
 import org.apache.cassandra.streaming.StreamOperation;
 
 /**
- * StreamingRepairTask performs data streaming between two remote replica 
which neither is not repair coordinator.
+ * StreamingRepairTask performs data streaming between two remote replicas, 
neither of which is repair coordinator.
  * Task will send {@link SyncComplete} message back to coordinator upon 
streaming completion.
  */
 public class StreamingRepairTask implements Runnable, StreamEventHandler





[2/3] cassandra git commit: Fix examples and formatting in native_protocol_v5.spec

2018-03-06 Thread spod
Fix examples and formatting in native_protocol_v5.spec

Closes #171


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9ed5fc07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9ed5fc07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9ed5fc07

Branch: refs/heads/trunk
Commit: 9ed5fc07390637facb55f4ee0b8b917de93067b4
Parents: 614aff1
Author: Matthias Weiser 
Authored: Tue Nov 7 08:15:24 2017 +0100
Committer: Stefan Podkowinski 
Committed: Tue Mar 6 11:06:11 2018 +0100

--
 doc/native_protocol_v5.spec | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9ed5fc07/doc/native_protocol_v5.spec
--
diff --git a/doc/native_protocol_v5.spec b/doc/native_protocol_v5.spec
index 7f6ba42..c15794c 100644
--- a/doc/native_protocol_v5.spec
+++ b/doc/native_protocol_v5.spec
@@ -93,8 +93,8 @@ Table of Contents
   it is moving. The rest of that byte is the protocol version (5 for the 
protocol
   defined in this document). In other words, for this version of the protocol,
   version will be one of:
-0x04Request frame for this protocol version
-0x84Response frame for this protocol version
+0x05Request frame for this protocol version
+0x85Response frame for this protocol version
 
   Please note that while every message ships with the version, only one version
   of messages is accepted on a given connection. In other words, the first 
message
@@ -409,13 +409,13 @@ Table of Contents
   Executes a prepared query. The body of the message must be:
   
   where
-  -  is the prepared query ID. It's the [short bytes] returned as a
-  response to a PREPARE message. As for , it has the 
exact
-  same definition as in QUERY (see Section 4.1.4).
+-  is the prepared query ID. It's the [short bytes] returned as a
+  response to a PREPARE message.
 -  is the ID of the resultset metadata that was sent
   along with response to PREPARE message. If a RESULT/Rows message reports
   changed resultset metadata with the Metadata_changed flag, the reported 
new
   resultset metadata must be used in subsequent executions.
+-  has the exact same definition as in QUERY (see 
Section 4.1.4).
 
 
 4.1.7. BATCH


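Per the corrected spec text above, the first byte of a native-protocol frame packs the direction into the high bit (1 = response) and the protocol version into the low seven bits, giving 0x05 for v5 requests and 0x85 for v5 responses. A small sketch of building and parsing that byte:

```python
def frame_version_byte(protocol_version, is_response):
    """Build the first byte of a native-protocol frame header: high bit is
    the direction flag (1 = response), low 7 bits the protocol version."""
    return (0x80 if is_response else 0x00) | (protocol_version & 0x7F)

def parse_version_byte(b):
    """Split the version byte back into (protocol_version, is_response)."""
    return b & 0x7F, bool(b & 0x80)

assert frame_version_byte(5, False) == 0x05  # request frame, protocol v5
assert frame_version_byte(5, True) == 0x85   # response frame, protocol v5
assert parse_version_byte(0x85) == (5, True)
```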



[3/3] cassandra git commit: Docs: minor title and doc/README.md changes

2018-03-06 Thread spod
Docs: minor title and doc/README.md changes

Change title to just "Reporting Bugs"
Add sphinx instructions for Python 3.6 on Windows

Closes #202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7a36e70
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7a36e70
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7a36e70

Branch: refs/heads/trunk
Commit: a7a36e703c8b4a0a126b2f750e2dbe56440658ea
Parents: 9ed5fc0
Author: kbrotman <33668875+kbrot...@users.noreply.github.com>
Authored: Fri Mar 2 16:28:30 2018 +
Committer: Stefan Podkowinski 
Committed: Tue Mar 6 11:20:51 2018 +0100

--
 doc/README.md   | 7 ---
 doc/source/bugs.rst | 4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7a36e70/doc/README.md
--
diff --git a/doc/README.md b/doc/README.md
index 0badd8a..4d7dd05 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -20,15 +20,16 @@ the `source` subdirectory. The documentation uses 
[sphinx](http://www.sphinx-doc
 and is thus written in 
[reStructuredText](http://docutils.sourceforge.net/rst.html).
 
 To build the HTML documentation, you will need to first install sphinx and the
-[sphinx ReadTheDocs theme](the https://pypi.python.org/pypi/sphinx_rtd_theme), 
which
-on unix you can do with:
+[sphinx ReadTheDocs theme](https://pypi.python.org/pypi/sphinx_rtd_theme).
+When using Python 3.6 on Windows, use `py -m pip install sphinx 
sphinx_rtd_theme`, on unix
+use:
 ```
 pip install sphinx sphinx_rtd_theme
 ```
 
 The documentation can then be built from this directory by calling `make html`
 (or `make.bat html` on windows). Alternatively, the top-level `ant gen-doc`
-target can be used.
+target can be used.  When using Python 3.6 on Windows, use `sphinx-build -b 
html source build`.
 
 To build the documentation with Docker Compose, run:
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7a36e70/doc/source/bugs.rst
--
diff --git a/doc/source/bugs.rst b/doc/source/bugs.rst
index 240cfd4..bd58a8f 100644
--- a/doc/source/bugs.rst
+++ b/doc/source/bugs.rst
@@ -14,8 +14,8 @@
 .. See the License for the specific language governing permissions and
 .. limitations under the License.
 
-Reporting Bugs and Contributing
-===
+Reporting Bugs
+==
 
 If you encounter a problem with Cassandra, the first places to ask for help 
are the :ref:`user mailing list
 ` and the ``#cassandra`` :ref:`IRC channel `.





[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387566#comment-16387566
 ] 

ASF GitHub Bot commented on CASSANDRA-11381:


Github user michaelsembwever commented on the issue:

https://github.com/apache/cassandra-dtest/pull/19
  
@ptnapoleon ping


> Node running with join_ring=false and authentication can not serve requests
> ---
>
> Key: CASSANDRA-11381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
>Priority: Major
> Fix For: 2.2.10, 3.0.14, 3.11.0, 4.0
>
>
> Starting up a node with {{-Dcassandra.join_ring=false}} in a cluster that has 
> authentication configured, eg PasswordAuthenticator, won't be able to serve 
> requests. This is because {{Auth.setup()}} never gets called during the 
> startup.
> Without {{Auth.setup()}} having been called in {{StorageService}} clients 
> connecting to the node fail with the node throwing
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
> at 
> org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception thrown from the 
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]
> {code}
> ResultMessage.Rows rows = 
> authenticateStatement.execute(QueryState.forInternalCalls(), new 
> QueryOptions(consistencyForUser(username),
>   
>Lists.newArrayList(ByteBufferUtil.bytes(username;
> {code}







[jira] [Commented] (CASSANDRA-5836) Seed nodes should be able to bootstrap without manual intervention

2018-03-06 Thread Oleksandr Shulgin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387545#comment-16387545
 ] 

Oleksandr Shulgin commented on CASSANDRA-5836:
--

{quote}Nobody really seems to understand why it's not safe for a seed node to 
bootstrap{quote}

I think the reasoning here is as follows: if seed nodes were allowed to 
bootstrap, the very first seed node would need some mechanism to opt out (or it 
would never be able to start).  Such a mechanism is dangerous to have in the 
first place, because it can fail and then allow a node to skip bootstrap when 
it must not skip it.

But the above only holds in the context of the default {{auto_bootstrap=true}} 
setting.  If we require that it is set to {{false}} when deploying new 
clusters/DCs, the problem goes away and we don't need a special case for the 
very first node.

This way we also don't need the special case of seed nodes at all: the 
bootstrap behavior can be controlled entirely by the {{auto_bootstrap}} 
parameter value, regardless of the node's seed status.  It then becomes *safer* 
to bootstrap a seed node, than not, as with any other node, because it performs 
more checks which can detect configuration problems and doesn't accept client 
reads before it has fully joined the ring.

{quote}It's worth noting here that there is the case of SimpleStrategy in which 
you wouldn't want auto_bootstrap=false (this affects auth, traces, 
system_distributed). This is specifically why you would want every node to 
bootstrap in a new DC (including seeds). The alternative is to get rid of 
SimpleStrategy (or at least stop using it as a default).{quote}

The only case I see where {{SimpleStrategy}} can actually work with multiple 
DCs is when you start multi-DC from scratch.  You will want to change the auth 
keyspace to use NTS and replicate it to all DCs, but you might not care about 
the other two non-local system keyspaces.
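For reference, switching the auth keyspace to NTS is a one-line schema change; 
the DC names and replication factors below are placeholder values:

```sql
-- Replicate system_auth to every DC; 'dc1'/'dc2' and RF 3 are example values.
ALTER KEYSPACE system_auth
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 3
  };

-- The other distributed system keyspaces (system_traces, system_distributed)
-- can be altered the same way if you do care about them.
```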

But are you referring here to a case where you would add a new DC to a cluster 
with data already in the original DC and still using {{SimpleStrategy}}, 
[~KurtG]?  To me that doesn't seem practical.  Even if your data set is so 
small that you can bootstrap the first node of the new DC without running out 
of disk space, how do you make new seeds bootstrap?  Or do you suggest adding 
all nodes as non-seeds first and then doing a rolling restart to designate the 
seeds?  Any reason why you would want to go this way instead of the proper 
{{nodetool rebuild}}?


> Seed nodes should be able to bootstrap without manual intervention
> --
>
> Key: CASSANDRA-5836
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5836
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bill Hathaway
>Priority: Minor
>
> The current logic doesn't allow a seed node to be bootstrapped.  If a user 
> wants to bootstrap a node configured as a seed (for example to replace a seed 
> node via replace_token), they first need to remove the node's own IP from the 
> seed list, and then start the bootstrap process.  This seems like an 
> unnecessary step since a node never uses itself as a seed.
> I think it would be a better experience if the logic was changed to allow a 
> seed node to bootstrap without manual intervention when there are other seed 
> nodes up in a ring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14233) nodetool tablestats/cfstats output has inconsistent formatting for latency

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387529#comment-16387529
 ] 

ASF GitHub Bot commented on CASSANDRA-14233:


Github user spodkowinski commented on the issue:

https://github.com/apache/cassandra/pull/195
  
Please close the PR, as the patch has already been merged in 
d10e6ac606c6b484c75bb850de7a754b75ad5eca


> nodetool tablestats/cfstats output has inconsistent formatting for latency
> --
>
> Key: CASSANDRA-14233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14233
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Samuel Roberts
>Assignee: Samuel Roberts
>Priority: Trivial
> Fix For: 3.11.2, 4.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Latencies are reported at the keyspace level with `ms.` and at the table 
> level with `ms`.
> There should be no trailing `.`, as it is not a sentence and the `.` is not 
> part of the abbreviation.
> This is also present in 2.x with `nodetool cfstats`.






[jira] [Commented] (CASSANDRA-7423) Allow updating individual subfields of UDT

2018-03-06 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387459#comment-16387459
 ] 

Benjamin Lerer commented on CASSANDRA-7423:
---

Can you create a separate ticket?

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>Priority: Major
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.6
>
>
> Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
> have to rewrite the entire type in order to make any modifications), they 
> can't be safely used without LWT for any operation that wants to modify a 
> subset of the UDT's fields by any client process that is not authoritative 
> for the entire blob. 
> When trying to use UDTs to model complex records (particularly with nesting), 
> this is not an exceptional circumstance; it is the totally expected normal 
> situation. 
> The use of UDTs for anything non-trivial is harmful to either performance or 
> consistency or both.
> edit: to clarify, I believe that most potential uses of UDTs should be 
> considered anti-patterns until/unless we have field-level r/w access to 
> individual elements of the UDT, with individual timestamps and standard LWW 
> semantics


