[jira] [Created] (CASSANDRA-13377) test failure in org.apache.cassandra.service.RemoveTest.testLocalHostId-compression
Michael Shuler created CASSANDRA-13377: -- Summary: test failure in org.apache.cassandra.service.RemoveTest.testLocalHostId-compression Key: CASSANDRA-13377 URL: https://issues.apache.org/jira/browse/CASSANDRA-13377 Project: Cassandra Issue Type: Bug Reporter: Michael Shuler Attachments: jenkins-cassandra-3.11_testall-124_logs.tar.gz example failure: http://cassci.datastax.com/job/cassandra-3.11_testall/124/testReport/org.apache.cassandra.service/RemoveTest/testLocalHostId_compression {noformat} Stacktrace java.lang.NullPointerException at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:881) at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:876) at org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2275) at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1928) at org.apache.cassandra.Util.createInitialRing(Util.java:222) at org.apache.cassandra.service.RemoveTest.setup(RemoveTest.java:89) Standard Output ERROR [main] 2017-03-23 21:27:05,889 SubstituteLogger.java:250 - SLF4J: stderr INFO [main] 2017-03-23 21:27:06,238 YamlConfigurationLoader.java:89 - Configuration location: file:/home/automaton/cassandra/test/conf/cassandra.yaml DEBUG [main] 2017-03-23 21:27:06,241 YamlConfigurationLoader.java:108 - Loading settings from file:/home/automaton/cassandra/test/conf/cassandra.yaml INFO [main] 2017-03-23 21:27:07,506 Config.java:475 - Node configuration:[allocate_tokens_for_keyspace=null; authentica ...[truncated 176636 chars]... 
ain] 2017-03-23 21:27:16,054 YamlConfigurationLoader.java:108 - Loading settings from file:/home/automaton/cassandra/test/conf/cassandra.yaml DEBUG [main] 2017-03-23 21:27:16,059 StorageService.java:2171 - Node /127.0.0.5 state bootstrapping, token [31359799266797610263756179790339965311] INFO [main] 2017-03-23 21:27:16,059 StorageService.java:2184 - Node /127.0.0.5 state jump to bootstrap INFO [main] 2017-03-23 21:27:16,059 MessagingService.java:979 - Waiting for messaging service to quiesce {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
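The NullPointerException above follows a common pattern: `Gossiper.getHostId` dereferences per-endpoint gossip state that was never registered for the hand-built test ring. A minimal illustrative sketch of a guarded lookup (the map and types are simplified stand-ins, not Cassandra's actual Gossiper internals):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for the failing pattern: the lookup NPEs when an
// endpoint has no gossip state yet. A null guard surfaces a descriptive
// error instead of a bare NullPointerException deep in handleStateNormal.
public class HostIdLookup
{
    static final Map<String, UUID> hostIds = new ConcurrentHashMap<>();

    static UUID getHostId(String endpoint)
    {
        UUID id = hostIds.get(endpoint); // may be absent for a brand-new node
        if (id == null)
            throw new IllegalStateException("No gossip state for endpoint " + endpoint);
        return id;
    }
}
```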
[jira] [Created] (CASSANDRA-13376) test failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_3_x_To_indev_3_x.rolling_upgrade_with_internode_ssl_test
Michael Shuler created CASSANDRA-13376: -- Summary: test failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_3_x_To_indev_3_x.rolling_upgrade_with_internode_ssl_test Key: CASSANDRA-13376 URL: https://issues.apache.org/jira/browse/CASSANDRA-13376 Project: Cassandra Issue Type: Bug Reporter: Michael Shuler Attachments: node1_debug.log, node1_gc.log, node1.log, node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, node3.log example failure: http://cassci.datastax.com/job/cassandra-3.11_large_dtest/25/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_3_x_To_indev_3_x/rolling_upgrade_with_internode_ssl_test {noformat} Error Message Ran out of time waiting for queue size (1) to be 'le' to 0. Aborting. Stacktrace File "/usr/lib/python2.7/unittest/case.py", line 329, in run testMethod() File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py", line 291, in rolling_upgrade_with_internode_ssl_test self.upgrade_scenario(rolling=True, internode_ssl=True) File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py", line 356, in upgrade_scenario self._wait_until_queue_condition('writes pending verification', verification_queue, operator.le, 0, max_wait_s=1200) File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py", line 541, in _wait_until_queue_condition raise RuntimeError("Ran out of time waiting for queue size ({}) to be '{}' to {}. Aborting.".format(qsize, opfunc.__name__, required_len)) "Ran out of time waiting for queue size (1) to be 'le' to 0. Aborting. {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
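The dtest helper in the stacktrace (`_wait_until_queue_condition`) polls a queue size until a comparison holds or a deadline expires, then fails loudly rather than hanging. A hedged Java analogue of that bounded-wait pattern (names are hypothetical, not the dtest API):

```java
import java.util.function.BooleanSupplier;

// Sketch of a bounded condition wait: poll until the condition holds or the
// deadline passes, then raise an error mirroring the dtest's message style.
public class BoundedWait
{
    public static void waitUntil(BooleanSupplier condition, long maxWaitMillis, String what)
    {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (!condition.getAsBoolean())
        {
            if (System.currentTimeMillis() > deadline)
                throw new RuntimeException("Ran out of time waiting for " + what + ". Aborting.");
            try
            {
                Thread.sleep(50); // poll interval between checks
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
                throw new RuntimeException("interrupted while waiting for " + what, e);
            }
        }
    }
}
```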
[jira] [Commented] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941214#comment-15941214 ] Ariel Weisberg commented on CASSANDRA-13324: ||code|utests|dtests|| |[trunk|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-13324-trunk-2?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13324-trunk-2-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13324-trunk-2-dtest/1/]| > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within > an OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.
[ https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nachiket Patil updated CASSANDRA-13369: --- Fix Version/s: (was: 4.0) Status: Patch Available (was: Open) > If there are multiple values for a key, CQL grammar chooses last value. This > should not be silent or should not be allowed. > -- > > Key: CASSANDRA-13369 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13369 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Nachiket Patil >Assignee: Nachiket Patil >Priority: Minor > Attachments: 3.X.diff, trunk.diff > > > If, through CQL, multiple values are specified for a key, the grammar parses the > map and the last value for the key wins. This behavior is bad. > e.g. > {code} > CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': > 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5}; > {code} > Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even > result in loss of data. This behavior should not be silent, or should not be allowed > at all. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
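The behavior the ticket asks for amounts to rejecting a duplicate key at parse time rather than letting the last value silently win. A minimal sketch of that check over a plain string map; this is illustrative only, not the actual grammar change in the attached diffs:

```java
import java.util.Map;

// Hypothetical duplicate-key guard: fail loudly when a map literal repeats
// a key, instead of silently keeping the last value ('dc1': 5 in the example).
public class DuplicateKeyCheck
{
    public static Map<String, String> putChecked(Map<String, String> map, String key, String value)
    {
        // putIfAbsent returns the existing value when the key is already present
        if (map.putIfAbsent(key, value) != null)
            throw new IllegalArgumentException("Duplicate key in map literal: " + key);
        return map;
    }
}
```

Built this way, `{'dc1': 2, 'dc1': 5}` raises on the second `'dc1'` rather than quietly setting RF = 5.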
[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.
[ https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nachiket Patil updated CASSANDRA-13369: --- Attachment: 3.X.diff > If there are multiple values for a key, CQL grammar chooses last value. This > should not be silent or should not be allowed. > -- > > Key: CASSANDRA-13369 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13369 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Nachiket Patil >Assignee: Nachiket Patil >Priority: Minor > Fix For: 4.0 > > Attachments: 3.X.diff, trunk.diff > > > If, through CQL, multiple values are specified for a key, the grammar parses the > map and the last value for the key wins. This behavior is bad. > e.g. > {code} > CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': > 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5}; > {code} > Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even > result in loss of data. This behavior should not be silent, or should not be allowed > at all. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.
[ https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nachiket Patil updated CASSANDRA-13369: --- Attachment: trunk.diff > If there are multiple values for a key, CQL grammar chooses last value. This > should not be silent or should not be allowed. > -- > > Key: CASSANDRA-13369 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13369 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Nachiket Patil >Assignee: Nachiket Patil >Priority: Minor > Fix For: 4.0 > > Attachments: trunk.diff > > > If, through CQL, multiple values are specified for a key, the grammar parses the > map and the last value for the key wins. This behavior is bad. > e.g. > {code} > CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': > 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5}; > {code} > Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even > result in loss of data. This behavior should not be silent, or should not be allowed > at all. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Reopened] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg reopened CASSANDRA-13324: {{MessagingService.getConnectionPool()}} can now return null and there are callers not checking for null. > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within > an OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
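The reopen note means every caller of `getConnectionPool()` must now tolerate a null return when the authenticator rejects the endpoint. A hedged sketch of the caller-side guard, with simplified stand-ins (the real method lives on MessagingService and returns an OutboundTcpConnectionPool):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in: after the fix, pool lookup returns null for peers
// the internode authenticator refuses, so callers must null-check instead
// of dereferencing blindly (the unchecked pattern that caused the reopen).
public class PoolLookup
{
    static final Map<String, Object> pools = new ConcurrentHashMap<>();

    // Returns null when the authenticator refuses the endpoint.
    static Object getConnectionPool(String endpoint, boolean authenticated)
    {
        if (!authenticated)
            return null;
        return pools.computeIfAbsent(endpoint, e -> new Object());
    }

    static boolean sendOneWay(String endpoint, boolean authenticated)
    {
        Object pool = getConnectionPool(endpoint, authenticated);
        if (pool == null)   // guard instead of NPE
            return false;   // drop the message for rejected peers
        return true;
    }
}
```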
[jira] [Comment Edited] (CASSANDRA-11471) Add SASL mechanism negotiation to the native protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941051#comment-15941051 ] Ben Bromhead edited comment on CASSANDRA-11471 at 3/24/17 8:06 PM: --- Sorry for the delay, the joy of a new baby :) Addressed all the comments except one. bq. Only if encryption is optional? Basically because the authenticator can only work if the certificates are there? It seems like this can NPE? Currently getSaslNegotiator will try to get the certificate chain from the channel if client encryption is enabled and connecting on an encrypted session is not optional. This means null instead of a certificate chain will be passed in when getting the new SASL authenticator. I couldn't think of a nice way to pass the certificate chain to the authenticator but still respect the fact that there are authenticators that just don't care about them. Originally my thinking was that Optional does not appear to be used in the project and I didn't want to add even more methods to IAuthenticator. Thinking about it again, it probably just makes sense to overload newV5SaslNegotiator and not have to pass in certificates, which would reduce the chance of someone implementing a new Authenticator getting an NPE. ||4.0|| |[Branch|https://github.com/apache/cassandra/compare/trunk...benbromhead:11471]| was (Author: benbromhead): Sorry for the delay, the joy of a new baby :) Addressed all the comments except one. bq. Only if encryption is optional? Basically because the authenticator can only work if the certificates are there? It seems like this can NPE? Currently getSaslNegotiator will try to get the certificate chain from the channel if client encryption is enabled and connecting on an encrypted session is not optional. This means null instead of a certificate chain will be passed in when getting the new SASL authenticator.
I couldn't think of a nice way to pass the certificate chain to the authenticator but still respect the fact that there are authenticators that just don't care about them. Given that Optional does not appear to be used in the project, I didn't want to add even more methods to IAuthenticator. Thinking about it again, it probably just makes sense to overload newV5SaslNegotiator and not have to pass in certificates, which would reduce the chance of someone implementing a new Authenticator getting an NPE. ||4.0|| |[Branch|https://github.com/apache/cassandra/compare/trunk...benbromhead:11471]| > Add SASL mechanism negotiation to the native protocol > - > > Key: CASSANDRA-11471 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11471 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Sam Tunnicliffe >Assignee: Ben Bromhead > Labels: client-impacting > Attachments: CASSANDRA-11471 > > > Introducing an additional message exchange into the authentication sequence > would allow us to support multiple authentication schemes and [negotiation of > SASL mechanisms|https://tools.ietf.org/html/rfc4422#section-3.2]. > The current {{AUTHENTICATE}} message sent from Client to Server includes the > java classname of the configured {{IAuthenticator}}. This could be superseded > by a new message which lists the SASL mechanisms supported by the server. The > client would then respond with a new message which indicates its choice of > mechanism. This would allow the server to support multiple mechanisms, for > example enabling both {{PLAIN}} for username/password authentication and > {{EXTERNAL}} for a mechanism for extracting credentials from SSL > certificates\* (see the example in > [RFC-4422|https://tools.ietf.org/html/rfc4422#appendix-A]). Furthermore, the > server could tailor the list of supported mechanisms on a per-connection > basis, e.g. only offering certificate based auth to encrypted clients.
> The client's response should include the selected mechanism and any initial > response data. This is mechanism-specific; the {{PLAIN}} mechanism consists > of a single round in which the client sends encoded credentials as the > initial response data and the server response indicates either success or > failure with no further challenges required. > From a protocol perspective, after the mechanism negotiation the exchange > would continue as in protocol v4, with one or more rounds of > {{AUTH_CHALLENGE}} and {{AUTH_RESPONSE}} messages, terminated by an > {{AUTH_SUCCESS}} sent from Server to Client upon successful authentication or > an {{ERROR}} on auth failure. > XMPP performs mechanism negotiation in this way, > [RFC-3920|http://tools.ietf.org/html/rfc3920#section-6] includes a good > overview. > \* Note: this would require some a priori agreement
[jira] [Comment Edited] (CASSANDRA-11471) Add SASL mechanism negotiation to the native protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941051#comment-15941051 ] Ben Bromhead edited comment on CASSANDRA-11471 at 3/24/17 8:05 PM: --- Sorry for the delay, the joy of a new baby :) Addressed all the comments except one. bq. Only if encryption is optional? Basically because the authenticator can only work if the certificates are there? It seems like this can NPE? Currently getSaslNegotiator will try to get the certificate chain from the channel if client encryption is enabled and connecting on an encrypted session is not optional. This means null instead of a certificate chain will be passed in when getting the new SASL authenticator. I couldn't think of a nice way to pass the certificate chain to the authenticator but still respect the fact that there are authenticators that just don't care about them. Given that Optional does not appear to be used in the project, I didn't want to add even more methods to IAuthenticator. Thinking about it again, it probably just makes sense to overload newV5SaslNegotiator and not have to pass in certificates, which would reduce the chance of someone implementing a new Authenticator getting an NPE. ||4.0|| |[Branch|https://github.com/apache/cassandra/compare/trunk...benbromhead:11471]| was (Author: benbromhead): Addressed all the comments except one. bq. Only if encryption is optional? Basically because the authenticator can only work if the certificates are there? It seems like this can NPE? Currently getSaslNegotiator will try to get the certificate chain from the channel if client encryption is enabled and connecting on an encrypted session is not optional. This means null instead of a certificate chain will be passed in when getting the new SASL authenticator. I couldn't think of a nice way to pass the certificate chain to the authenticator but still respect the fact that there are authenticators that just don't care about them.
Given that Optional does not appear to be used in the project, I didn't want to add even more methods to IAuthenticator. Thinking about it again, it probably just makes sense to overload newV5SaslNegotiator and not have to pass in certificates, which would reduce the chance of someone implementing a new Authenticator getting an NPE. ||4.0|| |[Branch|https://github.com/apache/cassandra/compare/trunk...benbromhead:11471]| > Add SASL mechanism negotiation to the native protocol > - > > Key: CASSANDRA-11471 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11471 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Sam Tunnicliffe >Assignee: Ben Bromhead > Labels: client-impacting > Attachments: CASSANDRA-11471 > > > Introducing an additional message exchange into the authentication sequence > would allow us to support multiple authentication schemes and [negotiation of > SASL mechanisms|https://tools.ietf.org/html/rfc4422#section-3.2]. > The current {{AUTHENTICATE}} message sent from Client to Server includes the > java classname of the configured {{IAuthenticator}}. This could be superseded > by a new message which lists the SASL mechanisms supported by the server. The > client would then respond with a new message which indicates its choice of > mechanism. This would allow the server to support multiple mechanisms, for > example enabling both {{PLAIN}} for username/password authentication and > {{EXTERNAL}} for a mechanism for extracting credentials from SSL > certificates\* (see the example in > [RFC-4422|https://tools.ietf.org/html/rfc4422#appendix-A]). Furthermore, the > server could tailor the list of supported mechanisms on a per-connection > basis, e.g. only offering certificate based auth to encrypted clients. > The client's response should include the selected mechanism and any initial > response data.
This is mechanism-specific; the {{PLAIN}} mechanism consists > of a single round in which the client sends encoded credentials as the > initial response data and the server response indicates either success or > failure with no further challenges required. > From a protocol perspective, after the mechanism negotiation the exchange > would continue as in protocol v4, with one or more rounds of > {{AUTH_CHALLENGE}} and {{AUTH_RESPONSE}} messages, terminated by an > {{AUTH_SUCCESS}} sent from Server to Client upon successful authentication or > an {{ERROR}} on auth failure. > XMPP performs mechanism negotiation in this way, > [RFC-3920|http://tools.ietf.org/html/rfc3920#section-6] includes a good > overview. > \* Note: this would require some a priori agreement between client and server > over the implementation of the
[jira] [Commented] (CASSANDRA-11471) Add SASL mechanism negotiation to the native protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941051#comment-15941051 ] Ben Bromhead commented on CASSANDRA-11471: -- Addressed all the comments except one. bq. Only if encryption is optional? Basically because the authenticator can only work if the certificates are there? It seems like this can NPE? Currently getSaslNegotiator will try to get the certificate chain from the channel if client encryption is enabled and connecting on an encrypted session is not optional. This means null instead of a certificate chain will be passed in when getting the new SASL authenticator. I couldn't think of a nice way to pass the certificate chain to the authenticator but still respect the fact that there are authenticators that just don't care about them. Given that Optional does not appear to be used in the project, I didn't want to add even more methods to IAuthenticator. Thinking about it again, it probably just makes sense to overload newV5SaslNegotiator and not have to pass in certificates, which would reduce the chance of someone implementing a new Authenticator getting an NPE. ||4.0|| |[Branch|https://github.com/apache/cassandra/compare/trunk...benbromhead:11471]| > Add SASL mechanism negotiation to the native protocol > - > > Key: CASSANDRA-11471 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11471 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Sam Tunnicliffe >Assignee: Ben Bromhead > Labels: client-impacting > Attachments: CASSANDRA-11471 > > > Introducing an additional message exchange into the authentication sequence > would allow us to support multiple authentication schemes and [negotiation of > SASL mechanisms|https://tools.ietf.org/html/rfc4422#section-3.2]. > The current {{AUTHENTICATE}} message sent from Client to Server includes the > java classname of the configured {{IAuthenticator}}.
This could be superseded > by a new message which lists the SASL mechanisms supported by the server. The > client would then respond with a new message which indicates its choice of > mechanism. This would allow the server to support multiple mechanisms, for > example enabling both {{PLAIN}} for username/password authentication and > {{EXTERNAL}} for a mechanism for extracting credentials from SSL > certificates\* (see the example in > [RFC-4422|https://tools.ietf.org/html/rfc4422#appendix-A]). Furthermore, the > server could tailor the list of supported mechanisms on a per-connection > basis, e.g. only offering certificate based auth to encrypted clients. > The client's response should include the selected mechanism and any initial > response data. This is mechanism-specific; the {{PLAIN}} mechanism consists > of a single round in which the client sends encoded credentials as the > initial response data and the server response indicates either success or > failure with no further challenges required. > From a protocol perspective, after the mechanism negotiation the exchange > would continue as in protocol v4, with one or more rounds of > {{AUTH_CHALLENGE}} and {{AUTH_RESPONSE}} messages, terminated by an > {{AUTH_SUCCESS}} sent from Server to Client upon successful authentication or > an {{ERROR}} on auth failure. > XMPP performs mechanism negotiation in this way, > [RFC-3920|http://tools.ietf.org/html/rfc3920#section-6] includes a good > overview. > \* Note: this would require some a priori agreement between client and server > over the implementation of the {{EXTERNAL}} mechanism. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
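For reference, the single-round {{PLAIN}} exchange described in the ticket carries the credentials in one initial response, formatted per RFC 4616 as authzid NUL authcid NUL password. A small sketch of that encoding (illustrative only, not Cassandra's SaslNegotiator API):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the PLAIN mechanism's initial response per RFC 4616:
// authzid NUL authcid NUL password, all UTF-8. The authzid is usually
// empty, delegating authorization identity to the server.
public class PlainInitialResponse
{
    public static byte[] encode(String authzid, String authcid, String password)
    {
        byte[] z = authzid.getBytes(StandardCharsets.UTF_8);
        byte[] c = authcid.getBytes(StandardCharsets.UTF_8);
        byte[] p = password.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[z.length + c.length + p.length + 2];
        int i = 0;
        System.arraycopy(z, 0, out, i, z.length); i += z.length;
        out[i++] = 0; // NUL separator
        System.arraycopy(c, 0, out, i, c.length); i += c.length;
        out[i++] = 0; // NUL separator
        System.arraycopy(p, 0, out, i, p.length);
        return out;
    }
}
```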
[jira] [Updated] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13324: --- Resolution: Fixed Status: Resolved (was: Ready to Commit) Committed as [732d1af866b91e5ba63e7e2a467d99d4cb90e11f|https://github.com/apache/cassandra/commit/732d1af866b91e5ba63e7e2a467d99d4cb90e11f] > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within > an OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13324: --- Status: Ready to Commit (was: Patch Available) > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within > an OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
cassandra git commit: Outbound TCP connections should consult internode authenticator. Patch by Ariel Weisberg; Reviewed by Marcus Eriksson for CASSANDRA-13324
Repository: cassandra Updated Branches: refs/heads/trunk 60e2e9826 -> 732d1af86 Outbound TCP connections should consult internode authenticator. Patch by Ariel Weisberg; Reviewed by Marcus Eriksson for CASSANDRA-13324 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/732d1af8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/732d1af8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/732d1af8 Branch: refs/heads/trunk Commit: 732d1af866b91e5ba63e7e2a467d99d4cb90e11f Parents: 60e2e98 Author: Ariel Weisberg Authored: Fri Mar 24 15:26:50 2017 -0400 Committer: Ariel Weisberg Committed: Fri Mar 24 15:26:50 2017 -0400 -- CHANGES.txt | 1 + .../org/apache/cassandra/auth/AuthConfig.java | 10 +--- .../cassandra/config/DatabaseDescriptor.java| 5 +- .../locator/ReconnectableSnitchHelper.java | 21 +-- .../apache/cassandra/net/MessagingService.java | 44 -- .../cassandra/net/OutboundTcpConnection.java| 33 +++--- .../net/OutboundTcpConnectionPool.java | 9 ++- .../config/DatabaseDescriptorRefTest.java | 1 + .../locator/ReconnectableSnitchHelperTest.java | 63 .../cassandra/net/MessagingServiceTest.java | 60 +++ 10 files changed, 218 insertions(+), 29 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/732d1af8/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index fb9b8c4..b42bde6 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 4.0 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324) * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360) * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) * Incremental repair not streaming correct sstables (CASSANDRA-13328) http://git-wip-us.apache.org/repos/asf/cassandra/blob/732d1af8/src/java/org/apache/cassandra/auth/AuthConfig.java -- diff --git a/src/java/org/apache/cassandra/auth/AuthConfig.java b/src/java/org/apache/cassandra/auth/AuthConfig.java index c389ae4..2ca1522 100644 ---
a/src/java/org/apache/cassandra/auth/AuthConfig.java +++ b/src/java/org/apache/cassandra/auth/AuthConfig.java @@ -25,6 +25,7 @@ import org.apache.cassandra.config.Config; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.utils.FBUtilities; +import org.hsqldb.Database; /** * Only purpose is to Initialize authentication/authorization via {@link #applyAuth()}. @@ -94,13 +95,8 @@ public final class AuthConfig // authenticator -IInternodeAuthenticator internodeAuthenticator; if (conf.internode_authenticator != null) -internodeAuthenticator = FBUtilities.construct(conf.internode_authenticator, "internode_authenticator"); -else -internodeAuthenticator = new AllowAllInternodeAuthenticator(); - -DatabaseDescriptor.setInternodeAuthenticator(internodeAuthenticator); + DatabaseDescriptor.setInternodeAuthenticator(FBUtilities.construct(conf.internode_authenticator, "internode_authenticator")); // Validate at last to have authenticator, authorizer, role-manager and internode-auth setup // in case these rely on each other. 
@@ -108,6 +104,6 @@ public final class AuthConfig authenticator.validateConfiguration(); authorizer.validateConfiguration(); roleManager.validateConfiguration(); -internodeAuthenticator.validateConfiguration(); +DatabaseDescriptor.getInternodeAuthenticator().validateConfiguration(); } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/732d1af8/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java index 4fb742c..465cd8a 100644 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@ -29,6 +29,7 @@ import java.nio.file.Paths; import java.util.*; import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Preconditions; import com.google.common.collect.ImmutableSet; import com.google.common.primitives.Ints; import com.google.common.primitives.Longs; @@ -36,6 +37,7 @@ import com.google.common.primitives.Longs; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import
[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13370: --- Resolution: Fixed Status: Resolved (was: Ready to Commit) > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS (El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest to fail: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at >
sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest reverting the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
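The underlying failure is that the JDK's NativePRNG implements setSeed() by writing the seed bytes to /dev/urandom, which macOS forbids. Independent of the build.xml option discussed above, one hedged workaround sketch is to request an algorithm that mixes the seed in user space; SHA1PRNG availability is JDK-specific, hence the fallback:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Sketch only: SHA1PRNG mixes setSeed() input in user space, so it avoids
// the write to /dev/urandom that NativePRNG performs (and macOS rejects).
// Whether SHA1PRNG exists is JDK-dependent, so fall back to the default.
public class SeedSafeRandom
{
    public static SecureRandom create()
    {
        try
        {
            return SecureRandom.getInstance("SHA1PRNG");
        }
        catch (NoSuchAlgorithmException e)
        {
            return new SecureRandom(); // platform default (may be NativePRNG)
        }
    }
}
```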
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940785#comment-15940785 ] Ariel Weisberg commented on CASSANDRA-13370: Committed as [ee7023e324cdd3b3442b04ad4b0b1f4b33921d35|https://github.com/apache/cassandra/commit/ee7023e324cdd3b3442b04ad4b0b1f4b33921d35] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[1/3] cassandra git commit: Avoid seeding /dev/urandom on OS X by specifying SHA1PRNG in CipherFactoryTest.
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 a10b8079e -> ee7023e32 refs/heads/trunk 3048608c6 -> 60e2e9826 Avoid seeding /dev/urandom on OS X by specifying SHA1PRNG in CipherFactoryTest. Patch by Jay Zhuang; Reviewed by Ariel Weisberg for CASSANDRA-13370 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee7023e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee7023e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee7023e3 Branch: refs/heads/cassandra-3.11 Commit: ee7023e324cdd3b3442b04ad4b0b1f4b33921d35 Parents: a10b807 Author: Jay ZhuangAuthored: Fri Mar 24 13:08:50 2017 -0400 Committer: Ariel Weisberg Committed: Fri Mar 24 13:08:50 2017 -0400 -- CHANGES.txt| 1 + .../cassandra/security/CipherFactoryTest.java | 17 - 2 files changed, 17 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee7023e3/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 8b13109..071dd1a 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * unittest CipherFactoryTest failed on MacOS (CASSANDRA-13370) * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns (CASSANDRA-13247) * Default logging we ship will incorrectly print "?:?" 
for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee7023e3/test/unit/org/apache/cassandra/security/CipherFactoryTest.java -- diff --git a/test/unit/org/apache/cassandra/security/CipherFactoryTest.java b/test/unit/org/apache/cassandra/security/CipherFactoryTest.java index 4ba265e..29302b7 100644 --- a/test/unit/org/apache/cassandra/security/CipherFactoryTest.java +++ b/test/unit/org/apache/cassandra/security/CipherFactoryTest.java @@ -21,6 +21,7 @@ package org.apache.cassandra.security; import java.io.IOException; +import java.security.NoSuchAlgorithmException; import java.security.SecureRandom; import javax.crypto.BadPaddingException; @@ -34,6 +35,9 @@ import org.junit.Test; import org.apache.cassandra.config.TransparentDataEncryptionOptions; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.fail; + public class CipherFactoryTest { // http://www.gutenberg.org/files/4300/4300-h/4300-h.htm @@ -47,7 +51,18 @@ public class CipherFactoryTest @Before public void setup() { -secureRandom = new SecureRandom(new byte[] {0,1,2,3,4,5,6,7,8,9} ); +try +{ +secureRandom = SecureRandom.getInstance("SHA1PRNG"); +assertNotNull(secureRandom.getProvider()); +} +catch (NoSuchAlgorithmException e) +{ +fail("NoSuchAlgorithmException: SHA1PRNG not found."); +} +long seed = new java.util.Random().nextLong(); +System.out.println("Seed: " + seed); +secureRandom.setSeed(seed); encryptionOptions = EncryptionContextGenerator.createEncryptionOptions(); cipherFactory = new CipherFactory(encryptionOptions); }
[jira] [Commented] (CASSANDRA-13077) Add helper comments for unittest debug
[ https://issues.apache.org/jira/browse/CASSANDRA-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940763#comment-15940763 ] Jay Zhuang commented on CASSANDRA-13077: Hi Stefan, there could be tests that only fail remotely (for example, CASSANDRA-12453 and CASSANDRA-13151), but we may still want to debug them remotely with a local IDE. There may also be tests that only fail on Windows, for example. > Add helper comments for unittest debug > -- > > Key: CASSANDRA-13077 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13077 > Project: Cassandra > Issue Type: Improvement > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > Fix For: 3.0.x > > Attachments: 13077-3.0.txt > > > Not an issue. Just add comments for future unittest debug. I find it useful, > hope it could be merged: > [13077-3.0.patch|https://github.com/cooldoger/cassandra/commit/91c98248a7c3427a22b564fc9e86691e7a923b95] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60e2e982 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60e2e982 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60e2e982 Branch: refs/heads/trunk Commit: 60e2e982656e9ba495107ffe8f338223e1196b4a Parents: 3048608 ee7023e Author: Ariel WeisbergAuthored: Fri Mar 24 13:12:05 2017 -0400 Committer: Ariel Weisberg Committed: Fri Mar 24 13:12:05 2017 -0400 -- CHANGES.txt| 1 + .../cassandra/security/CipherFactoryTest.java | 17 - 2 files changed, 17 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/60e2e982/CHANGES.txt -- diff --cc CHANGES.txt index 1eab15b,071dd1a..fb9b8c4 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,51 -1,5 +1,52 @@@ +4.0 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360) + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) + * Incremental repair not streaming correct sstables (CASSANDRA-13328) + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300) + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID functions (CASSANDRA-13132) + * Remove config option index_interval (CASSANDRA-10671) + * Reduce lock contention for collection types and serializers (CASSANDRA-13271) + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283) + * Avoid synchronized on prepareForRepair in ActiveRepairService (CASSANDRA-9292) + * Adds the ability to use uncompressed chunks in compressed files (CASSANDRA-10520) + * Don't flush sstables when streaming for incremental repair (CASSANDRA-13226) + * Remove unused method (CASSANDRA-13227) + * Fix minor bugs related to #9143 (CASSANDRA-13217) + * Output warning if user increases RF (CASSANDRA-13079) + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081) + * Add support for + and - operations on dates (CASSANDRA-11936) + * Fix consistency of 
incrementally repaired data (CASSANDRA-9143) + * Increase commitlog version (CASSANDRA-13161) + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425) + * Refactor ColumnCondition (CASSANDRA-12981) + * Parallelize streaming of different keyspaces (CASSANDRA-4663) + * Improved compactions metrics (CASSANDRA-13015) + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031) + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855) + * Thrift removal (CASSANDRA-5) + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716) + * Add column definition kind to dropped columns in schema (CASSANDRA-12705) + * Add (automate) Nodetool Documentation (CASSANDRA-12672) + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422) + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080) + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084) + * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510) + * Allow IN restrictions on column families with collections (CASSANDRA-12654) + * Log message size in trace message in OutboundTcpConnection (CASSANDRA-13028) + * Add timeUnit Days for cassandra-stress (CASSANDRA-13029) + * Add mutation size and batch metrics (CASSANDRA-12649) + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999) + * Expose time spent waiting in thread pool queue (CASSANDRA-8398) + * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969) + * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946) + * Add support for arithmetic operators (CASSANDRA-11935) + * Add histogram for delay to deliver hints (CASSANDRA-13234) + + 3.11.0 + * unittest CipherFactoryTest failed on MacOS 
(CASSANDRA-13370) * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns (CASSANDRA-13247) * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366)
[2/3] cassandra git commit: Avoid seeding /dev/urandom on OS X by specifying SHA1PRNG in CipherFactoryTest.
Avoid seeding /dev/urandom on OS X by specifying SHA1PRNG in CipherFactoryTest. Patch by Jay Zhuang; Reviewed by Ariel Weisberg for CASSANDRA-13370 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee7023e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee7023e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee7023e3 Branch: refs/heads/trunk Commit: ee7023e324cdd3b3442b04ad4b0b1f4b33921d35 Parents: a10b807 Author: Jay Zhuang Authored: Fri Mar 24 13:08:50 2017 -0400 Committer: Ariel Weisberg Committed: Fri Mar 24 13:08:50 2017 -0400 -- CHANGES.txt| 1 + .../cassandra/security/CipherFactoryTest.java | 17 - 2 files changed, 17 insertions(+), 1 deletion(-)
[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13370: --- Status: Ready to Commit (was: Patch Available) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940739#comment-15940739 ] Jay Zhuang commented on CASSANDRA-13370: [~aweisberg] yes, sure. Thanks for improving that. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13289: --- Status: Patch Available (was: In Progress) > Make it possible to monitor an ideal consistency level separate from actual > consistency level > - > > Key: CASSANDRA-13289 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13289 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > Fix For: 4.0 > > > As an operator there are several issues related to multi-datacenter > replication and consistency you may want to have more information on from > your production database. > For instance: if your application writes at LOCAL_QUORUM, how often do those > writes fail to achieve EACH_QUORUM at other data centers? If you failed > your application over to one of those data centers, roughly how inconsistent > might it be, given the number of writes that didn't propagate since the last > incremental repair? > You might also want to know roughly what the latency of writes would be if > you switched to a different consistency level. For instance, you are writing > at LOCAL_QUORUM and want to know what would happen if you switched to > EACH_QUORUM. > The proposed change is to allow an ideal_consistency_level to be specified in > cassandra.yaml as well as get/set via JMX. If no ideal consistency level is > specified, no additional tracking is done. > If an ideal consistency level is specified, then the > {{AbstractWriteResponseHandler}} will contain a delegate WriteResponseHandler > that tracks whether the ideal consistency level is met before a write times > out. It also tracks the latency for achieving the ideal CL of successful > writes. > These two metrics would be reported on a per-keyspace basis. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
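The delegate-tracking idea above can be sketched roughly as follows. This is a hypothetical simplification, not the actual patch and not Cassandra's real WriteResponseHandler API: a side tracker counts replica acks and records the latency at which the ideal consistency level was satisfied, without affecting the real write path:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IdealClTracker {
    private final int idealResponses;          // acks required for the ideal CL
    private final AtomicInteger received = new AtomicInteger();
    private final long startNanos = System.nanoTime();
    private volatile long achievedNanos = -1;  // latency when the ideal CL was met

    public IdealClTracker(int idealResponses) {
        this.idealResponses = idealResponses;
    }

    // Called once per replica ack, alongside the real response handler;
    // never blocks or fails the actual write.
    public void onResponse() {
        if (received.incrementAndGet() == idealResponses)
            achievedNanos = System.nanoTime() - startNanos;
    }

    public boolean idealAchieved() {
        return achievedNanos >= 0;
    }

    public long idealLatencyNanos() {
        return achievedNanos;
    }

    public static void main(String[] args) {
        IdealClTracker t = new IdealClTracker(3); // e.g. an ideal CL needing 3 acks
        t.onResponse();
        t.onResponse();
        System.out.println(t.idealAchieved()); // prints "false"
        t.onResponse();
        System.out.println(t.idealAchieved()); // prints "true"
    }
}
```

Per keyspace, the two proposed metrics would then be the count of writes where idealAchieved() was still false at timeout, and a latency histogram fed from idealLatencyNanos() on success.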
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940724#comment-15940724 ] Ariel Weisberg commented on CASSANDRA-13370: [~jay.zhuang] does this work for you? I don't want to rewrite your patch without running it by you. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928274#comment-15928274 ] Ariel Weisberg edited comment on CASSANDRA-13324 at 3/24/17 4:55 PM: - ||code|utests|dtests|| |[trunk|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-13324-trunk?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13324-trunk-testall/6/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13324-trunk-dtest/5/]| was (Author: aweisberg): ||code|utests|dtests|| |[trunk|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-13324-trunk?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13324-trunk-testall/5/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13324-trunk-dtest/5/]| > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within an > OutboundTcpConnection, it doesn't check whether the internode authenticator > will allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
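The shape of the fix can be sketched like this. The interface and method names here are simplified stand-ins, not Cassandra's actual internode-authenticator API: consult the authenticator before creating an outbound connection, so no reconnect loop is ever started for an endpoint that will never be accepted:

```java
import java.net.InetAddress;

public class OutboundConnectDemo {
    // Simplified stand-in for an internode authenticator interface
    interface InternodeAuthenticator {
        boolean authenticate(InetAddress remote, int port);
    }

    // Check the authenticator up front; returning false means the caller
    // should not create the connection (or spawn a reconnect thread) at all.
    static boolean shouldConnect(InternodeAuthenticator auth, InetAddress peer, int port) {
        if (!auth.authenticate(peer, port)) {
            // Skipping this check is what left orphaned threads retrying forever
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        InternodeAuthenticator denyAll = (addr, port) -> false;
        InternodeAuthenticator allowAll = (addr, port) -> true;
        InetAddress peer = InetAddress.getLoopbackAddress();
        System.out.println(shouldConnect(denyAll, peer, 7000));  // prints "false"
        System.out.println(shouldConnect(allowAll, peer, 7000)); // prints "true"
    }
}
```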
[jira] [Updated] (CASSANDRA-13197) +=/-= shortcut syntax bugs/inconsistencies
[ https://issues.apache.org/jira/browse/CASSANDRA-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-13197: Reviewer: Andrés de la Peña Status: Open (was: Patch Available) > +=/-= shortcut syntax bugs/inconsistencies > -- > > Key: CASSANDRA-13197 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13197 > Project: Cassandra > Issue Type: Bug >Reporter: Kishan Karunaratne >Assignee: Alex Petrov > > CASSANDRA-12232 introduced (+=/-=) shortcuts for counters and collection > types. I ran into some bugs/inconsistencies. > Given the schema: > {noformat} > CREATE TABLE simplex.collection_table (k int PRIMARY KEY, d_l List<int>, d_s > Set<int>, d_m Map<int, int>, d_t Tuple<int>); > {noformat} > 1) Using -= on a list column removes all elements that match the value, > instead of the first or last occurrence of it. Is this expected? > {noformat} > Given d_l = [0, 1, 2, 1, 1] > UPDATE collection_table SET d_l -= [1] WHERE k=0; > yields > [0, 2] > {noformat} > 2) I can't seem to remove a map key/value pair: > {noformat} > Given d_m = {0: 0, 1: 1} > UPDATE collection_table SET d_m -= {1:1} WHERE k=0; > yields > Invalid map literal for d_m of type frozen > {noformat} > However {noformat}UPDATE collection_table SET d_m -= {1} WHERE k=0;{noformat} > does work. > 3) Tuples are immutable so it makes sense that +=/-= doesn't apply. However > the error message could be better, now that other collection types are > allowed: > {noformat} > UPDATE collection_table SET d_t += (1) WHERE k=0; > yields > Invalid operation (d_t = d_t + (1)) for non counter column d_t > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
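For point (1) above, the observed behavior matches the semantics of java.util.List.removeAll, where every matching element is removed rather than a single occurrence. A plain-Java sketch (illustrative only, not Cassandra code) of the same operation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListMinusDemo {
    public static void main(String[] args) {
        // Mirrors the report: d_l = [0, 1, 2, 1, 1]; SET d_l -= [1]
        List<Integer> dl = new ArrayList<>(Arrays.asList(0, 1, 2, 1, 1));
        dl.removeAll(Arrays.asList(1)); // removes ALL occurrences of 1
        System.out.println(dl); // prints "[0, 2]"
    }
}
```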
[jira] [Updated] (CASSANDRA-13197) +=/-= shortcut syntax bugs/inconsistencies
[ https://issues.apache.org/jira/browse/CASSANDRA-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-13197: Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13337) Dropping column results in "corrupt" SSTable
[ https://issues.apache.org/jira/browse/CASSANDRA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940589#comment-15940589 ] Alex Petrov commented on CASSANDRA-13337: - Sorry for the delay, I have somehow missed the notification. The new patch looks great, thanks for taking care of it! +1 > Dropping column results in "corrupt" SSTable > > > Key: CASSANDRA-13337 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13337 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Jonas Borgström >Assignee: Sylvain Lebresne > Fix For: 3.0.x, 3.11.x > > > It seems like dropping a column can make SSTables containing rows with writes > to only the dropped column become uncompactable. > Also, Cassandra <= 3.9 and <= 3.0.11 will even refuse to start, with the same > stack trace: > {code} > cqlsh -e "create keyspace test with replication = { 'class' : > 'SimpleStrategy', 'replication_factor' : 1 }" > cqlsh -e "create table test.test(pk text primary key, x text, y text)" > cqlsh -e "update test.test set x='1' where pk='1'" > nodetool flush > cqlsh -e "update test.test set x='1', y='1' where pk='1'" > nodetool flush > cqlsh -e "alter table test.test drop x" > nodetool compact test test > error: Corrupt empty row found in unfiltered partition > -- StackTrace -- > java.io.IOException: Corrupt empty row found in unfiltered partition > at > org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:382) > at > org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:87) > at > org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:65) > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > at > org.apache.cassandra.io.sstable.SSTableIdentityIterator.doCompute(SSTableIdentityIterator.java:123) > at > 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100) > at > org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30) > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > at > org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) > at > org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > at > org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158) > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > at > org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:509) > at > org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:369) > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) > at > org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58) > at > org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:67) > at > org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:26) > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > at > org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:227) > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:190) > at > 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) > at > org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:610) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at >
[jira] [Updated] (CASSANDRA-13337) Dropping column results in "corrupt" SSTable
[ https://issues.apache.org/jira/browse/CASSANDRA-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-13337: Status: Ready to Commit (was: Patch Available)
[jira] [Commented] (CASSANDRA-13077) Add helper comments for unittest debug
[ https://issues.apache.org/jira/browse/CASSANDRA-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940519#comment-15940519 ] Stefan Podkowinski commented on CASSANDRA-13077: What is the potential use case behind this? Why would you want to debug unit tests this way, instead of debugging a test using an IDE? > Add helper comments for unittest debug > -- > > Key: CASSANDRA-13077 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13077 > Project: Cassandra > Issue Type: Improvement > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > Fix For: 3.0.x > > Attachments: 13077-3.0.txt > > > Not an issue. Just add comments for future unittest debug. I find it useful, > hope it could be merged: > [13077-3.0.patch|https://github.com/cooldoger/cassandra/commit/91c98248a7c3427a22b564fc9e86691e7a923b95] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13373) Provide additional speculative retry statistics
[ https://issues.apache.org/jira/browse/CASSANDRA-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13373: --- Status: Patch Available (was: Open) > Provide additional speculative retry statistics > --- > > Key: CASSANDRA-13373 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13373 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > Fix For: 4.x > > > Right now there is a single metric for speculative retry on reads that is the > number of speculative retries attempted. You can't tell how many of those > actually succeeded in salvaging the read. > The metric is also per table and there is no keyspace level rollup as there > is for several other metrics. > Add a metric that counts reads that attempt to speculate but fail to complete > before the timeout (ignoring read errors). > Add a rollup metric for the current count of speculation attempts as well as > the count of failed speculations. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs
[ https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940358#comment-15940358 ] Stefan Podkowinski commented on CASSANDRA-13368: If you add something like {{logger.error("ERROR {}", "TEST", new RuntimeException());}} to CassandraDaemon.setup() and start Cassandra, you should notice that the stacktrace is printed just fine. Are you using the logback.xml from the vanilla Apache debian package? > Exception Stack not Printed as Intended in Error Logs > - > > Key: CASSANDRA-13368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13368 > Project: Cassandra > Issue Type: Bug >Reporter: William R. Speirs >Priority: Trivial > Labels: lhf > Fix For: 2.1.x > > Attachments: cassandra-13368-2.1.patch > > > There are a number of instances where it appears the programmer intended to > print a stack trace in an error message, but it is not actually being > printed. For example, in {{BlacklistedDirectories.java:54}}: > {noformat} > catch (Exception e) > { > JVMStabilityInspector.inspectThrowable(e); > logger.error("error registering MBean {}", MBEAN_NAME, e); > //Allow the server to start even if the bean can't be registered > } > {noformat} > The logger will use the second argument for the braces, but will ignore the > exception {{e}}. It would be helpful to have the stack traces of these > exceptions printed. 
I propose adding a second line that prints the full stack > trace: {{logger.error(e.getMessage(), e);}} > On the 2.1 branch, I found 8 instances of these types of messages: > {noformat} > db/BlacklistedDirectories.java:54:logger.error("error registering > MBean {}", MBEAN_NAME, e); > io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable > {}; skipped", descriptor, e); > net/OutboundTcpConnection.java:228:logger.error("error > processing a message intended for {}", poolReference.endPoint(), e); > net/OutboundTcpConnection.java:314:logger.error("error > writing to {}", poolReference.endPoint(), e); > service/CassandraDaemon.java:231:logger.error("Exception in > thread {}", t, e); > service/CassandraDaemon.java:562:logger.error("error > registering MBean {}", MBEAN_NAME, e); > streaming/StreamSession.java:512:logger.error("[Stream #{}] > Streaming error occurred", planId(), e); > transport/Server.java:442:logger.error("Problem retrieving > RPC address for {}", endpoint, e); > {noformat} > And one where it'll print the {{toString()}} version of the exception: > {noformat} > db/Directories.java:689:logger.error("Could not calculate the > size of {}. {}", input, e); > {noformat} > I'm happy to create a patch for each branch, just need a little guidance on > how to do so. We're currently running 2.1 so I started there. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
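[Editorial note] For context on the discussion above: SLF4J's parameterized logging already treats a trailing {{Throwable}} specially. When the last argument is a {{Throwable}} that is left over after all {} placeholders are filled, it is logged as the exception, stack trace included, rather than substituted into the message, which is why {{logger.error("error registering MBean {}", MBEAN_NAME, e)}} does print the trace. A minimal self-contained sketch of that convention (the {{split}} helper below is hypothetical, not SLF4J's actual {{MessageFormatter}}):

```java
import java.util.Arrays;

public class TrailingThrowableDemo {
    /**
     * Sketch of SLF4J's convention: a Throwable that is the last argument
     * and is NOT consumed by a "{}" placeholder becomes the logged
     * exception; the remaining args fill the message.
     * Returns {formattedMessage, throwableOrNull}.
     */
    static Object[] split(String pattern, Object... args) {
        // Count "{}" placeholders in the pattern.
        int placeholders = pattern.split("\\{\\}", -1).length - 1;
        Throwable t = null;
        if (args.length > placeholders && args.length > 0
                && args[args.length - 1] instanceof Throwable) {
            t = (Throwable) args[args.length - 1];
            args = Arrays.copyOf(args, args.length - 1);
        }
        // Substitute the remaining args into the placeholders.
        StringBuilder msg = new StringBuilder();
        int arg = 0, from = 0, at;
        while ((at = pattern.indexOf("{}", from)) >= 0 && arg < args.length) {
            msg.append(pattern, from, at).append(args[arg++]);
            from = at + 2;
        }
        msg.append(pattern.substring(from));
        return new Object[] { msg.toString(), t };
    }

    public static void main(String[] unused) {
        RuntimeException e = new RuntimeException("boom");
        // One placeholder, two extra args: "TEST" fills {}, e becomes the exception.
        Object[] r = split("ERROR {}", "TEST", e);
        System.out.println(r[0] + " / throwable=" + (r[1] != null));
        // prints: ERROR TEST / throwable=true
    }
}
```

Under this convention the second {{logger.error(e.getMessage(), e)}} line proposed above would not be needed; the existing calls already carry the stack trace.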
[jira] [Comment Edited] (CASSANDRA-13329) max_hints_delivery_threads does not work
[ https://issues.apache.org/jira/browse/CASSANDRA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940242#comment-15940242 ] Aleksandr Sorokoumov edited comment on CASSANDRA-13329 at 3/24/17 1:45 PM: --- {{JMXEnabledThreadPoolExecutor}} (used by {{HintsDispatchExecutor}} and {{PerSSTableIndexWriter}}) extends {{DebuggableThreadPoolExecutor}}. According to the docs on {{DebuggableThreadPoolExecutor}} it works in the following way: 1. If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. 2. If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread. 3. If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected. In both {{HintsDispatchExecutor}} and {{PerSSTableIndexWriter}}, {{JMXEnabledThreadPoolExecutor}} is constructed with corePoolSize equal to 1, maximumPoolSize equal to some constant and a work queue being unbounded {{LinkedBlockingQueue}}. In that setup when there are no tasks running, the new incoming task will add a thread to the pool (according to #1). However, because the queue is unbounded, according to #2 all the consequent tasks will be added to the queue instead of adding threads to the pool. Having corePoolSize equal to maximumPoolSize solves the problem because then the pool will maintain maximumPoolSize threads and submit tasks to them before queueing. *Link to the branch*: https://github.com/Ge/cassandra/tree/13329-3.10 was (Author: ge): {{JMXEnabledThreadPoolExecutor}} (used by {{HintsDispatchExecutor}} and {{PerSSTableIndexWriter}}) extends {{DebuggableThreadPoolExecutor}}. According to the docs on {{DebuggableThreadPoolExecutor}} it works in the following way: 1. If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. 2. 
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread. 3. If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected. In both {{HintsDispatchExecutor}} and {{PerSSTableIndexWriter}}, {{JMXEnabledThreadPoolExecutor}} is constructed with corePoolSize equal to 1, maximumPoolSize equal to some constant and a work queue being unbounded {{LinkedBlockingQueue}}. In that setup when there are no tasks running, the new incoming task will add a thread to the pool (according to #1). However, because the queue is unbounded, according to #2 all the consequent tasks will be added to the queue instead of adding threads to the pool. Having corePoolSize equal to maximumPoolSize solves the problem because then the pool will maintain maximumPoolSize threads and submit tasks to them before queueing. *Link to the branch*: https://github.com/Ge/cassandra/tree/13186-3.10 > max_hints_delivery_threads does not work > > > Key: CASSANDRA-13329 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13329 > Project: Cassandra > Issue Type: Bug >Reporter: Fuud >Assignee: Aleksandr Sorokoumov > Labels: lhf > > HintsDispatchExecutor creates JMXEnabledThreadPoolExecutor with corePoolSize > == 1 and maxPoolSize==max_hints_delivery_threads and unbounded > LinkedBlockingQueue. > In this configuration additional threads will not be created. > Same problem with PerSSTableIndexWriter. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
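[Editorial note] The three rules quoted above are standard {{java.util.concurrent.ThreadPoolExecutor}} semantics and are easy to demonstrate outside Cassandra. The sketch below (class and method names are illustrative) blocks every task on a latch, submits more tasks than {{maximumPoolSize}}, and shows that with an unbounded queue the pool never grows past {{corePoolSize}}:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    /** Submits 'tasks' blocking tasks and reports how many threads were created. */
    static int threadsCreated(int corePoolSize, int maximumPoolSize, int tasks)
            throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                corePoolSize, maximumPoolSize, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());           // unbounded queue
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++)
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        // Worker threads are started synchronously inside execute(), so the
        // pool size is stable at this point even though the tasks are blocked.
        int created = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return created;
    }

    public static void main(String[] args) throws InterruptedException {
        // Unbounded queue: tasks 2..8 are queued instead of creating threads.
        System.out.println(threadsCreated(1, 4, 8));    // prints 1
        // corePoolSize == maximumPoolSize (the proposed fix): all 4 threads start.
        System.out.println(threadsCreated(4, 4, 8));    // prints 4
    }
}
```

This matches the fix described above: with {{corePoolSize == maximumPoolSize}} the pool maintains the full thread count and only then queues.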
[jira] [Updated] (CASSANDRA-13277) Duplicate results with secondary index on static column
[ https://issues.apache.org/jira/browse/CASSANDRA-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-13277: --- Reviewer: Benjamin Lerer > Duplicate results with secondary index on static column > --- > > Key: CASSANDRA-13277 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13277 > Project: Cassandra > Issue Type: Bug >Reporter: Romain Hardouin >Assignee: Andrés de la Peña > Labels: 2i > > As a follow up of > http://www.mail-archive.com/user@cassandra.apache.org/msg50816.html > Duplicate results appear with secondary index on static column with RF > 1. > Number of results vary depending on consistency level. > Here is a CCM session to reproduce the issue: > {code} > romain@debian:~$ ccm create 39 -n 3 -v 3.9 -s > Current cluster is now: 39 > romain@debian:~$ ccm node1 cqlsh > Connected to 39 at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4] > Use HELP for help. > cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 2}; > cqlsh> CREATE TABLE test.idx_static (id text, id2 bigint static, added > timestamp, source text static, dest text, primary key (id, added)); > cqlsh> CREATE index ON test.idx_static (id2); > cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values > ('id1', 22,'2017-01-28', 'src1', 'dst1'); > cqlsh> SELECT * FROM test.idx_static where id2=22; > id | added | id2 | source | dest > -+-+-++-- > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > (2 rows) > cqlsh> CONSISTENCY ALL > Consistency level set to ALL. > cqlsh> SELECT * FROM test.idx_static where id2=22; > id | added | id2 | source | dest > -+-+-++-- > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > (3 rows) > {code} > When RF matches the number of nodes, it works as expected. 
> Example with RF=3 and 3 nodes: > {code} > romain@debian:~$ ccm create 39 -n 3 -v 3.9 -s > Current cluster is now: 39 > romain@debian:~$ ccm node1 cqlsh > Connected to 39 at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4] > Use HELP for help. > cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 3}; > cqlsh> CREATE TABLE test.idx_static (id text, id2 bigint static, added > timestamp, source text static, dest text, primary key (id, added)); > cqlsh> CREATE index ON test.idx_static (id2); > cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values > ('id1', 22,'2017-01-28', 'src1', 'dst1'); > cqlsh> SELECT * FROM test.idx_static where id2=22; > id | added | id2 | source | dest > -+-+-++-- > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > (1 rows) > cqlsh> CONSISTENCY all > Consistency level set to ALL. > cqlsh> SELECT * FROM test.idx_static where id2=22; > id | added | id2 | source | dest > -+-+-++-- > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > (1 rows) > {code} > Example with RF = 2 and 2 nodes: > {code} > romain@debian:~$ ccm create 39 -n 2 -v 3.9 -s > Current cluster is now: 39 > romain@debian:~$ ccm node1 cqlsh > Connected to 39 at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4] > Use HELP for help. 
> cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 2}; > cqlsh> CREATE TABLE test.idx_static (id text, id2 bigint static, added > timestamp, source text static, dest text, primary key (id, added)); > cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values > ('id1', 22,'2017-01-28', 'src1', 'dst1'); > cqlsh> CREATE index ON test.idx_static (id2); > cqlsh> INSERT INTO test.idx_static (id, id2, added, source, dest) values > ('id1', 22,'2017-01-28', 'src1', 'dst1'); > cqlsh> SELECT * FROM test.idx_static where id2=22; > id | added | id2 | source | dest > -+-+-++-- > id1 | 2017-01-27 23:00:00.00+ | 22 | src1 | dst1 > (1 rows) > cqlsh> CONSISTENCY ALL > Consistency level set to ALL. > cqlsh> SELECT * FROM test.idx_static where id2=22; > id | added | id2 | source | dest >
[jira] [Updated] (CASSANDRA-13277) Duplicate results with secondary index on static column
[ https://issues.apache.org/jira/browse/CASSANDRA-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrés de la Peña updated CASSANDRA-13277: -- Status: Patch Available (was: Open)
[jira] [Updated] (CASSANDRA-13006) Disable automatic heap dumps on OOM error
[ https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-13006: --- Status: Open (was: Patch Available) > Disable automatic heap dumps on OOM error > - > > Key: CASSANDRA-13006 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13006 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: anmols >Assignee: Benjamin Lerer >Priority: Minor > Fix For: 3.0.9 > > Attachments: 13006-3.0.9.txt > > > With CASSANDRA-9861, a change was added to enable collecting heap dumps by > default if the process encountered an OOM error. These heap dumps are stored > in the Apache Cassandra home directory unless configured otherwise (see > [Cassandra Support > Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps] > for this feature). > > The creation and storage of heap dumps aids debugging and investigative > workflows, but is not desirable for a production environment where these > heap dumps may occupy a large amount of disk space and require manual > intervention for cleanups. > > Managing heap dumps on out of memory errors and configuring the paths for > these heap dumps are available as standard JVM options. The current behavior > conflicts with the Boolean JVM flag HeapDumpOnOutOfMemoryError. > > A patch is proposed here that would make the heap dump on OOM error honor > the HeapDumpOnOutOfMemoryError flag. Users who still want to generate > heap dumps on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM > option. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
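[Editorial note] For reference, these are the standard HotSpot flags involved. On a Cassandra node they would typically be appended to {{JVM_OPTS}} in {{cassandra-env.sh}}; the dump path below is an example, not a Cassandra default:

```shell
# Opt in to a heap dump on OutOfMemoryError (the behavior this ticket
# proposes to make opt-in via the standard flag), and direct the dump
# away from the Cassandra home directory. Example path only.
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdumps"
```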
[jira] [Commented] (CASSANDRA-13277) Duplicate results with secondary index on static column
[ https://issues.apache.org/jira/browse/CASSANDRA-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940277#comment-15940277 ] Andrés de la Peña commented on CASSANDRA-13277: --- The underlying problem can be reproduced with a single node:
{code}
CREATE KEYSPACE k WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE k.c (
    pk int,
    ck int,
    sc int static,
    primary key (pk, ck)
);
CREATE index ON k.c (sc);
INSERT INTO k.c (pk, ck, sc) values (1, 2, 3);
INSERT INTO k.c (pk, ck, sc) values (-1, 2, 3);

SELECT token(pk), pk, ck, sc FROM k.c where sc = 3 AND token(pk) > 0;

 system.token(pk)     | pk | ck | sc
----------------------+----+----+----
 -4069959284402364209 |  1 |  2 |  3
  7297452126230313552 | -1 |  2 |  3

SELECT token(pk), pk, ck, sc FROM k.c where sc = 3 AND token(pk) <= 0;

 system.token(pk)     | pk | ck | sc
----------------------+----+----+----
 -4069959284402364209 |  1 |  2 |  3
  7297452126230313552 | -1 |  2 |  3
{code}
This happens because {{CompositesSearcher}} doesn't verify that index hits satisfy the command's key constraint when dealing with static columns, as is done with regular columns. The provided examples don't specify key restrictions, but they fail when RF is less than the number of nodes because the queries are internally split into subqueries directed at specific token ranges. Replicas ignore the token range restriction, and the coordinator receives duplicate rows from unexpected token ranges, as shown in the previous example. An initial version of the patch can be found here. 
||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:13277-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13277-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13277-trunk-dtest/]|
||[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...adelapena:13277-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13277-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13277-3.11-dtest/]|
[jira] [Updated] (CASSANDRA-13329) max_hints_delivery_threads does not work
[ https://issues.apache.org/jira/browse/CASSANDRA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Sorokoumov updated CASSANDRA-13329: - Reviewer: Alex Petrov Status: Patch Available (was: Open) {{JMXEnabledThreadPoolExecutor}} (used by {{HintsDispatchExecutor}} and {{PerSSTableIndexWriter}}) extends {{DebuggableThreadPoolExecutor}}. According to the docs on {{DebuggableThreadPoolExecutor}} it works in the following way: 1. If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. 2. If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread. 3. If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected. In both {{HintsDispatchExecutor}} and {{PerSSTableIndexWriter}}, {{JMXEnabledThreadPoolExecutor}} is constructed with corePoolSize equal to 1, maximumPoolSize equal to some constant and a work queue being unbounded {{LinkedBlockingQueue}}. In that setup when there are no tasks running, the new incoming task will add a thread to the pool (according to #1). However, because the queue is unbounded, according to #2 all the consequent tasks will be added to the queue instead of adding threads to the pool. Having corePoolSize equal to maximumPoolSize solves the problem because then the pool will maintain maximumPoolSize threads and submit tasks to them before queueing. *Link to the branch*: https://github.com/Ge/cassandra/tree/13186-3.10 > max_hints_delivery_threads does not work > > > Key: CASSANDRA-13329 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13329 > Project: Cassandra > Issue Type: Bug >Reporter: Fuud >Assignee: Aleksandr Sorokoumov > Labels: lhf > > HintsDispatchExecutor creates JMXEnabledThreadPoolExecutor with corePoolSize > == 1 and maxPoolSize==max_hints_delivery_threads and unbounded > LinkedBlockingQueue. 
> In this configuration additional threads will not be created. > Same problem with PerSSTableIndexWriter. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-12859) Column-level permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940240#comment-15940240 ] Stefan Podkowinski commented on CASSANDRA-12859: I'd like to share some feedback at this point, although I'm a bit late to the discussion. *GRANT - additive or replacing?* Looks to me that preferring additive properties for granting column permissions is already consensus here? I'd agree to that, but I'm not so happy with the current "all columns" grant semantics. bq. Granting a table permission without specifying columns allows access to all the columns (unrestricted). Granting a table permission with a column list restricts access to those columns only (restricted to a white list of columns). This will put us in an odd situation when combining roles for both cases:
{noformat}
grant select to role1
grant select(a,b) to role2
grant role1, role2 to role3
{noformat}
Now we'd end up with more restrictive access for role3 than one of its contained roles (role1). Or another example:
{noformat}
grant select on keyspace ks1 to role1;
grant select(a,b) on keyspace ks1 table tb1 to role1;
{noformat}
Am I correct that the 2nd grant would restrict access to the listed columns and override the less restricted access from the statement before? Isn't this effectively blacklisting permissions? *IAuthorizer* Most parts of Cassandra are not built with extensibility in mind. But the AuthN/Z handling can be configured to use custom implementations of the IAuthenticator/IAuthorizer interfaces. There are third-party implementations out there that depend on this, and I'd prefer not to break them without at least first deprecating the old APIs. *Remaining questions* Maybe I can at least try to answer two of those.. bq. 1. Do we need to be concerned with thrift-created 'Raw' ColumnDefinitions? CASSANDRA-8178 Thrift support will be gone in 4.0 (CASSANDRA-5). 
Thrift specific table definitions should be migrated during the 3.0 upgrade, if I'm not wrong. So I'd assume this should not be a concern, but not 100% sure on that. bq. 2. Do we need to be concerned about legacy non-role, user auth? I don't think so. I've opened CASSANDRA-13371 for this. > Column-level permissions > > > Key: CASSANDRA-12859 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12859 > Project: Cassandra > Issue Type: New Feature > Components: Core, CQL >Reporter: Boris Melamed > Labels: doc-impacting > Attachments: Cassandra Proposal - Column-level permissions.docx, > Cassandra Proposal - Column-level permissions v2.docx > > Original Estimate: 504h > Remaining Estimate: 504h > > h4. Here is a draft of: > Cassandra Proposal - Column-level permissions.docx (attached) > h4. Quoting the 'Overview' section: > The purpose of this proposal is to add column-level (field-level) permissions > to Cassandra. It is my intent to soon start implementing this feature in a > fork, and to submit a pull request once it’s ready. > h4. Motivation > Cassandra already supports permissions on keyspace and table (column family) > level. Sources: > * http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra > * https://cassandra.apache.org/doc/latest/cql/security.html#data-control > At IBM, we have use cases in the area of big data analytics where > column-level access permissions are also a requirement. All industry RDBMS > products are supporting this level of permission control, and regulators are > expecting it from all data-based systems. > h4. Main day-one requirements > # Extend CQL (Cassandra Query Language) to be able to optionally specify a > list of individual columns, in the {{GRANT}} statement. The relevant > permission types are: {{MODIFY}} (for {{UPDATE}} and {{INSERT}}) and > {{SELECT}}. > # Persist the optional information in the appropriate system table > ‘system_auth.role_permissions’. 
> # Enforce the column access restrictions during execution. Details: > #* Should fit with the existing permission propagation down a role chain. > #* Proposed message format when a user’s roles give access to the queried > table but not to all of the selected, inserted, or updated columns: > "User %s has no %s permission on column %s of table %s" > #* Error will report only the first checked column. > Nice to have: list all inaccessible columns. > #* Error code is the same as for table access denial: 2100. > h4. Additional day-one requirements > # Reflect the column-level permissions in statements of type > {{LIST ALL PERMISSIONS OF someuser;}} > # When columns are dropped or renamed, trigger purging or adapting of their > permissions > # Performance should not degrade in any significant way. > # Backwards compatibility > #* Permission enforcement for DBs created before the
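The enforcement rule in the proposal (an unrestricted table-level grant versus a column whitelist, with the error naming only the first denied column) can be sketched roughly as below. This is an illustrative Java sketch only, not Cassandra's actual authorization code; the class name `ColumnPermissionCheck` and method `firstDeniedColumn` are hypothetical.

```java
import java.util.*;

public class ColumnPermissionCheck {
    // Empty set models an unrestricted table-level grant; a non-empty set
    // models a grant with an explicit column whitelist.
    public static String firstDeniedColumn(Set<String> grantedColumns,
                                           List<String> requestedColumns) {
        if (grantedColumns.isEmpty())
            return null; // table-level grant: all columns accessible
        for (String c : requestedColumns)
            if (!grantedColumns.contains(c))
                return c; // only the first inaccessible column is reported
        return null;
    }

    public static void main(String[] args) {
        Set<String> grant = new HashSet<>(Arrays.asList("a", "b"));
        String denied = firstDeniedColumn(grant, Arrays.asList("a", "c"));
        if (denied != null)
            System.out.printf("User %s has no %s permission on column %s of table %s%n",
                              "alice", "SELECT", denied, "ks1.tb1");
    }
}
```

Under these assumed semantics, Stefan's role-combination concern is visible: unioning role1's empty set (unrestricted) with role2's {a,b} whitelist must not yield the whitelist, or role3 ends up more restricted than role1.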
[jira] [Commented] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940188#comment-15940188 ] Marcus Eriksson commented on CASSANDRA-13324: - The new {{public static final IInternodeAuthenticator ALLOW_ALL}} in {{IInternodeAuthenticator}} should probably be removed as it is not used (the alternative being to remove {{AllowAllInternodeAuthenticator.java}}, but I guess people might be using that in config files). Feel free to do that on commit, +1 > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within an > OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection.
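The intent of the fix described above can be sketched as follows. This is a hypothetical Java sketch, not the actual OutboundTcpConnection code; `shouldConnect` is an invented helper standing in for a call to {{IInternodeAuthenticator.authenticate}} before the connection attempt.

```java
import java.net.InetAddress;
import java.util.function.BiPredicate;

public class OutboundAuthCheck {
    // Stand-in for IInternodeAuthenticator.authenticate(InetAddress, int):
    // consult the authenticator before opening an outbound connection, instead
    // of spawning a thread that retries forever against a peer that will never
    // accept the connection.
    public static boolean shouldConnect(BiPredicate<InetAddress, Integer> authenticator,
                                        InetAddress peer, int port) {
        return authenticator.test(peer, port);
    }

    public static void main(String[] args) {
        // A deny-all authenticator, analogous to a restrictive custom impl.
        BiPredicate<InetAddress, Integer> denyAll = (addr, port) -> false;
        InetAddress peer = InetAddress.getLoopbackAddress();
        System.out.println(shouldConnect(denyAll, peer, 7000)); // prints false
    }
}
```

A deny result would abort the connection attempt up front, avoiding the orphaned reconnect threads described in the report.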
[jira] [Updated] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-12811: Reviewer: Benjamin Lerer Status: Open (was: Patch Available) > testall failure in > org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression > > > Key: CASSANDRA-12811 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12811 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: test-failure > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_testall/34/testReport/org.apache.cassandra.cql3.validation.operations/DeleteTest/testDeleteWithOneClusteringColumns_compression/ > {code} > Error Message > Expected empty result but got 1 rows > {code} > {code} > Stacktrace > junit.framework.AssertionFailedError: Expected empty result but got 1 rows > at org.apache.cassandra.cql3.CQLTester.assertEmpty(CQLTester.java:1089) > at > org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:463) > at > org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:427) > {code}
[jira] [Updated] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-12811: Status: Patch Available (was: Open) > testall failure in > org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression > > > Key: CASSANDRA-12811 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12811 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: test-failure > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_testall/34/testReport/org.apache.cassandra.cql3.validation.operations/DeleteTest/testDeleteWithOneClusteringColumns_compression/ > {code} > Error Message > Expected empty result but got 1 rows > {code} > {code} > Stacktrace > junit.framework.AssertionFailedError: Expected empty result but got 1 rows > at org.apache.cassandra.cql3.CQLTester.assertEmpty(CQLTester.java:1089) > at > org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:463) > at > org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:427) > {code}
[jira] [Updated] (CASSANDRA-13375) Cassandra v3.10 - UDA throws AssertionError when no records in select
[ https://issues.apache.org/jira/browse/CASSANDRA-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laurentiu Amitroaie updated CASSANDRA-13375: Description: If the aggregate function is used in a select which fetches no records there is a ServerError reported. It seems the bug is in org.apache.cassandra.cql3.functions.UDAggregate, there is an "assert !needsInit;" done on compute. Attached is a file which creates a keyspace / table / data records and UDF/UDA to replicate the issue. was: If the aggregate function is used in a select which fetches no records there is a ServerError reported. It seems the a bug in org.apache.cassandra.cql3.functions.UDAggregate, there is an "assert !needsInit;" done on compute. Attached is a file which creates a keyspace / table / data records and UDF/UDA to replicate the issue. > Cassandra v3.10 - UDA throws AssertionError when no records in select > -- > > Key: CASSANDRA-13375 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13375 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Laurentiu Amitroaie > Attachments: aggregate_with_no_records.cql > > > If the aggregate function is used in a select which fetches no records there > is a ServerError reported. > It seems the bug is in org.apache.cassandra.cql3.functions.UDAggregate, there > is an "assert !needsInit;" done on compute. > Attached is a file which creates a keyspace / table / data records and > UDF/UDA to replicate the issue.
[jira] [Created] (CASSANDRA-13375) Cassandra v3.10 - UDA throws AssertionError when no records in select
Laurentiu Amitroaie created CASSANDRA-13375: --- Summary: Cassandra v3.10 - UDA throws AssertionError when no records in select Key: CASSANDRA-13375 URL: https://issues.apache.org/jira/browse/CASSANDRA-13375 Project: Cassandra Issue Type: Bug Components: CQL Reporter: Laurentiu Amitroaie Attachments: aggregate_with_no_records.cql If the aggregate function is used in a select which fetches no records, a ServerError is reported. It seems the bug is in org.apache.cassandra.cql3.functions.UDAggregate; there is an "assert !needsInit;" done on compute. Attached is a file which creates a keyspace / table / data records and UDF/UDA to replicate the issue.
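The reported failure mode can be modelled with a small hypothetical sketch (not Cassandra's real UDAggregate class): an aggregate over zero rows never receives input, so its state is never initialised, and an {{assert !needsInit}} in compute() would throw AssertionError. The class name `UdaSketch` and its fields are invented for illustration; the sketch shows a defensive lazy-initialisation variant instead of the assertion.

```java
public class UdaSketch {
    private boolean needsInit = true;
    private long sum;

    private void init() { needsInit = false; sum = 0; }

    // Called once per input row; a SELECT matching zero rows never reaches this,
    // which is how the aggregate can arrive at compute() uninitialised.
    public void addInput(long v) {
        if (needsInit) init();
        sum += v;
    }

    // Defensive variant: initialise lazily instead of asserting, so an empty
    // select returns the aggregate's initial state rather than failing.
    public long compute() {
        if (needsInit) init();
        return sum;
    }

    public static void main(String[] args) {
        UdaSketch agg = new UdaSketch();            // no rows fed in
        System.out.println(agg.compute());          // prints 0 instead of failing
    }
}
```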
[jira] [Commented] (CASSANDRA-9639) size_estimates is inacurate in multi-dc clusters
[ https://issues.apache.org/jira/browse/CASSANDRA-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15939995#comment-15939995 ] Alex Petrov commented on CASSANDRA-9639: Unfortunately, the added dtest was / is failing for this issue: [history|http://cassci.datastax.com/job/trunk_dtest/lastCompletedBuild/testReport/junit/topology_test/TestTopology/size_estimates_multidc_test/history/] > size_estimates is inacurate in multi-dc clusters > > > Key: CASSANDRA-9639 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9639 > Project: Cassandra > Issue Type: Bug >Reporter: Sebastian Estevez >Assignee: Chris Lohfink >Priority: Minor > Fix For: 3.0.11 > > > CASSANDRA-7688 introduced size_estimates to replace the thrift > describe_splits_ex command. > Users have reported seeing estimates that are widely off in multi-dc clusters. > system.size_estimates show the wrong range_start / range_end