[jira] [Updated] (CASSANDRA-13561) Purge TTL on expiration
[ https://issues.apache.org/jira/browse/CASSANDRA-13561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Whang updated CASSANDRA-13561:
-------------------------------------
    Fix Version/s: 4.0
           Status: Patch Available  (was: Open)

Patch here https://github.com/whangsf/cassandra/commit/6f46e18988122f80608b1f5ba4a3d5c5dbbe1c61

> Purge TTL on expiration
> -----------------------
>
>                 Key: CASSANDRA-13561
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13561
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Andrew Whang
>            Priority: Minor
>             Fix For: 4.0
>
> Tables with mostly TTL columns tend to suffer from a high droppable-tombstone
> ratio, which results in higher read latency, CPU utilization, and disk usage.
> Expired TTL data become tombstones, and the nature of purging tombstones
> during compaction (due to checking for overlapping SSTables) makes them
> susceptible to surviving much longer than expected. A table option to purge
> TTL data on expiration would address this issue by preventing it from becoming
> tombstones. A boolean purge_ttl_on_expiration table setting would allow users
> to easily turn the feature on or off.
> Being more aggressive with gc_grace could also address the problem of
> long-lasting tombstones, but that would affect tombstones from deletes as well.
> Even if a purged [expired] cell is revived via repair from a node that hasn't
> yet compacted away the cell, it would be revived as an expiring cell with the
> same localDeletionTime, so reads should handle it properly. It would also
> be purged in the next compaction.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
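To make the proposal concrete, here is a sketch of how the option might look in CQL. The option name {{purge_ttl_on_expiration}} comes from this ticket; the exact DDL syntax and the table/keyspace names are assumptions for illustration, not the committed interface.

{code:title=Hypothetical usage|language=sql}
-- Hypothetical: enable TTL purging for a TTL-heavy table, so expired
-- cells are dropped during compaction instead of becoming tombstones.
CREATE TABLE ks.events (
    id int,
    ts timestamp,
    payload text,
    PRIMARY KEY (id, ts)
) WITH default_time_to_live = 86400
  AND purge_ttl_on_expiration = true;

-- Hypothetical: toggling the behaviour on an existing table.
ALTER TABLE ks.events WITH purge_ttl_on_expiration = true;
{code}

With the option off (presumably the default), behaviour would be unchanged: expired cells still convert to tombstones and wait out gc_grace as today.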
[jira] [Assigned] (CASSANDRA-13547) Filtered materialized views missing data
[ https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lerh Chuan Low reassigned CASSANDRA-13547:
------------------------------------------
    Assignee: Krishna Dattu Koneru

> Filtered materialized views missing data
> ----------------------------------------
>
>                 Key: CASSANDRA-13547
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Materialized Views
>         Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>            Reporter: Craig Nicholson
>            Assignee: Krishna Dattu Koneru
>            Priority: Blocker
>              Labels: materializedviews
>             Fix For: 3.11.x
>
> When creating a materialized view against a base table, the materialized view
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
> WITH REPLICATION = {
>     'class' : 'SimpleStrategy',
>     'replication_factor' : 1
> };
> CREATE TABLE test.table1 (
>     id int,
>     name text,
>     enabled boolean,
>     foo text,
>     PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
>     FROM test.table1
>     WHERE id IS NOT NULL
>     AND name IS NOT NULL
>     AND enabled = TRUE
>     PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
>     FROM test.table1
>     WHERE id IS NOT NULL
>     AND name IS NOT NULL
>     AND enabled = TRUE
>     PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated
> appropriately.
> (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>
>  id | name | enabled | foo
> ----+------+---------+-----
>   1 |  One |    True | Bar
>
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>
>  name | id | foo
> ------+----+-----
>   One |  1 | Bar
>
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>
>  name | id | enabled | foo
> ------+----+---------+-----
>   One |  1 |    True | Bar
>
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>
>  id | name | enabled | foo
> ----+------+---------+-----
>   1 |  One |   False | Bar
>
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>
>  name | id | foo
> ------+----+-----
>
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>
>  name | id | enabled | foo
> ------+----+---------+-----
>
> (0 rows)
> {code}
> However, a further update to the base table setting enabled to TRUE should
> include the record in both materialized views, yet only one view
> (table1_mv2) gets updated. (-)
> It appears that only the view (table1_mv2) that returns the filtered column
> (enabled) is updated. (-)
> Additionally, columns that are not part of the partition or clustering key are
> not updated. You can see that the foo column has a null value in table1_mv2.
> (-)
> {code:title=Enable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>
>  id | name | enabled | foo
> ----+------+---------+-----
>   1 |  One |    True | Bar
>
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>
>  name | id | foo
> ------+----+-----
>
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>
>  name | id | enabled | foo
> ------+----+---------+------
>   One |  1 |    True | null
>
> (1 rows)
> {code}
[jira] [Commented] (CASSANDRA-13547) Filtered materialized views missing data
[ https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036519#comment-16036519 ]

Krishna Dattu Koneru commented on CASSANDRA-13547:
--------------------------------------------------

Hi, I am new here and would like to work on this bug. Any help is appreciated.

I see two problems here:
1. The MV's row is not updated if the updated base-table column is not in the select list of the view (even if it is in the where clause).
2. Columns that are not in [updated columns + view PK columns] are not updated, or do not have any data after updates.

For the first problem I see these two possible solutions:
1. Add a restriction to the CQL syntax that all columns used in the where clause must be in the select list of the MV.
2. Also look in the where clause (of the MV) to check whether an update to a base-table column should update the view or not.

For the second problem, can anyone point me in the direction I should be looking? It seems that the mutations are built correctly but somehow they are not "applied" correctly.

> Filtered materialized views missing data
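Until a fix lands, the first problem suggests a workaround consistent with the report: include the filtered column in the view's select list, which is exactly what distinguishes the working {{table1_mv2}} from the broken {{table1_mv1}}. The view name below is hypothetical.

{code:title=Workaround sketch|language=sql}
-- Hypothetical replacement for table1_mv1: selecting the filtered
-- column (enabled) means updates to it propagate to the view.
CREATE MATERIALIZED VIEW test.table1_mv1_workaround AS
    SELECT id, name, foo, enabled
    FROM test.table1
    WHERE id IS NOT NULL
      AND name IS NOT NULL
      AND enabled = TRUE
    PRIMARY KEY ((name), id);
{code}

Note this only mitigates the first problem; the report shows {{table1_mv2}} still loses non-key columns such as {{foo}} after the row is re-enabled (the second problem).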
[jira] [Updated] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests
[ https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

mck updated CASSANDRA-11381:
----------------------------
    Status: Patch Available  (was: In Progress)

> Node running with join_ring=false and authentication can not serve requests
> ---------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11381
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: mck
>            Assignee: mck
>             Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
> A node started with {{-Dcassandra.join_ring=false}} in a cluster that has
> authentication configured, e.g. PasswordAuthenticator, won't be able to serve
> requests. This is because {{Auth.setup()}} never gets called during startup.
> Without {{Auth.setup()}} having been called in {{StorageService}}, clients
> connecting to the node fail, with the node throwing
> {noformat}
> java.lang.NullPointerException
>         at org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
>         at org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
>         at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
>         at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>         at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
>         at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
>         at com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
>         at com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
>         at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception is thrown from this
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]
> {code}
> ResultMessage.Rows rows =
>     authenticateStatement.execute(QueryState.forInternalCalls(),
>                                   new QueryOptions(consistencyForUser(username),
>                                                    Lists.newArrayList(ByteBufferUtil.bytes(username))));
> {code}
[jira] [Comment Edited] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests
[ https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036497#comment-16036497 ]

mck edited comment on CASSANDRA-11381 at 6/5/17 3:51 AM:
---------------------------------------------------------

{quote}The 3.0 branch looks like it is an older version of the patch than the 2.2, 3.11, and trunk patches - it's missing the atomic guard ensuring we only run the setup once. Is this just an oversight?{quote}
Yes, thanks for catching that. Has been corrected.

{quote}The new exception looks good, but the condition is too restrictive.{quote}
The condition has been changed to use {{StorageService.instance.getTokenMetadata().sortedTokens().isEmpty()}}.

--
All four patches updated (and rebased):
|| branch || testall || dtest ||
| [cassandra-2.2_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-2.2_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-2.2_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |
| [cassandra-3.0_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-3.0_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-3.0_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |
| [cassandra-3.11_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-3.11_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-3.11_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |
| [trunk_11381|https://github.com/michaelsembwever/cassandra/tree/mck/trunk_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Ftrunk_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |

All dtests are waiting on [INFRA-14153|https://issues.apache.org/jira/browse/INFRA-14153].
was (Author: michaelsembwever): {quote}The 3.0 branch looks like it is an older version of the patch than the 2.2, 3.11, and trunk patches - it's missing the atomic guard ensuring we only run the set up one. Is this just an oversight?{quote} Yes, thanks for catching that. Has been corrected. {quote}The new exception looks good, but the condition is too restrictive. {quote} The condition has been changed to use {{StorageService.instance.getTokenMetadata().sortedTokens().isEmpty()}}. -- All four patches updated (and rebased): || branch || testall || dtest || | [cassandra-2.2_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-2.2_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-2.2_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] | | [cassandra-3.0_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-3.0_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-3.0_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/ | | [cassandra-3.11_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-3.11_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-3.11_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] | | [trunk_11381|https://github.com/michaelsembwever/cassandra/tree/mck/trunk_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Ftrunk_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] | All dtests are waiting on [INFRA-14153|https://issues.apache.org/jira/browse/INFRA-14153]. 
[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests
[ https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036497#comment-16036497 ]

mck commented on CASSANDRA-11381:
---------------------------------

{quote}The 3.0 branch looks like it is an older version of the patch than the 2.2, 3.11, and trunk patches - it's missing the atomic guard ensuring we only run the setup once. Is this just an oversight?{quote}
Yes, thanks for catching that. Has been corrected.

{quote}The new exception looks good, but the condition is too restrictive.{quote}
The condition has been changed to use {{StorageService.instance.getTokenMetadata().sortedTokens().isEmpty()}}.

--
All four patches updated (and rebased):
|| branch || testall || dtest ||
| [cassandra-2.2_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-2.2_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-2.2_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |
| [cassandra-3.0_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-3.0_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-3.0_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |
| [cassandra-3.11_11381|https://github.com/michaelsembwever/cassandra/tree/mck/cassandra-3.11_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Fcassandra-3.11_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |
| [trunk_11381|https://github.com/michaelsembwever/cassandra/tree/mck/trunk_11381] | [testall|https://circleci.com/gh/michaelsembwever/cassandra/tree/mck%2Ftrunk_11381] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/] |

All dtests are waiting on [INFRA-14153|https://issues.apache.org/jira/browse/INFRA-14153].
[jira] [Comment Edited] (CASSANDRA-13209) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
[ https://issues.apache.org/jira/browse/CASSANDRA-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036465#comment-16036465 ]

Stefania edited comment on CASSANDRA-13209 at 6/5/17 2:12 AM:
--------------------------------------------------------------

CI for 2.2 looks good as far as the proposed patch is concerned: only the known failures in cqlshlib, and a client request timeout in {{test_bulk_round_trip_blogposts}} when invoking {{SELECT COUNT}} at line 2475, unrelated to this patch. I imagine we would need to reduce the number of records in the bulk tests to fix this sort of problem if it happens on ASF infra, or remove the {{SELECT COUNT}} altogether, if at all possible.

So \+1 for the patch in 2.2\+. Even though it may not stabilize the bulk tests fully, I expect the tests can be stabilized with changes in the tests.

Do you need me to commit?


was (Author: stefania):
CI for 2.2 looks good as far as the proposed patch is concerned: only the known failures in cqlshlib and a client request timeout in {{test_bulk_round_trip_blogposts}} when invoking {{SELECT COUNT}} at line 2475, unrelated to this patch. I imagine we would need to reduce the number of records in the bulk tests to fix this sort of problems if they happen on ASF infra, or remove the {{SELECT COUNT}} altogether, if at all possible. So \+1 for the patch in 2.2\+, even though if may not stabilize the bulk tests fully. Do you need me to commit?
> test failure in > cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections > -- > > Key: CASSANDRA-13209 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13209 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Shuler >Assignee: Kurt Greaves > Labels: dtest, test-failure > Attachments: 13209.patch, node1.log, node2.log, node3.log, node4.log, > node5.log > > > example failure: > http://cassci.datastax.com/job/cassandra-2.1_dtest/528/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts_with_max_connections > {noformat} > Error Message > errors={'127.0.0.4': 'Client request timeout. See > Session.execute[_async](timeout)'}, last_host=127.0.0.4 > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /tmp/dtest-792s6j > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-792s6j > dtest: DEBUG: clearing ssl stores from [/tmp/dtest-792s6j] directory > dtest: DEBUG: cluster ccm directory: /tmp/dtest-uNMsuW > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > cassandra.policies: INFO: Using datacenter 'datacenter1' for > DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify > a local_dc to the constructor, or limit contact points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New 
Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Running stress with user profile > /home/automaton/cassandra-dtest/cqlsh_tests/blogposts.yaml > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/dtest.py", line 1090, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2571, in test_bulk_round_trip_blogposts_with_max_connections > copy_from_options={'NUMPROCESSES': 2}) > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2500, in _test_bulk_round_trip > num_records = create_records() > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2473, in create_records > ret = rows_to_list(self.session.execute(count_statement))[0][0] > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, >
[jira] [Updated] (CASSANDRA-13209) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
[ https://issues.apache.org/jira/browse/CASSANDRA-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-13209:
---------------------------------
    Status: Ready to Commit  (was: Patch Available)

> test failure in
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
[jira] [Updated] (CASSANDRA-13209) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
[ https://issues.apache.org/jira/browse/CASSANDRA-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-13209:
---------------------------------
    Reviewer: Stefania

> test failure in
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
points to local cluster nodes > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > cassandra.cluster: INFO: New Cassandra host > discovered > dtest: DEBUG: Running stress with user profile > /home/automaton/cassandra-dtest/cqlsh_tests/blogposts.yaml > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/dtest.py", line 1090, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2571, in test_bulk_round_trip_blogposts_with_max_connections > copy_from_options={'NUMPROCESSES': 2}) > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2500, in _test_bulk_round_trip > num_records = create_records() > File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", > line 2473, in create_records > ret = rows_to_list(self.session.execute(count_statement))[0][0] > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > "errors={'127.0.0.4': 'Client request timeout. 
See > Session.execute[_async](timeout)'}, last_host=127.0.0.4 > {noformat}
[jira] [Commented] (CASSANDRA-13209) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections
[ https://issues.apache.org/jira/browse/CASSANDRA-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036465#comment-16036465 ] Stefania commented on CASSANDRA-13209: -- CI for 2.2 looks good as far as the proposed patch is concerned: only the known failures in cqlshlib and a client request timeout in {{test_bulk_round_trip_blogposts}} when invoking {{SELECT COUNT}} at line 2475, unrelated to this patch. I imagine we would need to reduce the number of records in the bulk tests to fix this sort of problem if it happens on ASF infra, or remove the {{SELECT COUNT}} altogether, if at all possible. So +1 for the patch in 2.2+, even though it may not stabilize the bulk tests fully. Do you need me to commit? > test failure in > cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts_with_max_connections > -- > > Key: CASSANDRA-13209 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13209 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Shuler >Assignee: Kurt Greaves > Labels: dtest, test-failure > Attachments: 13209.patch, node1.log, node2.log, node3.log, node4.log, > node5.log > > > example failure: > http://cassci.datastax.com/job/cassandra-2.1_dtest/528/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts_with_max_connections > {noformat} > Error Message > errors={'127.0.0.4': 'Client request timeout. 
See > Session.execute[_async](timeout)'}, last_host=127.0.0.4 > {noformat}
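The "Client request timeout" above is a driver-side deadline, not a server error: the coordinator may still be scanning rows for the {{SELECT COUNT}} when the Python driver gives up waiting. A minimal sketch of that semantics, using only the standard library (the function name `slow_count` and the sleep duration are illustrative stand-ins, not driver code):

```python
import concurrent.futures
import time

def slow_count():
    # Stand-in for the dtest's SELECT COUNT(*), which scans every row
    # and can exceed the client-side deadline on slow infrastructure.
    time.sleep(0.5)
    return 10000

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_count)
    try:
        # Analogue of the driver's Session.execute(query, timeout=...):
        # the work keeps running, but the client stops waiting for it.
        future.result(timeout=0.1)
    except concurrent.futures.TimeoutError:
        print("Client request timeout")
    # Waiting longer (the suggested fixes: fewer records, or dropping
    # the SELECT COUNT) lets the same request complete.
    print(future.result(timeout=5))
```

This is why reducing the number of records in the bulk tests helps: it shrinks the server-side work until it fits inside the client's fixed deadline.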
[jira] [Commented] (CASSANDRA-13559) Schema version id mismatch while upgrading to 3.0.13
[ https://issues.apache.org/jira/browse/CASSANDRA-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16036449#comment-16036449 ] Stefania commented on CASSANDRA-13559: -- {{NEWS.txt}} updated (only for schema migrations and only in 3.0.x), see commit [6b36d9|https://github.com/apache/cassandra/commit/6b36d9f0506351f03555efaa3a0784d097913adf]. Pull request for upgrade test created [here|https://github.com/riptano/cassandra-dtest/pull/1477]. > Schema version id mismatch while upgrading to 3.0.13 > > > Key: CASSANDRA-13559 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13559 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Blocker > Fix For: 3.0.14, 3.11.0 > > > As the order of SchemaKeyspace tables was changed ([6991556 | > https://github.com/apache/cassandra/commit/6991556e431a51575744248a4c484270c4f918c9], > CASSANDRA-12213), the result of the function > [{{calculateSchemaDigest}}|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/schema/SchemaKeyspace.java#L311] > also changed for the same schema, which causes a schema mismatch while > upgrading 3.0.x -> 3.0.13. > It can cause Cassandra to fail to start with an Unknown CF exception, and > streaming will fail: > {noformat} > ERROR [main] 2017-05-26 18:58:57,572 CassandraDaemon.java:709 - Exception > encountered during startup > java.lang.IllegalArgumentException: Unknown CF > 83c8eae0-3a65-11e7-9a27-e17fd11571e3 > {noformat} > {noformat} > WARN [MessagingService-Incoming-/IP] 2017-05-26 19:27:11,523 > IncomingTcpConnection.java:101 - UnknownColumnFamilyException reading from > socket; closing > org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for > cfId 922b7940-3a65-11e7-adf3-a3ff55d9bcf1. If a table was just created, this > is likely due to the schema not being fully propagated. Please wait for > schema agreement on table creation. 
> {noformat} > Restarting the new node will cause: > {noformat} > Exception (java.lang.NoSuchFieldError) encountered during startup: ALL > java.lang.NoSuchFieldError: ALL > at > org.apache.cassandra.service.ClientState.(ClientState.java:67) > at > org.apache.cassandra.cql3.QueryProcessor$InternalStateInstance.(QueryProcessor.java:155) > at > org.apache.cassandra.cql3.QueryProcessor$InternalStateInstance.(QueryProcessor.java:149) > at > org.apache.cassandra.cql3.QueryProcessor.internalQueryState(QueryProcessor.java:163) > at > org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:286) > at > org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:294) > at > org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:900) > at > org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354) > at > org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) > {noformat} > I would suggest restoring the older list order for digest calculation and > releasing 3.0.14. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
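The order-sensitivity at the heart of CASSANDRA-13559 can be illustrated with a small sketch in plain Python (this is not Cassandra's actual {{SchemaKeyspace}} code, and the table names below are made up): hashing the same set of schema tables in two different orders yields two different digests, which is why reordering the table list in CASSANDRA-12213 changed the computed schema version id for an unchanged schema.

```python
import hashlib

def schema_digest(table_names):
    """Illustrative only: digest a schema as an ordered hash of its tables."""
    md = hashlib.md5()
    for name in table_names:
        # Each update extends one running byte stream, so iteration order
        # is baked into the result.
        md.update(name.encode("utf-8"))
    return md.hexdigest()

old_order = ["keyspaces", "tables", "columns", "triggers"]
new_order = ["keyspaces", "columns", "tables", "triggers"]  # same set, reordered

# Identical schema content, different iteration order -> different digest,
# hence the spurious schema version mismatch during a rolling upgrade.
assert sorted(old_order) == sorted(new_order)
assert schema_digest(old_order) != schema_digest(new_order)
```

This also motivates the suggested fix of restoring the older list order: nodes on both versions then feed the hash identical byte streams and agree on the schema version again.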
[jira] [Updated] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests
[ https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mck updated CASSANDRA-11381: Status: In Progress (was: Patch Available) > Node running with join_ring=false and authentication can not serve requests > --- > > Key: CASSANDRA-11381 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11381 > Project: Cassandra > Issue Type: Bug >Reporter: mck >Assignee: mck > Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x > > > Starting up a node with {{-Dcassandra.join_ring=false}} in a cluster that has > authentication configured, e.g. PasswordAuthenticator, won't be able to serve > requests. This is because {{Auth.setup()}} never gets called during the > startup. > Without {{Auth.setup()}} having been called in {{StorageService}}, clients > connecting to the node fail with the node throwing > {noformat} > java.lang.NullPointerException > at > org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119) > at > org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471) > at > org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505) > at > org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at com.thinkaurelius.thrift.Message.invoke(Message.java:314) > at > com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90) > at > com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695) > at > com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689) > at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at 
java.lang.Thread.run(Thread.java:745) > {noformat} > The exception is thrown from the > [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119] > {code} > ResultMessage.Rows rows = > authenticateStatement.execute(QueryState.forInternalCalls(), > new QueryOptions(consistencyForUser(username), > Lists.newArrayList(ByteBufferUtil.bytes(username)))); > {code}
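The failure mode described above is a classic skipped-initialization bug: {{authenticateStatement}} is prepared by {{Auth.setup()}}, so when the {{join_ring=false}} startup path never calls setup, the first login dereferences a null field. A hedged Python sketch of the same shape (the class and field names are hypothetical stand-ins, not Cassandra's actual code):

```python
class PasswordAuthenticatorSketch:
    """Hypothetical stand-in for the real authenticator (illustrative only)."""

    def __init__(self):
        # Prepared by setup(), not by the constructor -- mirroring how
        # authenticateStatement is only initialized via Auth.setup().
        self.authenticate_statement = None

    def setup(self):
        # In a normal startup this runs from StorageService; with
        # join_ring=false it was never invoked.
        self.authenticate_statement = lambda username: {"name": username}

    def authenticate(self, username):
        # If setup() was skipped this raises TypeError ('NoneType' object
        # is not callable), the analogue of the NullPointerException above.
        return self.authenticate_statement(username)

auth = PasswordAuthenticatorSketch()
try:
    auth.authenticate("cassandra")
except TypeError:
    print("login fails: setup() never ran")

auth.setup()  # the fix: ensure auth setup runs on every startup path
print(auth.authenticate("cassandra"))
```

The fix direction follows the same logic: make the auth setup step run on every startup path, including the one taken under {{-Dcassandra.join_ring=false}}.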
[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3d2f0654 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3d2f0654 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3d2f0654 Branch: refs/heads/trunk Commit: 3d2f06547c55bb8160c50ee6751b641b543d6f85 Parents: 3e73d7f d8a3aa4 Author: Stefania Alborghetti Authored: Mon Jun 5 08:50:51 2017 +0800 Committer: Stefania Alborghetti Committed: Mon Jun 5 08:50:51 2017 +0800 -- --
[3/6] cassandra git commit: Ninja: update NEWS.txt for CASSANDRA-13559
Ninja: update NEWS.txt for CASSANDRA-13559 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b36d9f0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b36d9f0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b36d9f0 Branch: refs/heads/trunk Commit: 6b36d9f0506351f03555efaa3a0784d097913adf Parents: 6bf5cf7 Author: Stefania Alborghetti Authored: Mon Jun 5 08:48:30 2017 +0800 Committer: Stefania Alborghetti Committed: Mon Jun 5 08:48:30 2017 +0800 -- NEWS.txt | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b36d9f0/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index a92fc5d..6790e6b 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -18,7 +18,11 @@ using the provided 'sstableupgrade' tool. Upgrading - - - Nothing specific to this release, but please see previous versions upgrading section, + - If performing a rolling upgrade from 3.0.13, there will be a schema mismatch caused + by a bug with the schema digest calculation in 3.0.13. This will cause unnecessary + but otherwise harmless schema updates, see CASSANDRA-13559 for more details. + + - Nothing else specific to this release, but please see previous versions upgrading section, especially if you are upgrading from 2.2. 3.0.13