[
https://issues.apache.org/jira/browse/CASSANDRA-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379510#comment-15379510
]
Jim Witschey commented on CASSANDRA-11927:
------------------------------------------
The test asserts 20 times that a query hits the right nodes, and in this run's
output that assertion did in fact run all 20 times:
{code}
Unexpected error in log, see stdout
-------------------- >> begin captured logging << --------------------
dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c4_IWW
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{ 'num_tokens': None,
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 10000,
'read_request_timeout_in_ms': 10000,
'request_timeout_in_ms': 10000,
'truncate_request_timeout_in_ms': 10000,
'write_request_timeout_in_ms': 10000}
dtest: DEBUG:
replicas should be: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG: replicas were: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG:
replicas should be: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG: replicas were: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG:
replicas should be: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG: replicas were: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
[... the same "replicas should be" / "replicas were" pair repeats 16 more times, 20 pairs in total ...]
dtest: DEBUG:
replicas should be: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG: replicas were: set(['127.0.0.3', '127.0.0.2', '127.0.0.1'])
dtest: DEBUG: removing ccm cluster test at: /mnt/tmp/dtest-c4_IWW
dtest: DEBUG: clearing ssl stores from [/mnt/tmp/dtest-c4_IWW] directory
--------------------- >> end captured logging << ---------------------
{code}
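For reference, here is a minimal sketch of the kind of replica-placement check
that would produce the log pairs above. This is an assumption about the shape
of the test, not the actual replication_test.py code, and the helper names are
hypothetical:
{code}
# Hypothetical sketch of the assertion loop (not the actual dtest source).
# get_replicas_for_key() and debug() stand in for whatever helpers the real
# replication_test.ReplicationTest.simple_test uses.
def check_replica_placement(session, keyspace, expected_replicas, trials=20):
    for i in range(trials):
        # Write a row, then determine which nodes actually hold the key.
        session.execute(
            "INSERT INTO {}.test (k, v) VALUES ({}, {})".format(keyspace, i, i))
        actual = get_replicas_for_key(session, keyspace, key=i)

        debug("\nreplicas should be: %s" % expected_replicas)
        debug("replicas were: %s" % actual)

        # The 20 "should be" / "were" pairs in the captured logging come
        # from a loop like this one.
        assert actual == expected_replicas
{code}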
So the error found in the logs, presumably emitted at table creation, did not
break the behavior under test here.
Marking as a bug and unassigning so it can be added to the dev queue.
> dtest failure in replication_test.ReplicationTest.simple_test
> -------------------------------------------------------------
>
> Key: CASSANDRA-11927
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11927
> Project: Cassandra
> Issue Type: Test
> Reporter: Sean McCarthy
> Assignee: Jim Witschey
> Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log,
> node3.log, node3_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/387/testReport/replication_test/ReplicationTest/simple_test
> Failed on CassCI build trunk_novnode_dtest #387
> Logs are attached.
> Unexpected error in question:
> {code}
> ERROR [SharedPool-Worker-1] 2016-05-30 16:00:17,211 Keyspace.java:504 -
> Attempting to mutate non-existant table 99f5be60-267f-11e6-ad5f-f13d771494ea
> (test.test)
> {code}