[ https://issues.apache.org/jira/browse/CASSANDRA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Bing Wu updated CASSANDRA-12860:
--------------------------------

    Description:
Summary of symptom:
- The setup is a multi-region cluster in AWS (5 regions). Each region has at least 4 hosts, with RF equal to half the number of nodes, using vnodes (256).
- How to reproduce:
-- On node A, start this repair job (again, we are running a fresh 3.5.0 install):
{code}
nohup sudo nodetool repair -j 2 -pr -full myks > /tmp/repair.log 2>&1 &
{code}
-- The job starts fine, reporting progress like:
{noformat}
[2016-10-28 22:37:52,692] Starting repair command #1, repairing keyspace myks with repair options (parallelism: parallel, primary range: true, incremental: false, job threads: 2, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 256)
[2016-10-28 22:38:35,099] Repair session 36f13450-9d5f-11e6-8bf7-a9f47ff986a9 for range [(4029874034937227774,4033949979656106020]] finished (progress: 1%)
[2016-10-28 22:38:38,769] Repair session 36f30910-9d5f-11e6-8bf7-a9f47ff986a9 for range [(-2395606719402271267,-2394525508513518837]] finished (progress: 1%)
[2016-10-28 22:38:48,521] Repair session 36f3f370-9d5f-11e6-8bf7-a9f47ff986a9 for range [(-5223108861718702793,-5221117649630514419]] finished (progress: 2%)
{noformat}
-- Then manually shut down another node (node B) in the same region (we haven't tried a node in another region yet, but expect the same behavior based on past experience).
-- Shortly after that, this message appears in the job log (as well as in system.log) on node A:
{noformat}
[2016-10-28 22:41:46,268] Repair session 37088ce1-9d5f-11e6-8bf7-a9f47ff986a9 for range [(-928974038666914990,-927967994563261540]] failed with error Endpoint /node_B_ip died (progress: 51%)
{noformat}
-- From this point on, the repair job appears to hang:
--- no further messages in the job log
--- no related messages in system.log
--- CPU stays low (low single-digit percent of one CPU)
-- After an hour, manually kill the repair job (found via "ps -eaf | grep repair").
-- Restart C* on node A:
--- Verified the system is up and there are no error messages in system.log.
--- Also verified that there are no error messages on node B.
-- After node A settles down (i.e. no new messages in system.log), restart the same repair job:
{code}
nohup sudo nodetool repair -j 2 -pr -full myks > /tmp/repair.log 2>&1 &
{code}
-- The job fails pretty quickly, reporting errors from more nodes, B and K:
{noformat}
<production>[y...@cass-tm-1b-012.apse1.mashery.com ~]$ tail -f /tmp/repair.log
[2016-10-28 22:49:52,965] Starting repair command #1, repairing keyspace myks with repair options (parallelism: parallel, primary range: true, incremental: false, job threads: 2, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 256)
[2016-10-28 22:50:15,839] Repair session e4180720-9d60-11e6-b2f9-cb9524b3c536 for range [(4029874034937227774,4033949979656106020]] failed with error [repair #e4180720-9d60-11e6-b2f9-cb9524b3c536 on myks/rtable, [(4029874034937227774,4033949979656106020]]] Validation failed in /node_K_ip (progress: 1%)
[2016-10-28 22:50:17,158] Repair session e419dbe0-9d60-11e6-b2f9-cb9524b3c536 for range [(-2395606719402271267,-2394525508513518837]] failed with error [repair #e419dbe0-9d60-11e6-b2f9-cb9524b3c536 on myks/rtable, [(-2395606719402271267,-2394525508513518837]]] Validation failed in /node_B_ip (progress: 1%)
[2016-10-28 22:50:18,256] Repair session e41b1460-9d60-11e6-b2f9-cb9524b3c536 for range [(-5223108861718702793,-5221117649630514419]] failed with error [repair #e41b1460-9d60-11e6-b2f9-cb9524b3c536 on myks/rtable, [(-5223108861718702793,-5221117649630514419]]] Validation failed in /node_B_ip (progress: 2%)
{noformat}
-- On the affected nodes (B and K), similar errors appear:
{noformat}
ERROR [ValidationExecutor:5] 2016-10-28 22:58:45,307 CompactionManager.java:1320 - Cannot start multiple repair sessions over the same sstables
ERROR [ValidationExecutor:5] 2016-10-28 22:58:45,307 Validator.java:261 - Failed creating a merkle tree for [repair #14378ec0-9d62-11e6-ab75-cd4d64a01b02 on oauth2/atokens, [(4029874034937227774,4033949979656106020]]], /node_B_ip (see log for details)
INFO [AntiEntropyStage:1] 2016-10-28 22:58:45,307 Validator.java:274 - [repair #14378ec0-9d62-11e6-ab75-cd4d64a01b02] Sending completed merkle tree to /52.220.127.190 for myks.xtable
ERROR [ValidationExecutor:5] 2016-10-28 22:58:45,308 CassandraDaemon.java:195 - Exception in thread Thread[ValidationExecutor:5,1,main]
java.lang.RuntimeException: Cannot start multiple repair sessions over the same sstables
    at org.apache.cassandra.db.compaction.CompactionManager.getSSTablesToValidate(CompactionManager.java:1321) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1211) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:81) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager$11.call(CompactionManager.java:841) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_102]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_102]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
{noformat}
-- At this point we are back where we started: we kill the repair job on node A and restart C* on BOTH nodes A and K, but we still see the same error.
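One recovery path that may be worth trying before resorting to full C* restarts: StorageServiceMBean exposes a forceTerminateAllRepairSessions operation over JMX, which asks a node to abandon its active repair sessions. Below is a minimal Java sketch of invoking it (assumptions: default JMX port 7199, no JMX authentication; the host argument and class name are placeholders). Whether this actually clears the "Cannot start multiple repair sessions" state on 3.5.0 without a restart is not something we have verified.
{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Ask a node to drop its active repair sessions over JMX,
// as a possible alternative to restarting the whole process.
public class TerminateRepairSessions
{
    public static void main(String[] args) throws Exception
    {
        String host = args.length > 0 ? args[0] : "127.0.0.1"; // e.g. node B or node K
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, null))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService = new ObjectName("org.apache.cassandra.db:type=StorageService");
            // forceTerminateAllRepairSessions is a no-arg operation on StorageServiceMBean
            mbs.invoke(storageService, "forceTerminateAllRepairSessions", new Object[0], new String[0]);
            System.out.println("Requested termination of active repair sessions on " + host);
        }
    }
}
{code}
If that clears the stuck validations on nodes B and K, it would at least avoid bouncing every affected node just to make repair runnable again.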

was:
Summary of symptom:
- The setup is a multi-region cluster in AWS (5 regions). Each region has at least 4 hosts, with RF equal to half the number of nodes, using vnodes (256).
- How to reproduce:
-- On node A, start this repair job (again, we are running a fresh 3.5.0 install):
{code}
sudo nodetool repair -pr my_keyspace > /tmp/repair.log 2>&1 &
{code}
-- The job starts fine, reporting progress like:
{noformat}
[2016-10-28 21:57:44,427] Repair session 03b6ca61-9d59-11e6-b118-b9abfef3117a for range [(2427717901143689479,2428773541412139342]] finished (progress: 30%)
{noformat}
-- Then manually shut down another node (node B) in the same region (we haven't tried a node in another region yet, but expect the same behavior based on past experience).
-- Shortly after that, this message appears in the job log (as well as in system.log) on node A:
{noformat}
[2016-10-28 21:59:40,835] Repair session 04000861-9d59-11e6-b118-b9abfef3117a for range [(6981391007853361210,6983870256023436902]] failed with error Endpoint /52.220.127.177 died (progress: 59%)
{noformat}
-- At this point, the repair job appears to hang:
--- no further messages in the job log
--- no related messages in system.log
--- CPU stays low (<5%)
-- After an hour, manually kill the repair job (found via "ps -eaf | grep repair").
-- Restart C* on node A:
--- Verified the system is up and there are no error messages in system.log.
--- Also verified that there are no error messages on node B.
-- After node A settles down (i.e. no new messages in system.log), restart the same repair job:
{code}
sudo nodetool repair -pr my_keyspace > /tmp/repair.log 2>&1 &
{code}
-- The job fails pretty quickly, reporting an error from another node, K:
{noformat}
<production>[y...@cass-tm-1b-012.apse1.mashery.com ~]$ tail -f /tmp/repair.log
nohup: ignoring input
[2016-10-28 22:15:31,770] Starting repair command #1, repairing keyspace my_keyspace with repair options (parallelism: parallel, primary range: true, incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 256)
[2016-10-28 22:15:55,375] Repair session 17b7c390-9d5c-11e6-ba28-61f7d2732e5e for range [(4029874034937227774,4033949979656106020]] failed with error [repair #17b7c390-9d5c-11e6-ba28-61f7d2732e5e on my_keyspace/atable, [(4029874034937227774,4033949979656106020]]] Validation failed in /NodeK (progress: 1%)
{noformat}
-- Go to node K and tail/view system.log, seeing:
{noformat}
ERROR [ValidationExecutor:3] 2016-10-28 22:15:55,226 CompactionManager.java:1320 - Cannot start multiple repair sessions over the same sstables
ERROR [ValidationExecutor:3] 2016-10-28 22:15:55,226 Validator.java:261 - Failed creating a merkle tree for [repair #17b7c390-9d5c-11e6-ba28-61f7d2732e5e on my_keyspace/atable, [(4029874034937227774,4033949979656106020]]], /52.220.127.190 (see log for details)
ERROR [ValidationExecutor:3] 2016-10-28 22:15:55,227 CassandraDaemon.java:195 - Exception in thread Thread[ValidationExecutor:3,1,main]
java.lang.RuntimeException: Cannot start multiple repair sessions over the same sstables
    at org.apache.cassandra.db.compaction.CompactionManager.getSSTablesToValidate(CompactionManager.java:1321) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1211) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:81) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager$11.call(CompactionManager.java:841) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_102]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_102]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
ERROR [ValidationExecutor:3] 2016-10-28 22:15:55,468 CompactionManager.java:1320 - Cannot start multiple repair sessions over the same sstables
ERROR [ValidationExecutor:3] 2016-10-28 22:15:55,468 Validator.java:261 - Failed creating a merkle tree for [repair #17b7c390-9d5c-11e6-ba28-61f7d2732e5e on my_keyspace/btable, [(4029874034937227774,4033949979656106020]]], /52.220.127.190 (see log for details)
ERROR [ValidationExecutor:3] 2016-10-28 22:15:55,469 CassandraDaemon.java:195 - Exception in thread Thread[ValidationExecutor:3,1,main]
java.lang.RuntimeException: Cannot start multiple repair sessions over the same sstables
    at org.apache.cassandra.db.compaction.CompactionManager.getSSTablesToValidate(CompactionManager.java:1321) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1211) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:81) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at org.apache.cassandra.db.compaction.CompactionManager$11.call(CompactionManager.java:841) ~[apache-cassandra-3.5.0.jar:3.5.0]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_102]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_102]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
{noformat}
-- At this point, we are back where we were: we had to kill the repair job on node A, then restart C* on node K, and we are seeing the same error (cannot start multiple repair sessions over the same sstables).
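For reference, the failure logs above point at CompactionManager.getSSTablesToValidate. Below is a minimal, hypothetical sketch of the kind of session bookkeeping that would produce exactly this symptom; it is a simplified illustration, not the actual 3.5.0 source, and the class and method names are invented:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Hypothetical guard: each sstable may be claimed by at most one
// parent repair session at a time; a second claim is rejected.
class ValidationGuard
{
    // sstable generation -> parent repair session currently validating it
    private final Map<Integer, UUID> claimed = new HashMap<>();

    synchronized void claimForValidation(UUID parentSession, Set<Integer> sstableGenerations)
    {
        for (Integer generation : sstableGenerations)
        {
            UUID owner = claimed.get(generation);
            if (owner != null && !owner.equals(parentSession))
                // Matches the error seen on nodes B and K: an earlier session
                // still holds the sstables, so the new validation cannot start.
                throw new RuntimeException("Cannot start multiple repair sessions over the same sstables");
        }
        for (Integer generation : sstableGenerations)
            claimed.put(generation, parentSession);
    }

    // If this cleanup never runs (coordinator died, nodetool killed), the
    // claims linger in memory until the node is restarted.
    synchronized void releaseSession(UUID parentSession)
    {
        claimed.values().removeIf(owner -> owner.equals(parentSession));
    }
}
{code}
Under bookkeeping like this, killing the nodetool process on node A would leave the participant nodes holding stale claims, which is consistent with repair only becoming possible again after those nodes are restarted.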
> Nodetool repair fragile: cannot properly recover from single node failure. Has to restart all nodes in order to repair again
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12860
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12860
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: CentOS 6.7, Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode), Cassandra 3.5.0, fresh install
>            Reporter: Bing Wu
>            Priority: Critical
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)