Cluster corrupt after removenode. how to restore

Running cassandra 3.7.
Our TEST cluster has 6 nodes, 3 in each data center,
with replication factor 2 for the keyspaces.
We added 1 new node in each data center for testing, making it an 8-node cluster.
We decided to remove the 2 new nodes from the cluster, but instead of
decommissioning them, the admin just deleted the
I've seen this stacktrace before:

> WARN [SharedPool-Worker-1] 2020-05-18 10:22:29,152
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread
> Thread[SharedPool-Worker-1,5,main]: {}
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted:
> at org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:439) ~[apache-cassandra-3.7.jar:3.7]
> at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:169) ~[apache-cassandra-3.7.jar:3.7]
> at org.apache.cassandra.
Do you mean that you want to fix the sstable corruption error and don't mind
losing the testing data? You may run nodetool scrub
or nodetool upgradesstables -a (-a rewrites all sstables,
including those already on the current version).
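For example (the keyspace and table names below are placeholders, not from
your cluster):

    # rebuild the sstables, discarding data that cannot be read
    # (scrub snapshots the table first by default)
    nodetool scrub my_keyspace my_table

    # or rewrite every sstable, even those already on the current format
    nodetool upgradesstables -a my_keyspace my_table

Both commands rewrite the sstables on disk, so check that you have the disk
headroom first.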
Thanks,
James

On Mon, May 18, 2020 at 12:54 PM Leena Ghatpande wrote:

> Running cassandra 3.7
On face value, it looks to me that your recovery approach is sound (but of
course, the devil is in the details). If you're getting inconsistent
results, try running the same query in cqlsh with CONSISTENCY ALL (to force
a read-repair from both replicas). If you get the expected result, that
would point to the replicas being out of sync and needing repair.
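For example, in cqlsh (the keyspace, table and key are placeholders):

    CONSISTENCY ALL;
    SELECT * FROM my_keyspace.my_table WHERE id = 1234;

CONSISTENCY is a cqlsh session setting, so it applies to every query you run
afterwards in that session; CONSISTENCY ONE puts it back to the default.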