Hi all,

I'm looking to understand Cassandra's behavior in an sstable corruption
scenario, and what the minimum amount of work is that needs to be done to
remove a bad sstable file.

Consider: a 3-node, RF=3 cluster, with reads/writes at QUORUM.
One node hits an SSTable corruption exception on
keyspace1/table1/lb-1-big-Data.db, and sstablescrub does not work.

Is it safest, after running a repair on the two healthy nodes, to:
1) Delete only keyspace1/table1/lb-1-big-Data.db,
2) Delete all files associated with that sstable (i.e.,
keyspace1/table1/lb-1-*),
3) Delete all files under keyspace1/table1/, or
4) Are all of the above equivalent from a correctness perspective?
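For what it's worth, here is a minimal sketch of what option 2 looks like on disk. The directory layout and the component list below are assumptions for illustration (the exact set of component files varies by sstable format version), and the node would need to be stopped before deleting anything; the point is only that deleting the whole lb-1-* generation leaves no orphaned components behind:

```shell
# Illustrative only: simulate one sstable generation and remove ALL of
# its component files, not just Data.db.
DATA_DIR="data/keyspace1/table1"   # hypothetical data directory
mkdir -p "$DATA_DIR"

# Create stand-ins for the component files one generation typically has
# (names are illustrative; real extensions differ by format version).
for comp in Data Index Summary Filter Statistics CompressionInfo Digest TOC; do
  touch "$DATA_DIR/lb-1-big-$comp.db"
done

# Option 2: delete every file of generation lb-1, so no component
# (index, bloom filter, summary, ...) is left pointing at missing data.
rm -f "$DATA_DIR"/lb-1-*

ls "$DATA_DIR" | wc -l   # prints 0: the whole generation is gone
```

Deleting only Data.db (option 1) would leave the index, filter, and summary components orphaned, which is why the per-generation glob seems like the more natural unit.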

Thanks,
Leon
