This is not guaranteed to be safe

If the corrupted sstable contains a tombstone that is past gc_grace_seconds, and 
another sstable still holds the data that tombstone shadows, removing the corrupt 
sstable will bring the deleted data back to life, and repair will spread it 
around the ring.

If that’s a problem for you, you should consider the entire node failed: run 
repair among the surviving replicas, and then replace the down server.
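For reference, the replace step is normally done by bootstrapping a fresh node 
with the replace_address flag. A rough sketch (the IP and file paths below are 
examples, not from this thread):

```shell
# On a FRESH replacement node running the same Cassandra version, with the
# same cluster_name and seed list as the dead node (which must stay down).
# Append the replace flag to cassandra-env.sh, then start Cassandra:
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.12"' \
  >> /etc/cassandra/cassandra-env.sh   # 10.0.0.12 = example IP of the dead node
sudo service cassandra start
# The new node streams data from the surviving replicas and takes over the
# dead node's tokens; remove the flag again once the bootstrap completes.
```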

If you don’t do deletes, and you write with consistency higher than ONE, there’s 
a bit less risk in removing a single sstable.
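If you do go the single-sstable route, the sequence usually looks something like 
the following (keyspace, table, and generation number are examples; on 2.0.x 
each sstable is a set of components — Data.db, Index.db, and so on — that must 
all be moved together):

```shell
# Stop Cassandra before touching any sstable files on disk.
sudo service cassandra stop

# Move ALL components of the corrupt sstable generation out of the data
# directory (example keyspace/table/generation shown).
mkdir -p /var/lib/cassandra/quarantine
mv /var/lib/cassandra/data/mykeyspace/mytable/mykeyspace-mytable-jb-42-* \
   /var/lib/cassandra/quarantine/

# Optionally rerun the offline scrub on what remains.
sstablescrub mykeyspace mytable

# Restart, then repair so this node re-syncs from its replicas.
sudo service cassandra start
nodetool repair mykeyspace mytable
```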


-- 
Jeff Jirsa


> On Nov 2, 2017, at 7:58 PM, sai krishnam raju potturi <pskraj...@gmail.com> 
> wrote:
> 
> Yes. Move the corrupt sstable, and run a repair on this node, so that it gets 
> in sync with its peers.
> 
>> On Thu, Nov 2, 2017 at 6:12 PM, Shashi Yachavaram <shashi...@gmail.com> 
>> wrote:
>> We are on Cassandra 2.0.17 and have corrupted sstables. We ran offline 
>> sstablescrub, but it fails with an OOM. We increased MAX_HEAP_SIZE to 8G and 
>> it still fails. 
>> 
>> Can we move the corrupted sstable file and rerun sstablescrub, followed by a 
>> repair?
>> 
>> -shashi..
> 
