[ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899398#comment-13899398 ]

Benedict commented on CASSANDRA-6696:
-------------------------------------

One possibility here is that we could split the bloom filter and metadata onto a 
separate disk from their data files, so that if/when a disk fails we have the 
option of scrubbing any records on the remaining disks that we think were 
present on the lost disk in a file whose min_timestamp is older than 
gc_grace_seconds ago.

Once we've done the scrub (in fact it could probably be "done" instantly by 
just setting up a filter on compaction + reads until we're fully repaired 
and have compacted away the old data) we can start serving reads again, and can 
start a repair from the other nodes to receive data for all of the records 
we're now missing (whether lost with the failed disk or forcefully trashed 
by the scrub).
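
A minimal sketch of what such a filter might look like, assuming hypothetical 
types for the metadata salvaged from the surviving metadata disk (a bloom 
filter view plus the min timestamp of each lost sstable); none of this is 
Cassandra's actual internals:

{code:java}
import java.util.List;
import java.util.function.Predicate;

/** Hypothetical read-only view of a salvaged sstable bloom filter. */
interface BloomFilterView
{
    boolean mightContain(byte[] partitionKey);
}

/** Metadata salvaged from the metadata disk for one sstable lost with the data disk. */
final class LostSSTableMeta
{
    final BloomFilterView bloomFilter;  // partition keys the lost file (probably) contained
    final long minTimestampMicros;      // minimum write timestamp in the lost file

    LostSSTableMeta(BloomFilterView bloomFilter, long minTimestampMicros)
    {
        this.bloomFilter = bloomFilter;
        this.minTimestampMicros = minTimestampMicros;
    }
}

final class LostDiskFilter
{
    /**
     * Predicate over partition keys that reads and compaction should drop on
     * the surviving disks: any key that may have lived on the lost disk in a
     * file old enough that it could have held a tombstone already past
     * gc_grace (and therefore already purged on the other replicas).
     */
    static Predicate<byte[]> dropCandidates(List<LostSSTableMeta> lostFiles,
                                            long gcGraceSeconds,
                                            long nowMicros)
    {
        long cutoffMicros = nowMicros - gcGraceSeconds * 1_000_000L;
        return key -> lostFiles.stream()
                               .anyMatch(f -> f.minTimestampMicros < cutoffMicros
                                              && f.bloomFilter.mightContain(key));
    }
}
{code}

Anything the filter drops is exactly what the subsequent repair should stream 
back from the other replicas if it is genuinely live.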

> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: sankalp kohli
>            Priority: Minor
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have three nodes A, B and C, with RF=3 and GC grace = 10 days. 
> row=sankalp col=sankalp was written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was successfully written for the same row and column 
> 15 days back. 
> Since this tombstone is older than gc grace, it was compacted away on nodes A 
> and B together with the actual data. So there is no trace of this row and 
> column on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on 
> drive2. Compaction has not yet reclaimed the data and tombstone. 
> Drive2 becomes corrupt and is replaced with a new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now, after replacing the drive, we run repair. This data will be propagated 
> to all nodes. 
> Note: This is still a problem even if we run repair every gc grace period. 
>  
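
To make the resurrection sequence above concrete, here is a minimal, 
self-contained simulation; the per-drive maps and merge-by-newest read are 
hypothetical stand-ins, not Cassandra's actual storage code:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/** A cell is either live data or a tombstone, each with a write age in days. */
record Cell(String value, boolean tombstone, int daysAgo) {}

public class JbodResurrection
{
    public static void main(String[] args)
    {
        // Node C: original data on drive1 (20 days old), tombstone on drive2 (15 days old).
        // On nodes A and B, both cells were already purged by compaction (gc grace = 10 days).
        Map<String, Cell> drive1 = new HashMap<>(Map.of("sankalp:sankalp", new Cell("v", false, 20)));
        Map<String, Cell> drive2 = new HashMap<>(Map.of("sankalp:sankalp", new Cell(null, true, 15)));

        // A read merges all drives and keeps the newest cell: the tombstone wins -> deleted.
        System.out.println("before failure: " + read(drive1, drive2));  // Optional.empty

        // Drive2 fails and is replaced with an empty drive: the tombstone vanishes.
        drive2.clear();

        // The same read now resurrects the 20-day-old value, and a subsequent
        // repair would happily stream it back to nodes A and B.
        System.out.println("after replacement: " + read(drive1, drive2));  // Optional[v]
    }

    /** Merge a single key across drives, keeping the most recently written cell. */
    @SafeVarargs
    static Optional<String> read(Map<String, Cell>... drives)
    {
        Cell newest = null;
        for (Map<String, Cell> drive : drives)
        {
            Cell c = drive.get("sankalp:sankalp");
            if (c != null && (newest == null || c.daysAgo() < newest.daysAgo()))
                newest = c;
        }
        return (newest == null || newest.tombstone()) ? Optional.empty()
                                                      : Optional.of(newest.value());
    }
}
{code}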



