[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976688#comment-13976688
 ] 

Benedict commented on CASSANDRA-6696:
-------------------------------------

The problem here is packing vnodes fairly across the disks: either we need to 
ensure that all vnodes are of roughly equal size (very difficult), or we 
probably need a dynamic allocation strategy. The problem with _that_ is that 
when the token range gets redistributed by node additions/removals, the whole 
cluster suddenly needs to start kicking off rebalancing of its local disks.

We could support splitting the total token range into M distinct chunks, where 
M is preferably some multiple of the number of disks, and allocating each chunk 
to a disk in round-robin fashion. This remains deterministic, and it is, I 
think, easier to guarantee an even distribution within a given token range than 
to guarantee all vnodes are of equal size, whilst still supporting a dynamic 
cluster size. Even here, though, realistically I think the number of chunks 
needs to be quite a bit smaller than the number of vnodes to guarantee anything 
approaching balance of these chunks.
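As a rough sketch of that idea (hypothetical names and signatures, not 
Cassandra's actual API): split the range into M = disks × chunksPerDisk even 
chunks, assign chunk i to disk i mod numDisks, and map a token to its chunk by 
its position within the total range.

```java
// Illustrative only: deterministic round-robin chunk-to-disk allocation.
public class ChunkAllocator {
    /** For each of the M chunks, the index of the disk it maps to. */
    static int[] assignChunks(int numDisks, int chunksPerDisk) {
        int m = numDisks * chunksPerDisk;          // M chunks total
        int[] diskForChunk = new int[m];
        for (int chunk = 0; chunk < m; chunk++) {
            diskForChunk[chunk] = chunk % numDisks; // round-robin
        }
        return diskForChunk;
    }

    /** Maps a token in [min, max) to its chunk index by even splitting. */
    static int chunkForToken(long token, long min, long max, int m) {
        double frac = (double) (token - min) / ((double) max - (double) min);
        return Math.min(m - 1, (int) (frac * m));
    }
}
```

Because the mapping depends only on the total range and M, every node computes 
the same answer, and adding/removing nodes does not itself change which chunk a 
surviving token falls into.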

> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>             Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace = 10 days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is older than gc grace, it was compacted away together 
> with the actual data on nodes A and B. So there is no trace of this row 
> column on nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and was replaced with new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)
