[
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073835#comment-15073835
]
Marcus Eriksson commented on CASSANDRA-6696:
--------------------------------------------
Pushed two new commits to the
[repo|https://github.com/krummas/cassandra/commits/marcuse/6696-11] - the first
one fixes a bug where streaming would never finish if we streamed to several
sstables (CASSANDRA-10949).
The second commit fixes the progress reporting by introducing a writer id that
we key on instead of the file name; this means that the file name shown in
netstats will change, but the progress will be correct.
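A minimal sketch of the idea behind the second commit - keying progress on a stable writer id rather than the file name, so the name can change mid-stream without resetting the counter. All class and method names below are illustrative, not Cassandra's actual API:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: streaming progress tracked per writer id.
// The file name is display metadata only, so it may change (e.g. when
// a stream spans several sstables) without losing the byte count.
public class ProgressByWriterId
{
    // writer id -> {bytes written, total bytes}
    private final Map<UUID, long[]> progress = new ConcurrentHashMap<>();
    // writer id -> current file name, shown in netstats
    private final Map<UUID, String> fileNames = new ConcurrentHashMap<>();

    public UUID newWriter(String initialFileName, long totalBytes)
    {
        UUID id = UUID.randomUUID();
        progress.put(id, new long[]{ 0, totalBytes });
        fileNames.put(id, initialFileName);
        return id;
    }

    // The writer may switch to a new sstable mid-stream; only the name changes.
    public void renameFile(UUID writerId, String newFileName)
    {
        fileNames.put(writerId, newFileName);
    }

    public void addBytes(UUID writerId, long bytes)
    {
        progress.get(writerId)[0] += bytes;
    }

    public long written(UUID writerId)
    {
        return progress.get(writerId)[0];
    }

    public String fileName(UUID writerId)
    {
        return fileNames.get(writerId);
    }
}
```

Keying on the id means a rename in the middle of a stream leaves the accumulated byte count intact, which is the property the old filename-keyed reporting lacked.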
http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-6696-11-testall
http://cassci.datastax.com/view/Dev/view/krummas/job/6696_dtest
[~philipthompson] could you have a look at the dtests/ccm changes as well so we
can merge them at the same time?
https://github.com/krummas/cassandra-dtest/commits/marcuse/6696 and
https://github.com/krummas/ccm/commits/multi-data-dirs
> Partition sstables by token range
> ---------------------------------
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
> Issue Type: Improvement
> Reporter: sankalp kohli
> Assignee: Marcus Eriksson
> Labels: compaction, correctness, dense-storage,
> jbod-aware-compaction, performance
> Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new
> empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for
> corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days.
> row=sankalp col=sankalp is written 20 days back and successfully went to all
> three nodes.
> Then a delete/tombstone was written successfully for the same row column 15
> days back.
> Since this tombstone is older than gc grace, it got purged on nodes A and B
> when it was compacted together with the actual data. So there is no trace of
> this row column in nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2.
> Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and was replaced with new empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp
> has come back to life.
> Now after replacing the drive we run repair. This data will be propagated to
> all nodes.
> Note: This is still a problem even if we run repair every gc grace.
>
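The quoted scenario can be simulated in a few lines. This is a purely illustrative sketch, not Cassandra code: each node holds entries per drive, compaction purges data together with a sufficiently old tombstone, and replacing a drive drops everything on it.

```java
import java.util.*;

// Illustrative simulation of the JBOD resurrection scenario above.
public class ResurrectionDemo
{
    static final int GC_GRACE_DAYS = 10;

    static class Node
    {
        // drive number -> entries on that drive ("data" and/or "tombstone")
        Map<Integer, Set<String>> drives = new HashMap<>();

        void write(int drive, String entry)
        {
            drives.computeIfAbsent(drive, d -> new HashSet<>()).add(entry);
        }

        boolean has(String e)
        {
            return drives.values().stream().anyMatch(s -> s.contains(e));
        }

        // Compacting data together with a tombstone older than gc grace purges both.
        void compact(int tombstoneAgeDays)
        {
            if (has("data") && has("tombstone") && tombstoneAgeDays > GC_GRACE_DAYS)
                drives.values().forEach(s -> { s.remove("data"); s.remove("tombstone"); });
        }

        // A bad drive is swapped for an empty one: everything on it is lost.
        void replaceDrive(int drive)
        {
            drives.put(drive, new HashSet<>());
        }

        // The row is visible if data exists and is not shadowed by a tombstone.
        boolean rowVisible()
        {
            return has("data") && !has("tombstone");
        }
    }

    public static List<Node> run()
    {
        Node a = new Node(), b = new Node(), c = new Node();
        List<Node> cluster = List.of(a, b, c);
        // Data written 20 days ago, tombstone 15 days ago; on A and B both
        // end up on the same drive, so compaction can see them together.
        for (Node n : List.of(a, b)) { n.write(1, "data"); n.write(1, "tombstone"); }
        // On C the data and the tombstone land on different drives.
        c.write(1, "data");
        c.write(2, "tombstone");
        // A and B compact: the 15-day-old tombstone exceeds gc grace, both purged.
        a.compact(15);
        b.compact(15);
        // C's drive2 goes bad and is replaced with an empty drive: tombstone lost.
        c.replaceDrive(2);
        // Repair propagates C's now-live data to every replica: deletion undone.
        for (Node n : cluster)
            if (c.rowVisible())
                n.write(1, "data");
        return cluster;
    }
}
```

After `run()` every node sees the deleted row again, which is exactly the correctness hole that partitioning sstables by token range (so a drive maps to whole token ranges rather than arbitrary fragments of data and tombstones) is meant to close.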
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)