[
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007060#comment-14007060
]
Marcus Eriksson edited comment on CASSANDRA-6696 at 5/23/14 11:36 AM:
----------------------------------------------------------------------
Just pushed a version to
https://github.com/krummas/cassandra/commits/marcuse/6696-4 - I'll spend some
more time writing tests, but I figure it is ready for feedback now at least.
* Flush to one sstable per disk:
** Split the total range into #disks parts
** Flush whole vnodes: if a vnode starts on a disk, it stays there. Note though
that if a vnode wraps around the token space, it will be split in two parts
that end up on different disks.
* SSTables flushed during startup will not get placed correctly since we don't
yet know the local ranges.
* LeveledCompaction needs to know which ranges we own, so the startup() call on
the CompactionStrategy has been moved out of the CFS constructor
* LCS:
** One manifest per vnode, with a global L0.
** L1 now aims to contain one sstable
** Same priorities as before: first STCS in L0, then compactions in L1+, and
last L0 -> L1.
** STCS in L0 will create big per-disk files, not per-vnode ones.
* STCS:
** We now have L0 and L1; L1 contains per-vnode sstables, but within a vnode's
sstables we make no non-overlap guarantees
** Compactions in L0 only include L0 sstables, and L1 compactions only include
L1 sstables; all compactions end up as per-vnode sstables in L1
** When we get 4 sstables of similar size in L0, we will compact those, and
create num_tokens L1 sstables.
** When one L1 vnode gets 4 sstables of similar size, it will compact those
together
** L0 -> L1 compactions are prioritized over L1 -> L1 ones (though these will
run in parallel)
* Introduces originalFirst to keep track of the original first key of the
sstable; we need this when figuring out which manifest the sstable belongs to
during replace(..).
* If we get a new ring version (i.e. we gain a new token or lose one), we only
reinitialize the LeveledManifestWrapper; this means we might have sstables that
start in one vnode but do not end in it.
* "nodetool rebalancedata" will iterate over all sstables and make sure they
are in the correct places.
* If a disk breaks or runs out of space, we will flush/compact to the remaining
disks.
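To illustrate the flush-to-one-sstable-per-disk idea above, here is a rough
sketch of splitting the token space into #disks parts and keeping a whole
vnode on the disk that owns its start token. All names here are hypothetical
illustrations, not code from the branch:

```java
public class DiskBoundaries {
    // Sketch only: split a toy token space [0, totalTokens) into #disks
    // equally sized parts; a vnode is assigned to the disk that owns its
    // start token, so the whole vnode stays on one disk.
    static int diskForToken(long startToken, long totalTokens, int disks) {
        long partSize = totalTokens / disks; // assume an even split for the sketch
        return (int) Math.min(disks - 1, startToken / partSize);
    }

    public static void main(String[] args) {
        long totalTokens = 100;
        int disks = 4;
        long[] vnodeStarts = {5, 30, 55, 80, 99}; // example vnode start tokens
        for (long start : vnodeStarts) {
            System.out.println("vnode starting at " + start + " -> disk "
                               + diskForToken(start, totalTokens, disks));
        }
    }
}
```

A vnode wrapping past the end of the token space would, as noted above, be
split into two parts that can land on different disks.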
> Drive replacement in JBOD can cause data to reappear.
> ------------------------------------------------------
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: sankalp kohli
> Assignee: Marcus Eriksson
> Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new
> empty one and repair is run.
> This can cause deleted data to come back in some cases. This is also true for
> corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace=10 days.
> row=sankalp col=sankalp was written 20 days ago and successfully went to all
> three nodes.
> Then a delete/tombstone was successfully written for the same row/column 15
> days ago.
> Since this tombstone is older than gc grace, it was compacted away on nodes A
> and B together with the actual data, so there is no trace of this row/column
> on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on
> drive2. Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and is replaced with a new empty drive.
> Due to the replacement, the tombstone is now gone, and row=sankalp
> col=sankalp has come back to life.
> Now, after replacing the drive, we run repair, and this data will be
> propagated to all nodes.
> Note: This is still a problem even if we run repair every gc grace.
>
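The resurrection scenario in the description can be sketched as a toy
timeline. This is purely illustrative (a simplified model, not Cassandra
code): compaction purges a tombstone, and the data it shadows, once the
tombstone is older than gc grace.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class TombstoneResurrection {
    static final int GC_GRACE_DAYS = 10;

    // Toy model: data and tombstones map key -> day written. Compaction
    // drops any tombstone older than gc grace together with the data it
    // shadows, as happens when both live in the same compaction.
    static void compact(Map<String, Integer> data,
                        Map<String, Integer> tombstones, int today) {
        Iterator<Map.Entry<String, Integer>> it = tombstones.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> t = it.next();
            if (today - t.getValue() > GC_GRACE_DAYS) {
                data.remove(t.getKey()); // shadowed data purged together with...
                it.remove();             // ...the expired tombstone
            }
        }
    }

    public static void main(String[] args) {
        int today = 20; // data written day 0 (20 days ago), tombstone day 5

        // Nodes A and B: data and tombstone compact together, both vanish.
        Map<String, Integer> dataA = new HashMap<>(Map.of("sankalp", 0));
        Map<String, Integer> tombA = new HashMap<>(Map.of("sankalp", 5));
        compact(dataA, tombA, today);
        System.out.println("node A has row: " + dataA.containsKey("sankalp")); // false

        // Node C: the tombstone's drive failed and was replaced before
        // compaction ran, so only the data survives -- and repair would
        // now propagate it back to A and B.
        Map<String, Integer> dataC = new HashMap<>(Map.of("sankalp", 0));
        Map<String, Integer> tombC = new HashMap<>(); // lost with the bad drive
        compact(dataC, tombC, today);
        System.out.println("node C has row: " + dataC.containsKey("sankalp")); // true
    }
}
```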
--
This message was sent by Atlassian JIRA
(v6.2#6252)