[
https://issues.apache.org/jira/browse/CASSANDRA-2872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067783#comment-13067783
]
Sylvain Lebresne commented on CASSANDRA-2872:
---------------------------------------------
bq. Remind me why simply making sstable generation global doesn't fix this?
Suppose you drop an index, shut down the node, then restart and recreate the
index. When the restart crawls the existing data files, the first available
generation could be the one of an sstable of the dropped index.
I guess there are two reasonably simple solutions:
# scan the (incremental) snapshot directories for the generation number too. If
we do that, I guess we don't even have to make the generation global as long as
we do this scanning each time a ColumnFamilyStore is created.
# make the generation number persistent in the system tables (again, no need to
make the number global for that).
I think I prefer the second solution because it's more general and feels more
elegant. But we would still have to scan the data dir and take the max(what we
found during scan, what's in the system table) in case people force-feed data
files that weren't created on that node (or the system tables are wiped).
That being said, I totally agree that the generation number doesn't have to be
partitioned if we don't want to. But I'm not sure it's a big deal either way.
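The max(scan, system table) fallback for solution #2 could be sketched like this (GenerationChooser and nextGeneration are hypothetical names for illustration, not the actual Cassandra code):

```java
import java.util.Collection;

// Hypothetical sketch of solution #2: when a ColumnFamilyStore is created,
// start the generation counter at the max of the value persisted in the
// system table and the highest generation found by scanning the data
// directory, so force-fed sstables or a wiped system table still yield a
// safe, unused generation.
public class GenerationChooser
{
    public static int nextGeneration(int persistedGeneration, Collection<Integer> scannedGenerations)
    {
        int maxScanned = 0;
        for (int g : scannedGenerations)
            maxScanned = Math.max(maxScanned, g);
        // Never reuse a generation that a dropped index's old sstables
        // (or an incremental snapshot of them) may still occupy.
        return Math.max(persistedGeneration, maxScanned) + 1;
    }
}
```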
> While dropping and recreating an index, incremental snapshotting can hang
> --------------------------------------------------------------------------
>
> Key: CASSANDRA-2872
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2872
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Affects Versions: 0.7.4
> Reporter: Sylvain Lebresne
> Priority: Minor
>
> When creating a hard link (at least with JNA), link() hangs if the target of the
> link already exists. In theory, we should not hit that situation
> because we use a new directory for each manual snapshot, and the generation
> number of the sstables should prevent this from happening with incremental
> snapshots.
> However, when you drop then recreate a secondary index, if the sstables are
> deleted after the drop and before the index is recreated, the recreated index's
> sstables will start again from generation 0. Thus, when we start backing them up
> incrementally, they will conflict with the sstables of the previously dropped
> index.
> First, we should check for the target's existence before calling link(), to at
> least avoid hanging. But then we must make sure that when we drop then
> recreate an index, we either don't name the sstables the same way or the
> incremental snapshot uses a different directory.
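The first point above (check the target before linking) could look like the following minimal sketch, using java.nio.file.Files as a stand-in for the JNA link() call; SnapshotLinker and its method name are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical helper (not the actual Cassandra code): check for the
// target's existence before linking, so a pre-existing target fails fast
// instead of hanging the way the raw JNA link() call does.
public class SnapshotLinker
{
    public static boolean createHardLink(Path source, Path target) throws IOException
    {
        if (Files.exists(target))
        {
            // The target already exists, e.g. an sstable of a previously
            // dropped index that reused the same generation: skip the link
            // rather than hang.
            return false;
        }
        Files.createLink(target, source);
        return true;
    }
}
```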
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira