[
https://issues.apache.org/jira/browse/CASSANDRA-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12799209#action_12799209
]
Jaakko Laine commented on CASSANDRA-620:
----------------------------------------
Some initial comments/questions (I'll have another look at this tomorrow):
(1) Do we need to have pending ranges per table? It should be enough to have
them per replication strategy. The same applies to the related methods in
StorageService (restoreReplicaCount, calculatePendingRanges, etc.)
(2) Is setPendingRanges atomic? Previously the whole data structure was
replaced in one assignment; now it is modified in place while clients might
still have handles to it.
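To illustrate the concern in (2): if writers mutate the shared map in place, a reader holding a reference can observe a half-updated view. The usual fix is to build a fresh structure and publish it with a single reference swap. This is only a sketch of that pattern, not the actual patch code; the class and field names here (PendingRanges, ranges) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of atomic publication of pending ranges.
// Readers always see either the old snapshot or the new one, never
// a partially modified map.
public class PendingRanges {
    // volatile makes the reference swap immediately visible to readers
    private volatile Map<String, String> ranges = new HashMap<>();

    // Readers get a stable snapshot; a later swap does not affect it.
    public Map<String, String> get() {
        return ranges;
    }

    // Writer builds a fresh copy and replaces the reference in one
    // assignment, instead of mutating the map readers may already hold.
    public void setPendingRanges(Map<String, String> updated) {
        ranges = new HashMap<>(updated);
    }

    public static void main(String[] args) {
        PendingRanges pr = new PendingRanges();
        Map<String, String> snapshot = pr.get();   // client-held handle
        Map<String, String> next = new HashMap<>();
        next.put("token-range-1", "node-a");
        pr.setPendingRanges(next);                 // atomic swap
        System.out.println(snapshot.size());       // old view stays empty: 0
        System.out.println(pr.get().size());       // new view: 1
    }
}
```

The trade-off is an extra copy per update, which is usually acceptable for metadata that changes only on ring events.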
(3) Who "owns" token metadata? StorageService hands the metadata to the
strategy, but later (in calculatePendingRanges) gets it back from the strategy.
ARS.getTokenMetadata seems to be called only by StorageService.
(4) Bootstrap sources should be recorded per table. If multiple tables are
streamed from the same source, the source will be removed after the first
table completes, so the node might start serving reads before it has finished
bootstrapping.
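A minimal sketch of the per-table bookkeeping suggested in (4): a source is forgotten only for the table that finished streaming from it, and the node counts as bootstrapped only once no table is still waiting on any source. The class and method names here (BootstrapSources, completed, bootstrapComplete) are invented for illustration, not taken from the patch.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical per-table tracking of bootstrap stream sources.
public class BootstrapSources {
    // table name -> sources still streaming data for that table
    private final Map<String, Set<String>> sourcesByTable = new HashMap<>();

    public void addSource(String table, String source) {
        sourcesByTable.computeIfAbsent(table, t -> new HashSet<>()).add(source);
    }

    // Called when streaming of one table from one source completes;
    // other tables streaming from the same source are unaffected.
    public void completed(String table, String source) {
        Set<String> sources = sourcesByTable.get(table);
        if (sources != null) {
            sources.remove(source);
            if (sources.isEmpty())
                sourcesByTable.remove(table);
        }
    }

    // Safe to serve reads only when every table has finished streaming.
    public boolean bootstrapComplete() {
        return sourcesByTable.isEmpty();
    }

    public static void main(String[] args) {
        BootstrapSources bs = new BootstrapSources();
        bs.addSource("Keyspace1", "node-a");
        bs.addSource("Keyspace2", "node-a");
        bs.completed("Keyspace1", "node-a");
        System.out.println(bs.bootstrapComplete()); // false: Keyspace2 pending
        bs.completed("Keyspace2", "node-a");
        System.out.println(bs.bootstrapComplete()); // true
    }
}
```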
> Add per-keyspace replication factor (possibly even replication strategy)
> ------------------------------------------------------------------------
>
> Key: CASSANDRA-620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-620
> Project: Cassandra
> Issue Type: New Feature
> Reporter: Jonathan Ellis
> Assignee: Gary Dusbabek
> Fix For: 0.9
>
> Attachments:
> 0001-push-replication-factor-and-strategy-into-table-exce.patch,
> 0002-cleaned-up-as-much-as-possible-before-dealing-with-r.patch,
> 0003-push-table-names-into-streaming-expose-TMD-in-ARS.patch,
> 0004-fix-non-compiling-tests.patch,
> 0005-introduce-table-into-pending-ranges-code.patch,
> 0006-added-additional-testing-keyspace.patch,
> 0007-modify-TestRingCache-to-make-it-easier-to-test-speci.patch,
> 0008-push-endpoint-snitch-into-keyspace-configuration.patch,
> 0009-Marked-a-few-AntiEntropyServices-methods-as-private-.patch
>
>
> (but partitioner may only be cluster-wide, still)
> not 100% sure this makes sense but it would allow maintaining system metadata
> in a replicated-across-entire-cluster keyspace (without ugly special casing),
> as well as making Cassandra more flexible as a shared resource for multiple
> apps