[ https://issues.apache.org/jira/browse/CASSANDRA-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12828614#action_12828614 ]

Jaakko Laine commented on CASSANDRA-620:
----------------------------------------

Yes, pending ranges is a range->list<InetAddress> mapping. The replication 
strategy dictates *which* nodes are in the list and in what order, whereas the 
replication factor dictates the *length* of the list. If we have two tables 
using the same replication strategy, it would be sufficient to calculate 
pending ranges once, for the maximum replication factor among those tables, 
since the pending ranges for a smaller factor will be a subset of those for 
the larger one. An example to illustrate this:

Suppose we have TableA and TableB using the same strategy, with replication 
factors 2 and 3 respectively. Suppose there are nodes NodeA, NodeB and NodeC 
in the cluster, and NodeD boots "behind" NodeC. In this situation the pending 
ranges for NodeD would be:

TableA: B-C, C-D
TableB: A-B, B-C, C-D

Notice that the pending ranges for TableA are a subset of the pending ranges 
for TableB. Instead of keeping pending ranges per table, it would be enough to 
keep them per replication strategy in use. Each table could then just pick the 
first replication-factor nodes from the beginning of the list.
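To make the subset relation concrete, here is a minimal sketch of the idea, 
assuming a simple rack-unaware placement (ranges ordered nearest-first from 
the booting node's token). Function and variable names are illustrative, not 
Cassandra's actual API:

```python
def ranges_pending_for(booting_token, ring_tokens, max_rf):
    """Ranges the booting node will take over, ordered nearest-first,
    so a table with replication factor rf uses the first rf entries.
    Sketch only: assumes simple ordered tokens, no rack awareness."""
    tokens = sorted(ring_tokens + [booting_token])
    i = tokens.index(booting_token)
    ranges = []
    for k in range(max_rf):
        # Walk backwards from the booting node's token, one range per
        # replica the node will serve.
        end = tokens[(i - k) % len(tokens)]
        start = tokens[(i - k - 1) % len(tokens)]
        ranges.append((start, end))
    return ranges

# Ring A < B < C, NodeD boots "behind" NodeC, max RF in use is 3:
pending = ranges_pending_for("D", ["A", "B", "C"], max_rf=3)
# pending == [("C", "D"), ("B", "C"), ("A", "B")]
# TableA (RF=2) uses pending[:2]; TableB (RF=3) uses all three,
# so TableA's pending ranges are a prefix (subset) of TableB's.
```

Computed once per strategy at the maximum factor, this list serves every 
table using that strategy, matching the TableA/TableB example above.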

Anyway, as said before, this is an optimization that can be done later if 
needed.

The patchset looks OK to me.


> Add per-keyspace replication factor (possibly even replication strategy)
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-620
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-620
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Jonathan Ellis
>            Assignee: Gary Dusbabek
>             Fix For: 0.6
>
>         Attachments: 
> 0001-push-replication-factor-and-strategy-into-table-exce.patch, 
> 0002-cleaned-up-as-much-as-possible-before-dealing-with-r.patch, 
> 0003-push-table-names-into-streaming-expose-TMD-in-ARS.patch, 
> 0004-fix-non-compiling-tests.-necessary-changes-in-test-c.patch, 
> 0005-introduce-table-into-pending-ranges-code.patch, 
> 0006-added-additional-testing-keyspace.patch, 
> 0007-modify-TestRingCache-to-make-it-easier-to-test-speci.patch, 
> 0008-push-endpoint-snitch-into-keyspace-configuration.patch, 
> 0009-make-TMD-private-in-ARS.patch, 
> 0010-fix-problems-with-bootstrapping.patch, 
> 0011-remove-replicas-from-ARS.patch, 
> 0012-ensure-that-unbootstrap-calls-onFinish-after-tables-.patch, 
> 0013-adjustments-for-new-clusterprobe-tool.patch, v1-patches.tgz, 
> v2-patches.tgz
>
>
> (but partitioner may only be cluster-wide, still)
> not 100% sure this makes sense but it would allow maintaining system metadata 
> in a replicated-across-entire-cluster keyspace (without ugly special casing), 
> as well as making Cassandra more flexible as a shared resource for multiple 
> apps

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
