[ https://issues.apache.org/jira/browse/CASSANDRA-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12799546#action_12799546 ]

Gary Dusbabek commented on CASSANDRA-620:
-----------------------------------------

Jaakko,

1) As long as the 'replica_' member of ARS can safely be treated as a 
maximum, there is no reason we can't limit the number of instances.  The 
fact that it is used to populate the result in 'getNaturalEndpoints' is what 
made me unsure.
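
For illustration, a minimal self-contained sketch of what treating it as a 
maximum would look like (hypothetical names throughout; this is not the 
actual ARS code):

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch only. It illustrates treating the configured replica
// count as an upper bound: a keyspace may request fewer endpoints, but never
// more than the strategy instance was built for.
class ReplicaSketch<E>
{
    private final int maxReplicas; // stands in for the 'replica_' member

    ReplicaSketch(int maxReplicas) { this.maxReplicas = maxReplicas; }

    List<E> naturalEndpoints(Iterator<E> ringOrder, int requested)
    {
        int count = Math.min(requested, maxReplicas);
        List<E> endpoints = new ArrayList<E>(count);
        // walk the ring order until we have enough endpoints or run out
        while (endpoints.size() < count && ringOrder.hasNext())
            endpoints.add(ringOrder.next());
        return endpoints;
    }
}
{code}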

2) No.  This could be addressed by making 'TMD.pendingRanges' a ConcurrentMap 
(preferred) or by synchronizing SS.calculatePendingRanges().  I suppose this 
doesn't address the fact that the contents could change while whoever called 
getPendingRanges() is using the data, but we had that problem before too.
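
Roughly what I have in mind for the ConcurrentMap route (names illustrative, 
and String used as a stand-in for the real range type):

{code:java}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative only: pending ranges keyed by table (keyspace) name.
// The map itself is safe for concurrent put/get, though as noted a caller
// can still observe a value that gets replaced after retrieval.
class PendingRangesSketch
{
    private final ConcurrentMap<String, Set<String>> pendingRanges =
        new ConcurrentHashMap<String, Set<String>>();

    void setPendingRanges(String table, Set<String> ranges)
    {
        // publish an immutable snapshot so readers never see a half-built set
        pendingRanges.put(table, Collections.unmodifiableSet(ranges));
    }

    Set<String> getPendingRanges(String table)
    {
        Set<String> ranges = pendingRanges.get(table);
        return ranges == null ? Collections.<String>emptySet() : ranges;
    }
}
{code}

Publishing an immutable snapshot on each write would at least give callers a 
consistent, if possibly stale, view of the ranges they retrieved.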

3) StorageService owns it.  I had a hard time following this at first too.  
Every reference to TMD can trace its roots back to the one created in SS.  TMD 
is *so close* to being a singleton.  I can't remember why I changed 
calculatePendingRanges() to retrieve it from the ARS instead of grabbing the SS 
member, though.
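
A simplified sketch of the ownership as I understand it (class shapes 
invented for illustration, not the real classes):

{code:java}
// One TokenMetadata lives in StorageService; everything else, including
// ARS, holds a reference back to that same instance -- a de facto singleton.
class TokenMetadataSketch { /* ring state elided */ }

class StorageServiceSketch
{
    static final StorageServiceSketch instance = new StorageServiceSketch();

    private final TokenMetadataSketch tokenMetadata = new TokenMetadataSketch();

    TokenMetadataSketch getTokenMetadata() { return tokenMetadata; }
}

class ReplicationStrategySketch
{
    // ARS does not create its own TMD; it is handed SS's instance, which is
    // why retrieving it from either place yields the same object.
    private final TokenMetadataSketch tokenMetadata;

    ReplicationStrategySketch(TokenMetadataSketch tmd) { this.tokenMetadata = tmd; }
}
{code}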

4) What if I change this so that the message becomes "send me bootstrap data 
for this list of tables" instead of "for each table, send me bootstrap data"?  
Then, as soon as a remote node is finished, it can send an indication and the 
bootstrapping node can remove the remote node from its bootstrap set.  It 
seems like that would solve CASSANDRA-673 at the same time, correct?
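
To make that concrete, a hypothetical sketch of the bookkeeping on the 
bootstrapping side (message transport elided, all names invented):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed flow: one request carries the full
// table list, and each source node sends a single "done" indication once it
// has streamed everything, letting us retire it from the bootstrap set.
class BootstrapTrackerSketch
{
    // sources we are still waiting on
    private final Set<String> pendingSources =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    void requestBootstrapData(String source, List<String> tables)
    {
        pendingSources.add(source);
        // transport elided; in real code this would be a single message
        // along the lines of "send me bootstrap data for these tables"
    }

    // called when a remote node indicates it has finished all tables
    void onSourceFinished(String source)
    {
        pendingSources.remove(source);
        if (pendingSources.isEmpty())
            finishBootstrap();
    }

    private void finishBootstrap() { /* join the ring, etc. (elided) */ }
}
{code}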

Thanks for the feedback!

> Add per-keyspace replication factor (possibly even replication strategy)
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-620
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-620
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Jonathan Ellis
>            Assignee: Gary Dusbabek
>             Fix For: 0.9
>
>         Attachments: 
> 0001-push-replication-factor-and-strategy-into-table-exce.patch, 
> 0002-cleaned-up-as-much-as-possible-before-dealing-with-r.patch, 
> 0003-push-table-names-into-streaming-expose-TMD-in-ARS.patch, 
> 0004-fix-non-compiling-tests.patch, 
> 0005-introduce-table-into-pending-ranges-code.patch, 
> 0006-added-additional-testing-keyspace.patch, 
> 0007-modify-TestRingCache-to-make-it-easier-to-test-speci.patch, 
> 0008-push-endpoint-snitch-into-keyspace-configuration.patch, 
> 0009-Marked-a-few-AntiEntropyServices-methods-as-private-.patch
>
>
> (but partitioner may only be cluster-wide, still)
> not 100% sure this makes sense but it would allow maintaining system metadata 
> in a replicated-across-entire-cluster keyspace (without ugly special casing), 
> as well as making Cassandra more flexible as a shared resource for multiple 
> apps

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
