[ https://issues.apache.org/jira/browse/CASSANDRA-195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12742883#action_12742883 ]

Jonathan Ellis commented on CASSANDRA-195:
------------------------------------------

Thanks!

> did this so that the op doesn't have to worry about re-starting correctly in 
> bootstrap mode if the node died during bootstrap and got restarted.

I'm really -1 on trying to be clever and second-guessing the op.  It just leads 
to confusion, e.g. when we had the CF definitions stored locally as well as in 
the xml -- it seemed like adding new CFs to the xml should Just Work but it 
didn't because Cassandra was outsmarting you.

+ if (StorageService.instance().isBootstrapMode())
+ {
+     logger_.error("Cannot bootstrap another node: I'm in bootstrap mode myself!");
+     return;
+ }

still needs to be replaced w/ assert.
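Roughly, the log-and-return guard would collapse to a single assert (sketch only, not compiled against the Cassandra tree; the message text is made up). A minimal standalone illustration of the guard-vs-assert pattern:

```java
// Illustrative only: BootstrapGuard and its fields are hypothetical names,
// standing in for StorageService.instance().isBootstrapMode().
public class BootstrapGuard {
    private final boolean bootstrapMode;

    public BootstrapGuard(boolean bootstrapMode) {
        this.bootstrapMode = bootstrapMode;
    }

    /** Callers are responsible for never invoking this in bootstrap mode. */
    public String startBootstrap(String newNode) {
        // Fail fast instead of logging an error and silently returning:
        // reaching this method in bootstrap mode is a caller bug.
        assert !bootstrapMode : "Cannot bootstrap another node: already in bootstrap mode";
        return "bootstrapping " + newNode;
    }

    public static void main(String[] args) {
        BootstrapGuard guard = new BootstrapGuard(false);
        System.out.println(guard.startBootstrap("10.0.0.2"));
    }
}
```

The point of the assert is that an operator error becomes loud and immediate (with `-ea`) rather than a silently ignored request.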

+    public static String rename(String tmpFilename)

why move this to SST and make it public?  SSTW is the only user; it should stay 
there, private.

+    public static synchronized SSTableReader renameAndOpen(String dataFileName) throws IOException

I don't think this needs to be synchronized -- calling it on different args 
doesn't need it, and calling it twice on the same args is erroneous.

+        boolean bootstrap = false;
+        if (bs != null && bs.contains("true"))
+            bootstrap = true;

better: 
boolean bootstrap = bs != null && bs.contains("true");

     public StorageService()

should be removed

in the endpoint-finding code:
+                       if (endPoint.equals(StorageService.getLocalStorageEndPoint()) && !isBootstrapMode)

the extra check should be unnecessary, since we shouldn't be looking up 
endpoints at all in bootstrap mode, right?

-                       if ( StorageService.instance().isInSameDataCenter(endpoints[j]) && FailureDetector.instance()
+                       if ( StorageService.instance().isInSameDataCenter(endpoints[j]) && FailureDetector.instance()
+                               && !tokenMetadata_.isBootstrapping(endpoints[j]))

I don't think this is quite right.  Introducing the new node into the right 
place on the ring, but then special-casing it so it doesn't get used, seems 
problematic.  (What if you only have a replication factor of one?  Then the 
data will just disappear until bootstrap is complete.)

Can we instead not introduce the bootstrapping node into the ring until it is 
done?  And have the nodes providing data to it echo writes in the bootstrap 
range over?
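To make the suggestion concrete, here is a toy model (all names hypothetical, nothing here matches actual Cassandra classes): the bootstrapping node is kept out of the read ring entirely, but writes that fall in the range it is claiming are echoed to it, so nothing is lost while it streams data.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch of "don't surface the bootstrapping node for reads,
// but echo writes in its pending range over to it".
public class PendingRangeSketch {
    private final List<String> ring = new ArrayList<>();               // fully-joined nodes only
    private final Map<String, Set<Integer>> pending = new HashMap<>(); // node -> tokens it is claiming
    private final Map<String, List<String>> received = new HashMap<>();

    void join(String node) { ring.add(node); }

    void startBootstrap(String node, Set<Integer> range) { pending.put(node, range); }

    /** Only once streaming is complete does the node enter the read ring. */
    void finishBootstrap(String node) {
        pending.remove(node);
        ring.add(node);
    }

    /** Reads never see a node that is still bootstrapping. */
    List<String> readEndpoints() { return ring; }

    /** Writes go to the live owner AND are echoed to any bootstrapping claimant. */
    void write(int token, String value) {
        String owner = ring.get(token % ring.size());
        received.computeIfAbsent(owner, k -> new ArrayList<>()).add(value);
        for (Map.Entry<String, Set<Integer>> e : pending.entrySet())
            if (e.getValue().contains(token))
                received.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(value);
    }

    List<String> writesSeenBy(String node) {
        return received.getOrDefault(node, new ArrayList<>());
    }
}
```

With replication factor one this avoids the disappearing-data problem above: the old owner keeps serving reads and taking writes for the range until the hand-off finishes.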


> Improve bootstrap algorithm
> ---------------------------
>
>                 Key: CASSANDRA-195
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-195
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>         Environment: all
>            Reporter: Sandeep Tata
>            Assignee: Sandeep Tata
>             Fix For: 0.5
>
>         Attachments: 195-v1.patch, 195-v2.patch, 195-v3.patch
>
>
> When you add a node to an existing cluster and the map gets updated, the new 
> node may respond to read requests by saying it doesn't have any of the data 
> until it gets the data from the node(s) that previously owned this range (the 
> load-balancing code, when working properly, can take care of this). While this 
> behaviour is compatible with eventual consistency, it would be much 
> friendlier for the new node not to "surface" in the EndPoint maps for reads 
> until it has transferred the data over from the old nodes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
