[ https://issues.apache.org/jira/browse/CASSANDRA-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17420759#comment-17420759 ]

Brandon Williams commented on CASSANDRA-16998:
----------------------------------------------

Resolving as duplicate, but I think -Dcassandra.skip_schema_check=true should 
get around this.
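
For anyone hitting this on 3.11.10, a minimal sketch of how that workaround and
the replacement flag would typically be passed together on the replacement node
(the conf/jvm.options location is an assumption about your install; adjust as
needed, and use the dead node's actual IP):

    # Sketch only: JVM properties to set on the replacement node before its
    # first start; remove them once the node has finished joining.
    -Dcassandra.replace_address=10.10.4.124
    -Dcassandra.skip_schema_check=true

The same properties can instead be appended to JVM_OPTS in cassandra-env.sh, if
that is how your install passes JVM flags.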

> replace_address does not work in 3.11.10
> ----------------------------------------
>
>                 Key: CASSANDRA-16998
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-16998
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sean Fulton
>            Priority: Normal
>
> We have a 30-node setup with four DCs. In one DC we had a failed node 
> (cass04). We built a new node with the same version of Cassandra, the same 
> rackdc as the failed node, and the same IP as the failed node, and added 
> replace_address=<ip of cass04>.
> The node got to the joining state, then exited with an error saying it could 
> not contact any seeds. All of the seed nodes had the following in their logs:
> WARN [MigrationStage:1] 2021-09-27 09:46:34,806 MigrationCoordinator.java:426 
> - Can't send schema pull request: node /10.10.4.124 is down.
> I watched the failure detector on the seed nodes, and it went to zero when the 
> new cass04 started coming up, so they knew it was up. My guess is they were 
> refusing to send because gossip said cass04 was down.
> I then tried giving the replacement node a different IP, still passing 
> replace_address with the IP of the failed node, and the replacement node kept 
> complaining that it could not get the schema from the failed node. It seems 
> this has been fixed in 3.11.11.
> So in this situation, what is the best way to replace a failed node in 
> 3.11.10? Is nodetool removenode of the dead node the right approach?
>  
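
On the removenode question quoted above, a hedged sketch of that alternative
(the host ID below is a placeholder; read the real one from nodetool status on
a live node):

    # Run from any live node in the cluster.
    nodetool status                 # note the Host ID shown for the dead cass04
    nodetool removenode <host-id>   # placeholder; substitute the dead node's Host ID

After the dead node is removed from the ring, the new machine would join as a
regular (non-replacement) node rather than via replace_address.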


