[ 
https://issues.apache.org/jira/browse/IGNITE-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-15493:
-------------------------------------
    Issue Type: Task  (was: Improvement)

> Need to clarify changePeers behavior to support the case with only one 
> replica which should be moved during rebalance
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: IGNITE-15493
>                 URL: https://issues.apache.org/jira/browse/IGNITE-15493
>             Project: Ignite
>          Issue Type: Task
>            Reporter: Vyacheslav Koptilin
>            Priority: Major
>              Labels: ignite-3
>
> Need to clarify the behavior of the changePeers (IGNITE-15288) method in the 
> case where a raft group contains only one raft node and that node should be 
> moved to a new Ignite node. 
> h4. Upd 1
> The following open points should be clarified:
>  * Is it possible to change peers for the case when the old and new sets of 
> raft nodes do not intersect? As Kirill Gusakov mentioned, it seems that this 
> is possible; see ITJRaftCounterServerTest#testRebalance.
>  * When does changePeers() return to the client? We assume that it returns 
> after being applied on both the old and new raft group topologies. It is also 
> expected that changePeers is a raft command and, like any other raft command, 
> has an index and will be applied after commands with a lower index, which 
> effectively means that data rebalance to the majority of nodes within the 
> new topology will be finished before change peers is applied. Let's check 
> whether this is true:
>  ** Let’s check whether dataRebalance is a raft command that works just like 
> any other raft command and does not allow index gaps.
>  ** Let’s check that snapshot installation works the same way as log-based 
> rebalance with respect to advancing the applied index.
>  ** If all of the above is true, how is it possible to have a changePeers 
> invocation that lasts for the “new majority time”, which may take hours or 
> even days?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
