Hello!

While the node was down, the partitions that it previously owned had their
data updated.

At this point we only have two options:
- Throw out the existing partitions and rebalance them from scratch. AFAIK
this path involves writing through the WAL, so it will take some time. I have
heard that if you wipe the node's persistence first, it won't use the WAL
during rebalancing, which should help a lot. However, I'm not confident here.
- Use historical rebalance, where the node will replay other nodes'
WALs to get its partitions up to speed. This should be pretty fast, at least
if the rate of change in the cluster is low. However, as far as I know it is
only used under specific circumstances; maybe you didn't get lucky here.
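Whether the second option kicks in depends, among other things, on how much WAL history the surviving nodes still have and on a partition-size threshold. A rough sketch of the relevant knobs, assuming Ignite 2.x (please verify the property names against your version's documentation before relying on them):

```shell
# Lower the partition-size threshold (in entries) below which Ignite prefers
# a full rebalance over historical (WAL-based) rebalance. Smaller value =
# historical rebalance considered for smaller partitions.
JVM_OPTS="$JVM_OPTS -DIGNITE_PDS_WAL_REBALANCE_THRESHOLD=1000"

# Keep more WAL history around for delta rebalance by allowing a larger WAL
# archive. In Spring XML on DataStorageConfiguration this would look like:
#   <property name="maxWalArchiveSize" value="#{4L * 1024 * 1024 * 1024}"/>
echo "Restart the node with these JVM options applied."
```

Both values are illustrative, not recommendations; the right numbers depend on your data volume and churn rate.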

Regards,
-- 
Ilya Kasnacheev


Thu, 14 Jan 2021 at 20:02, maxi628 <[email protected]>:

> Sorry, I'm attaching the log here: ignite_eviction.log
> <http://apache-ignite-users.70518.x6.nabble.com/file/t3058/ignite_eviction.log>
>
>
> I've read https://issues.apache.org/jira/browse/IGNITE-11974 and the thing
> is, this isn't an infinite loop.
> The remainingPartsToEvict counter in the log keeps going down until it
> reaches 0, and that's when we consider the node completely up.
>
> My question is: is it expected for a node to try to rebalance if it only
> went down for 2 minutes while being part of the baseline topology, with
> persistence enabled?
> All caches are Partitioned with 2 backups, and only 1 node is being
> restarted at a time.
> So shouldn't the other nodes, which hold backups of this node's primary
> partitions, cover for it until it boots up again?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
