Hello Jason,

1) Yes, each cache entry is placed in a partition based on the hash code of its key (roughly key.hashCode() % partitions).
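For illustration, here is a minimal sketch of that key-to-partition mapping. It assumes a simple modulo scheme as described above; Ignite's real affinity function is more involved, and the class and method names here are hypothetical:

```java
// Hypothetical sketch: mapping a key's hash code to a partition.
// Assumes simple modulo mapping; Ignite's actual RendezvousAffinityFunction
// does more (e.g. it can use a bit mask when the partition count is a power of 2).
public class PartitionDemo {

    // Mask with Integer.MAX_VALUE to get a non-negative value
    // (Math.abs alone is unsafe for Integer.MIN_VALUE).
    static int partitionFor(Object key, int partitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % partitions;
    }

    public static void main(String[] args) {
        int parts = 1024; // RendezvousAffinityFunction's default partition count
        System.out.println("someKey -> partition " + partitionFor("someKey", parts));
    }
}
```

With a uniform hash function, keys spread roughly evenly over the partitions, which is what makes the per-node balance in point 2 possible.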
2) Rebalancing of partitions is triggered by a topology change, i.e. when a node leaves or joins the cluster. RendezvousAffinityFunction assigns roughly the same number of partitions to each node, and it also keeps the new partition distribution as close as possible to the previous one, so that as little data as possible has to be moved on a topology change. Assuming the key hash codes are uniformly distributed, each node therefore holds about the same share of the data.

On Mon, Jul 25, 2016 at 8:41 AM, Jason <[email protected]> wrote:

> hi Ignite team,
>
> 1. Ignite does the data balance based on the cache partitions' balance not
> the data, right? E.g. if there's data skew in some keys, how to handle?
>
> 2. RendezvousAffinityFunction cannot ensure all the nodes have the
> completely same partitions and this should be an advantage to reduce the
> impact of network fluctuations, such as Node leave/join, but how to control
> the unbalance rate of all the nodes, say partition # in the node with max
> partitions / that of min? If the max node reaches the memory limit, will it
> do re-balance? Is there any config to control this, say 10%, 20%?
>
> Thanks,
> -Jason
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-control-the-data-unbalance-among-all-the-server-nodes-tp6502.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.

--
Vladislav Pyatkov
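P.S. If you want to experiment with the partition count yourself, it can be set on the affinity function in the cache configuration. A configuration sketch, assuming the standard Ignite API (the cache name and count here are just examples):

```java
// Hypothetical configuration fragment: customizing the partition count.
// RendezvousAffinityFunction(excludeNeighbors, partitions); the default
// partition count is 1024.
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> cacheCfg =
    new CacheConfiguration<>("myCache");

// More partitions give finer-grained balancing across nodes,
// at the cost of slightly more bookkeeping per partition.
cacheCfg.setAffinity(new RendezvousAffinityFunction(false, 512));
```

A larger partition count reduces the relative imbalance between nodes, since each node's share changes in smaller increments when the topology changes.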
