Thanks Stan and Ivan,

We did not use Native persistence, and the Ignite version is 2.4.0.

We will try both of those suggestions to see whether we can find something. From more detailed logging we can see that the nodes do in fact find each other. If I check the assignment manually with RendezvousAffinityFunction, it seems to work:

// List the sibling nodes that match the node filter.
ClusterGroup cg = ignite.cluster().forPredicate(filter);
final List<ClusterNode> nodes = new ArrayList<>(cg.nodes());
nodes.forEach(it -> logger.warn("[BIZ] SIBLING_NODES {},{},{},{}",
    it.attribute("TH.IG.INS.NAME"), it.addresses(), it.isLocal(), it.consistentId()));

// Manually compute the partition-to-node assignment (zero backups).
final RendezvousAffinityFunction function =
    new RendezvousAffinityFunction().setPartitions(partitionSize);
for (int i = 0; i < partitionSize; i++) {
    List<ClusterNode> partAssignment = function.assignPartition(i, nodes, 0, null);
    logger.warn("[BIZ] PRIMARY_NODE {},{},{},{}", i,
        partAssignment.get(0).isLocal(),
        partAssignment.get(0).attribute("TH.IG.INS.NAME"),
        partAssignment.get(0).consistentId());
}

SIBLING_NODES BOOK-FX-A-Instance1,[10.30.91.137],true,10.30.91.137:47500
SIBLING_NODES BOOK-FX-B-Instance1,[10.25.46.87],false,10.25.46.87:47500

PRIMARY_NODE 0,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 1,[10.25.46.87],BOOK-FX-B-Instance1,10.25.46.87:47500
PRIMARY_NODE 2,[10.25.46.87],BOOK-FX-B-Instance1,10.25.46.87:47500
PRIMARY_NODE 3,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 4,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 5,[10.25.46.87],BOOK-FX-B-Instance1,10.25.46.87:47500
PRIMARY_NODE 6,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 7,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 8,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 9,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 10,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 11,[10.25.46.87],BOOK-FX-B-Instance1,10.25.46.87:47500
PRIMARY_NODE 12,[10.25.46.87],BOOK-FX-B-Instance1,10.25.46.87:47500
PRIMARY_NODE 13,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 14,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
PRIMARY_NODE 15,[10.30.91.137],BOOK-FX-A-Instance1,10.30.91.137:47500
REGULAR_IGNITE_MONITOR_NORMAL BK_DATA_CACHE:16/16:[0, 1, 2, 3, 4, 5, 6, 7, 8, 
9, 10, 11, 12, 13, 14, 15]
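As a cross-check (a minimal sketch, assuming the cache is named BK_DATA_CACHE as in the log line above), the actual cluster-side assignment can also be queried through the Affinity API instead of being recomputed manually:

```java
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

// Ask Ignite for the real, current partition mapping of the cache,
// rather than recomputing it with a fresh RendezvousAffinityFunction.
Affinity<Object> aff = ignite.affinity("BK_DATA_CACHE");
for (int p = 0; p < aff.partitions(); p++) {
    ClusterNode primary = aff.mapPartitionToNode(p);
    logger.warn("[BIZ] ACTUAL_PRIMARY {},{}", p, primary.consistentId());
}
```

If this mapping differs from the manual computation, the cluster has not rebalanced to the current topology yet.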


Regards
Aaron


Aaron.Kuai
 
From: Stanislav Lukyanov
Date: 2018-03-22 19:59
To: user@ignite.apache.org
Subject: RE: Is the partition cache re-balance at once a new Node join/left?
Hi Aaron,
 
To add to what Ivan said about baseline topology, there is a general setting 
for the delay between a topology change and the start of rebalancing: 
CacheConfiguration.rebalanceDelay.
See Javadoc for details: 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#getRebalanceDelay--
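 
For illustration (a minimal sketch; the cache name and delay value are placeholders, not from this thread), the setting is applied on the cache configuration:

```java
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("ExampleCache");
// Wait 10 seconds after a topology change before rebalancing starts.
// A value of -1 disables automatic rebalancing entirely.
ccfg.setRebalanceDelay(10_000L);
```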
 
Thanks,
Stan
 
From: Ivan Rakov
Sent: March 22, 2018, 14:12
To: user@ignite.apache.org
Subject: Re: Is the partition cache re-balance at once a new Node join/left?
 
Hi Aaron,
Which version of Ignite do you use? Is Ignite Native Persistence enabled on 
your node?
Since 2.4, in persistent mode partitions are mapped to nodes according to 
Baseline Topology. You may need to set new Baseline Topology of your new set of 
nodes in order to trigger rebalancing. 
Read more: https://apacheignite.readme.io/docs/cluster-activation
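A minimal sketch of resetting the baseline topology to the current set of server nodes (only relevant when native persistence is enabled; `ignite` is the running instance):

```java
// Activate the cluster (required once for persistent clusters), then
// set the baseline to all server nodes currently alive in the topology.
ignite.cluster().active(true);
ignite.cluster().setBaselineTopology(ignite.cluster().forServers().nodes());
```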
Best Regards,
Ivan Rakov
On 22.03.2018 14:02, aa...@tophold.com wrote:
Hi all, 
 
We have a partitioned cache with backups set to zero, but we found that 
sometimes, even after a new node joins, the cache's partitions are not 
rebalanced. 
 
One of the nodes may hold all the primary partitions, while the other may hold 
none.
 
So does rebalancing only happen under a specific condition? The cache is configured as:
 
<bean class="org.apache.ignite.configuration.CacheConfiguration" id="ExampleCache">
    <property name="name" value="ExampleCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="atomicityMode" value="ATOMIC"/>

    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <property name="partitions" value="32"/>
        </bean>
    </property>
    <property name="nodeFilter">
        <bean class="org.apache.ignite.util.AttributeNodeFilter">
            <constructor-arg name="attrName" value="Domain"/>
            <constructor-arg name="attrVal" value="Product"/>
        </bean>
    </property>
</bean>


There are two nodes with the attribute Domain set to Product. After Ignite 
started, a background thread on each node runs the following check every minute:
 
ignite.affinity("ExampleCache").primaryPartitions(ignite.cluster().localNode())

We found that one of the nodes always holds all 32 partitions, and rebalancing 
never happens. If we use this filter to scan the cluster:

ignite.cluster().forPredicate(new org.apache.ignite.util.AttributeNodeFilter("Domain", "Product"))

we can see that there are two nodes there. This cache only holds a few hundred 
entries. Is there any configuration we may have missed? 
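 
For reference (a minimal sketch; the attribute values mirror the configuration above), the Domain attribute that the node filter matches on is set as a user attribute on each node:

```java
import java.util.Collections;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
// Nodes carrying Domain=Product pass the AttributeNodeFilter above
// and are therefore eligible to hold partitions of ExampleCache.
cfg.setUserAttributes(Collections.singletonMap("Domain", "Product"));
```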
 
 
Regards
Aaron
Aaron.Kuai
 
 
