alex-plekhanov commented on a change in pull request #8668:
URL: https://github.com/apache/ignite/pull/8668#discussion_r566158049



##########
File path: modules/core/src/main/java/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeColocatedBackupFilter.java
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.cache.affinity.rendezvous;
+
+import java.util.List;
+import java.util.Objects;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.lang.IgniteBiPredicate;
+
+/**
+ * This class can be used as a {@link RendezvousAffinityFunction#affinityBackupFilter } to create
+ * cache templates in Spring that force each partition's primary and backup to be co-located on nodes with the same
+ * attribute value.
+ * <p>
+ *
+ * Co-location of partition copies can be helpful for grouping nodes into cells when a fixed baseline topology is
+ * used. If all copies of each partition are located inside only one cell, then when {@code backup + 1} nodes leave
+ * the cluster, data will be lost only if all leaving nodes belong to the same cell. Without co-location of partition
+ * copies within a cell, data will most probably be lost if any {@code backup + 1} nodes leave the cluster.
+ *
+ * Note: A baseline topology change can lead to inter-cell partition migration, i.e. rebalancing can affect all
+ * copies of some partitions even if only one node was changed in the baseline topology.
+ * <p>
+ *
+ * This implementation will discard backups rather than place copies on nodes with different attribute values. This
+ * avoids trying to cram more data onto remaining nodes when some have failed.
+ * <p>
+ * A node attribute to compare is provided on construction.
+ *
+ * Note: All cluster nodes, on startup, automatically register all the environment and system properties as node
+ * attributes.
+ *
+ * Note: Node attributes are persisted in the baseline topology at the time of a baseline topology change. If the
+ * co-location attribute of some node was updated, but the baseline topology wasn't changed, the outdated attribute
+ * value can be used by the backup filter after this node leaves the cluster. To avoid this, the baseline topology
+ * should be updated after changing the co-location attribute.
+ * <p>
+ * This class is constructed with a node attribute name, and a candidate node will be rejected if previously selected
+ * nodes for a partition have a different value for this attribute than the candidate node.
+ * <h2 class="header">Spring Example</h2>
+ * Create a partitioned cache template with 1 backup, where the backup will be placed in the same cell
+ * as the primary. Note: This example requires that the environment variable "CELL" be set appropriately on
+ * each node via some means external to Ignite.
+ * <pre name="code" class="xml">
+ * &lt;property name="cacheConfiguration"&gt;
+ *     &lt;list&gt;
+ *         &lt;bean id="cache-template-bean" abstract="true" class="org.apache.ignite.configuration.CacheConfiguration"&gt;
+ *             &lt;property name="name" value="JobcaseDefaultCacheConfig*"/&gt;
+ *             &lt;property name="cacheMode" value="PARTITIONED" /&gt;
+ *             &lt;property name="backups" value="1" /&gt;
+ *             &lt;property name="affinity"&gt;
+ *                 &lt;bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction"&gt;
+ *                     &lt;property name="affinityBackupFilter"&gt;
+ *                         &lt;bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeColocatedBackupFilter"&gt;
+ *                             &lt;!-- Backups must go to the same CELL as primary --&gt;
+ *                             &lt;constructor-arg value="CELL" /&gt;
+ *                         &lt;/bean&gt;
+ *                     &lt;/property&gt;
+ *                 &lt;/bean&gt;
+ *             &lt;/property&gt;
+ *         &lt;/bean&gt;
+ *     &lt;/list&gt;
+ * &lt;/property&gt;
+ * </pre>
+ * <p>
+ */
+public class ClusterNodeAttributeColocatedBackupFilter implements IgniteBiPredicate<ClusterNode, List<ClusterNode>> {
+    /** */
+    private static final long serialVersionUID = 1L;
+
+    /** Attribute name. */
+    private final String attrName;
+
+    /**
+     * @param attrName The attribute name for the attribute to compare.
+     */
+    public ClusterNodeAttributeColocatedBackupFilter(String attrName) {
+        this.attrName = attrName;
+    }
+
+    /**
+     * Defines a predicate which returns {@code true} if a node is acceptable for a backup
+     * or {@code false} otherwise. An acceptable node is one where its attribute value
+     * is an exact match with previously selected nodes.  If an attribute does not

Review comment:
       Of course, we should note in the javadoc that such a misconfiguration must be avoided, but I think a
delayed cluster failure is even worse than ruining the cell. There is no need to reload the data when data
placement is broken: the data will simply be rebalanced after the configuration is fixed. AFAIK, a node joining
without cluster deactivation and a baseline change is a common case; in that case there is no affinity
recalculation and no node failure from the failure handler, but nodes will fail when the coordinator suddenly
changes. A coordinator failure is unpredictable (it can be a hardware or software failure), and a whole-cluster
failure caused by a single node failure is highly undesirable. There are other cases as well. For example, if no
attribute was defined for some nodes in the baseline but it was defined for the online nodes, then when such
nodes leave the grid and the coordinator changes, the whole cluster will fail again. So, if this happens during
high-load hours, I think it's better to have unexpected rebalancing than an unexpected cluster failure. WDYT?
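
For reference, here is a rough programmatic sketch of the same co-location setup as the Spring XML template in
the javadoc. The cache name "cellAwareCache" and the attribute value "cell-1" are made-up placeholders for
illustration; only the ClusterNodeAttributeColocatedBackupFilter constructor and the standard Ignite
configuration setters are assumed:

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeColocatedBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CellColocationExample {
    public static void main(String[] args) {
        // Affinity function that keeps all copies of a partition inside one cell,
        // identified by the "CELL" node attribute.
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setAffinityBackupFilter(new ClusterNodeAttributeColocatedBackupFilter("CELL"));

        // Hypothetical cache name, used only for this example.
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("cellAwareCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(1);
        ccfg.setAffinity(aff);

        // The co-location attribute is an ordinary user attribute and must be set on every
        // node; here each node started with this configuration reports CELL=cell-1.
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setUserAttributes(Collections.singletonMap("CELL", "cell-1"));
        cfg.setCacheConfiguration(ccfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cache("cellAwareCache").put(1, "value");
        }
    }
}
```

As the javadoc above notes, the filter discards backups rather than placing them on nodes with a different
attribute value, so every node that should belong to the same cell has to report the same attribute value.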




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

