alievmirza commented on code in PR #6651:
URL: https://github.com/apache/ignite-3/pull/6651#discussion_r2410740499


##########
modules/distribution-zones/src/main/java/org/apache/ignite/internal/distributionzones/DataNodesManager.java:
##########
@@ -258,7 +280,17 @@ CompletableFuture<Void> startAsync(
                         onScaleDownTimerChange(zone, scaleDownTimer);
                         restorePartitionResetTimer(zone.id(), scaleDownTimer, recoveryRevision);
                     }
-                });
+                })
+                .thenCompose(unused -> allOf(legacyInitFutures.stream().map(IgniteBiTuple::getValue).collect(toList()))
+                        .handle((v, e) -> {
+                            if (e != null) {
+                                LOG.error("Could not recover legacy data nodes for zone.", e);

Review Comment:
   What should the user do in that case? Should the Failure Handler be called? Could the system work properly in that case?
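If the intent is that a failed legacy recovery is critical, the error branch could route to the failure handler instead of only logging. A minimal sketch; the `FailureHandler` interface below is a stand-in, not Ignite's actual failure-processing API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class RecoveryFailureSketch {
    /** Stand-in for the node's failure handler; the real Ignite type differs. */
    public interface FailureHandler {
        void onCriticalError(Throwable t);
    }

    /**
     * Waits for all legacy-recovery futures and routes any failure to the
     * failure handler instead of only logging it, so the node can react
     * (e.g. stop, or refuse to serve data-node requests for the zone).
     */
    public static CompletableFuture<Void> recoverAll(
            List<CompletableFuture<?>> legacyInitFutures, FailureHandler handler) {
        return CompletableFuture
                .allOf(legacyInitFutures.toArray(new CompletableFuture<?>[0]))
                .handle((v, e) -> {
                    if (e != null) {
                        handler.onCriticalError(e);
                    }
                    return null;
                });
    }

    public static void main(String[] args) {
        CompletableFuture<?> failed =
                CompletableFuture.failedFuture(new IllegalStateException("legacy data nodes unreadable"));
        recoverAll(List.of(failed), t -> System.out.println("failure handler invoked: " + t)).join();
    }
}
```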



##########
modules/distribution-zones/src/main/java/org/apache/ignite/internal/distributionzones/DataNodesManager.java:
##########
@@ -223,21 +234,32 @@ CompletableFuture<Void> startAsync(
             allKeys.add(zoneScaleUpTimerKey(zone.id()));
             allKeys.add(zoneScaleDownTimerKey(zone.id()));
             allKeys.add(zonePartitionResetTimerKey(zone.id()));
+            allKeys.add(zoneDataNodesKey(zone.id()));
 
             descriptors.put(zone.id(), zone);
         }
 
+        List<IgniteBiTuple<CatalogZoneDescriptor, CompletableFuture<?>>> legacyInitFutures = new ArrayList<>();

Review Comment:
   Why do you need a tuple here if you only use `getValue`?
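Since only the future half of the tuple is ever read, a plain list of futures would do. A minimal sketch of the simplification (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class LegacyInitFuturesSketch {
    /**
     * No tuple needed: the CatalogZoneDescriptor half was never read back,
     * so a plain List<CompletableFuture<?>> carries everything allOf() requires.
     */
    public static CompletableFuture<Void> awaitLegacyInit(List<CompletableFuture<?>> legacyInitFutures) {
        return CompletableFuture.allOf(legacyInitFutures.toArray(new CompletableFuture<?>[0]));
    }

    public static void main(String[] args) {
        List<CompletableFuture<?>> futures = new ArrayList<>();
        futures.add(CompletableFuture.completedFuture(1));
        futures.add(CompletableFuture.completedFuture(2));
        awaitLegacyInit(futures).join();
        System.out.println("all legacy init futures completed");
    }
}
```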



##########
modules/distribution-zones/src/main/java/org/apache/ignite/internal/distributionzones/DistributionZoneManager.java:
##########
@@ -378,6 +379,14 @@ public CompletableFuture<Set<String>> dataNodes(HybridTimestamp timestamp, int c
         return dataNodesManager.dataNodes(zoneId, timestamp, catalogVersion);
     }
 
+    public static Set<Node> dataNodes(Map<Node, Integer> dataNodesMap) {
+        return dataNodesMap.entrySet().stream().filter(e -> e.getValue() > 0).map(Map.Entry::getKey).collect(toSet());
+    }
+
+    public static Set<Node> parseDataNodes(byte[] dataNodesBytes) {

Review Comment:
   This is suspicious: this method is not used anywhere.
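For context, the sibling `dataNodes(Map)` helper above keeps only nodes whose counter is positive. A self-contained sketch of that behavior, using a stand-in `Node` record instead of Ignite's class:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class DataNodesFilterSketch {
    /** Stand-in for Ignite's Node; only identity matters for this sketch. */
    public record Node(String name) {}

    /** Mirrors the dataNodes(Map) helper: keep only nodes with counter > 0. */
    public static Set<Node> dataNodes(Map<Node, Integer> dataNodesMap) {
        return dataNodesMap.entrySet().stream()
                .filter(e -> e.getValue() > 0)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Map<Node, Integer> counts = Map.of(new Node("a"), 1, new Node("b"), 0);
        // Only "a" has a positive counter and survives the filter.
        System.out.println(dataNodes(counts));
    }
}
```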



##########
modules/distribution-zones/src/main/java/org/apache/ignite/internal/distributionzones/DataNodesMapSerializer.java:
##########
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.distributionzones;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.ignite.internal.distributionzones.DataNodesHistory.DataNodesHistorySerializer;
+import org.apache.ignite.internal.util.IgniteUtils;
+import org.apache.ignite.internal.util.io.IgniteDataInput;
+import org.apache.ignite.internal.util.io.IgniteDataOutput;
+import org.apache.ignite.internal.versioned.VersionedSerialization;
+import org.apache.ignite.internal.versioned.VersionedSerializer;
+
+/**
+ * {@link VersionedSerializer} for data nodes' maps. Deprecated, {@link DataNodesHistorySerializer} should be used instead,
+ * preserved for backward compatibility.
+ */
+@Deprecated
+public class DataNodesMapSerializer extends VersionedSerializer<Map<Node, Integer>> {
+    /** Serializer instance. */
+    public static final DataNodesMapSerializer INSTANCE = new DataNodesMapSerializer();
+
+    private final NodeSerializer nodeSerializer = NodeSerializer.INSTANCE;
+
+    @Override
+    protected void writeExternalData(Map<Node, Integer> map, IgniteDataOutput out) throws IOException {
+        throw new UnsupportedOperationException("Data nodes map is a legacy structure that should be not used anymore.");

Review Comment:
   Typo in the message: "should be not used" should read "should not be used".



##########
modules/distribution-zones/src/main/java/org/apache/ignite/internal/distributionzones/DataNodesManager.java:
##########
@@ -1100,6 +1164,30 @@ CompletableFuture<?> onZoneCreate(
             int zoneId,
             HybridTimestamp timestamp,
             Set<NodeWithAttributes> dataNodes
+    ) {
+        return initZone(zoneId, timestamp, dataNodes);
+    }
+
+    private CompletableFuture<?> initZoneWithLegacyDataNodes(CatalogZoneDescriptor zone, byte[] legacyDataNodesBytes) {
+        Set<NodeWithAttributes> dataNodes = DataNodesMapSerializer.deserialize(legacyDataNodesBytes).keySet().stream()
+                .map(node -> new NodeWithAttributes(node.nodeName(), node.nodeId(), null))

Review Comment:
   What about user attributes? How is data nodes filtering supposed to work in that case?
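To illustrate the concern: a node rebuilt with `null` attributes fails any non-trivial attribute filter. A toy sketch with stand-in types (the real zone filter mechanism differs, this just shows the null-attributes effect):

```java
import java.util.Map;

public class NullAttributesFilterSketch {
    /** Stand-in node with a nullable attribute map, mirroring NodeWithAttributes(name, id, null). */
    public record NodeWithAttrs(String name, Map<String, String> userAttributes) {}

    /** Toy attribute filter: require region == "US". Real zone filters are expressions, not this predicate. */
    public static boolean matches(NodeWithAttrs node) {
        Map<String, String> attrs = node.userAttributes();
        return attrs != null && "US".equals(attrs.get("region"));
    }

    public static void main(String[] args) {
        // A node restored from the legacy map loses its attributes (null),
        // so any attribute-based filter rejects it, unlike a freshly joined node.
        NodeWithAttrs legacy = new NodeWithAttrs("n1", null);
        NodeWithAttrs fresh = new NodeWithAttrs("n2", Map.of("region", "US"));
        System.out.println(matches(legacy) + " " + matches(fresh));
    }
}
```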



##########
modules/distribution-zones/src/main/java/org/apache/ignite/internal/distributionzones/DataNodesManager.java:
##########
@@ -992,6 +1028,21 @@ private CompletableFuture<DataNodesHistoryContext> getDataNodeHistoryContextMsLo
         return completedFuture(dataNodeHistoryContextFromValues(entries));
     }
 
+    private CompletableFuture<DataNodesHistoryContext> ensureContextIsPresentAndInitZoneIfNeeded(
+            @Nullable DataNodesHistoryContext context,
+            List<ByteArray> keys,
+            int zoneId
+    ) {
+        if (context == null) {
+            // Probably this is a transition from older version of cluster, need to initialize zone according to the
+            // current set of meta storage entries.
+            return initZone(zoneId)

Review Comment:
   Shouldn't we remove the old data nodes key once we have restored the old data nodes? What will happen when the cluster restarts next time? Won't we take the old data nodes again?
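One option matching this concern: delete the legacy key once the new history is written, so the next restart recovers from the history only. A sketch over a stand-in in-memory map; the key name and API here are hypothetical, the real code would go through the meta storage and its conditional-update primitives:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class LegacyKeyCleanupSketch {
    /** Stand-in for the meta storage; the real API and key format differ. */
    public static final Map<String, byte[]> metaStorage = new ConcurrentHashMap<>();

    public static CompletableFuture<Void> initZoneFromLegacy(int zoneId) {
        String legacyKey = "distributionZone.dataNodes." + zoneId; // hypothetical key name
        byte[] legacyBytes = metaStorage.get(legacyKey);
        // ... restore the data nodes history from legacyBytes ...
        // Then delete the legacy key, so a subsequent restart recovers from
        // the freshly written history rather than the stale legacy value.
        return CompletableFuture.runAsync(() -> metaStorage.remove(legacyKey));
    }

    public static void main(String[] args) {
        metaStorage.put("distributionZone.dataNodes.7", new byte[] {1, 2, 3});
        initZoneFromLegacy(7).join();
        System.out.println("legacy key present: " + metaStorage.containsKey("distributionZone.dataNodes.7"));
    }
}
```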



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
