sanpwc commented on a change in pull request #379:
URL: https://github.com/apache/ignite-3/pull/379#discussion_r723949123
##########
File path: modules/raft/new.sh
##########
@@ -0,0 +1,2 @@
+mvn -Dtest=ITJRaftCounterServerTest#testRebalance test &>
/Users/kgusakov/tmp/log_${1}.txt
Review comment:
What is this script for? It redirects the test output to a hardcoded local path:
> /Users/kgusakov/
##########
File path:
modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
##########
@@ -50,6 +51,9 @@ public QueryProcessor queryEngine() {
return null;
}
+ @Override public void setBaseline(Set<String> baselineNodes) {
+ }
Review comment:
Should we throw OperationNotSupportedException here?
##########
File path:
modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
##########
@@ -50,6 +51,9 @@ public QueryProcessor queryEngine() {
return null;
}
+ @Override public void setBaseline(Set<String> baselineNodes) {
+ }
Review comment:
Should we throw OperationNotSupportedException here?
The javadoc is also missing.
##########
File path: modules/api/src/main/java/org/apache/ignite/Ignite.java
##########
@@ -44,4 +45,11 @@
* @return Ignite transactions.
*/
IgniteTransactions transactions();
+
+ /**
+ * Set new baseline nodes for table assignments.
+ *
+ * @param baselineNodes Names of baseline nodes.
+ */
+ void setBaseline(Set<String> baselineNodes);
Review comment:
What about using @Experimental here, with a link to the table-group-specific baseline ticket/epic?
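Roughly something like the sketch below — note that the @Experimental annotation and the ticket placeholder are assumptions here, not existing identifiers in the codebase:

    /**
     * Sets new baseline nodes for table assignments.
     *
     * <p>Temporary API: expected to be reworked for table-group-specific baselines
     * (link the corresponding ticket/epic here).
     *
     * @param baselineNodes Names of baseline nodes.
     */
    @Experimental
    void setBaseline(Set<String> baselineNodes);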
##########
File path:
modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
##########
@@ -101,6 +102,9 @@ private TcpIgniteClient(
return null;
}
+ @Override public void setBaseline(Set<String> baselineNodes) {
Review comment:
Should we throw OperationNotSupportedException here?
The javadoc is also missing.
##########
File path:
modules/metastorage-client/src/integrationTest/java/org/apache/ignite/internal/metastorage/client/ITMetaStorageServiceTest.java
##########
@@ -801,6 +801,7 @@ public void testCursorsCleanup() throws Exception {
cluster.get(1),
FACTORY,
10_000,
+ 10_000,
Review comment:
As discussed, let's hide this property behind a new temporary overloaded method if no better solution is available.
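To illustrate the idea (the method name and parameter list below are assumptions, not the real factory signature): a temporary delegating overload keeps old call sites unchanged and supplies a default for the new timeout.

    // Temporary overload: hides the new network timeout behind a default value.
    // TODO: remove once timeouts are researched and consolidated.
    static CompletableFuture<RaftGroupService> start(ClusterService cluster, int rpcTimeout) {
        return start(cluster, rpcTimeout, 10_000 /* default network timeout */);
    }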
##########
File path: modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
##########
@@ -43,6 +45,9 @@
* Best raft manager ever since 1982.
*/
public class Loza implements IgniteComponent {
+
+    private static final IgniteLogger LOG = IgniteLogger.forClass(IgniteLogger.class);
Review comment:
javadoc
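For example (a sketch; note that the diff creates the logger for IgniteLogger.class — Loza.class may have been intended, worth double-checking):

    /** The logger. */
    private static final IgniteLogger LOG = IgniteLogger.forClass(IgniteLogger.class);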
##########
File path: modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
##########
@@ -55,7 +60,10 @@
private static final int CLIENT_POOL_SIZE = Math.min(Utils.cpus() * 3, 20);
/** Timeout. */
- private static final int TIMEOUT = 1000;
+ private static final int TIMEOUT = 10000;
Review comment:
Why do we need such a big timeout? As discussed, let's try a smaller one for all cases except changePeers; if that doesn't work, let's add a TODO with a corresponding ticket for timeout research and consolidation.
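A possible shape for this, purely as an illustration (constant names and values are assumptions):

    /** Default RPC timeout for regular requests. */
    private static final int TIMEOUT = 1000;

    /** Larger timeout for potentially long-running changePeers requests. */
    // TODO: research and consolidate timeouts under a dedicated ticket.
    private static final int CHANGE_PEERS_TIMEOUT = 10000;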
##########
File path: modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
##########
@@ -111,7 +119,9 @@ public Loza(ClusterService clusterNetSvc, Path dataPath) {
public CompletableFuture<RaftGroupService> prepareRaftGroup(
Review comment:
Let's mark it as temporary with the appropriate annotations and TODOs. Javadoc for the new parameters is also missing.
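For instance, something along these lines (parameter names are guessed from the hunk below and may not match the actual signature):

    /**
     * Creates a RAFT group service providing communication with a RAFT group.
     *
     * <p>TODO: temporary signature, the explicit timeouts should go away once timeouts are consolidated.
     *
     * @param groupId RAFT group id.
     * @param nodes RAFT group nodes.
     * @param lsnrSupplier RAFT group listener supplier.
     * @param clientTimeout Client retry timeout.
     * @param networkTimeout Network invocation timeout.
     * @return Future representing pending completion of the operation.
     */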
##########
File path: modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
##########
@@ -125,14 +135,23 @@ public Loza(ClusterService clusterNetSvc, Path dataPath) {
groupId,
clusterNetSvc,
FACTORY,
- TIMEOUT,
+ clientTimeout,
+ networkTimeout,
peers,
true,
DELAY,
executor
);
}
+ public CompletableFuture<RaftGroupService> prepareRaftGroup(
Review comment:
javadoc
##########
File path:
modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
##########
@@ -73,6 +73,8 @@
/** */
private volatile long timeout;
+ private final long networkTimeout;
Review comment:
javadoc
##########
File path:
modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
##########
@@ -487,7 +492,9 @@ else if (resp0.errorCode() == RaftError.EBUSY.getNumber() ||
return null;
}, retryDelay, TimeUnit.MILLISECONDS);
}
-            else if (resp0.errorCode() == RaftError.EPERM.getNumber()) {
+            else if (resp0.errorCode() == RaftError.EPERM.getNumber() ||
+                resp0.errorCode() == RaftError.UNKNOWN.getNumber() ||
Review comment:
What are the reasons for UNKNOWN and EINTERNAL errors? Why should we retry requests in those cases?
##########
File path:
modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ITBaselineChangesTest.java
##########
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.runner.app;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import com.google.common.collect.Lists;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgnitionManager;
+import
org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
+import org.apache.ignite.internal.testframework.WorkDirectory;
+import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
+import org.apache.ignite.internal.util.IgniteUtils;
+import org.apache.ignite.schema.SchemaBuilders;
+import org.apache.ignite.schema.definition.ColumnType;
+import org.apache.ignite.schema.definition.TableDefinition;
+import org.apache.ignite.table.RecordView;
+import org.apache.ignite.table.Table;
+import org.apache.ignite.table.Tuple;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInfo;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import static
org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+/**
+ * Test for baseline changes
+ */
+@ExtendWith(WorkDirectoryExtension.class)
+public class ITBaselineChangesTest {
+ /** Start network port for test nodes. */
+ private static final int BASE_PORT = 3344;
+
+ /** Nodes bootstrap configuration. */
+ private final Map<String, String> initClusterNodes = new LinkedHashMap<>();
+
+ /** */
+ private final List<Ignite> clusterNodes = new ArrayList<>();
+
+ /** */
+ @WorkDirectory
+ private Path workDir;
+
+ /** */
+ @BeforeEach
+ void setUp(TestInfo testInfo) {
+ String node0Name = testNodeName(testInfo, BASE_PORT);
+ String node1Name = testNodeName(testInfo, BASE_PORT + 1);
+ String node2Name = testNodeName(testInfo, BASE_PORT + 2);
+
+ initClusterNodes.put(
+ node0Name,
+ buildConfig(node0Name, 0)
+ );
+
+ initClusterNodes.put(
+ node1Name,
+ buildConfig(node0Name, 1)
+ );
+
+ initClusterNodes.put(
+ node2Name,
+ buildConfig(node0Name, 2)
+ );
+ }
+
+ /** */
+ @AfterEach
+ void tearDown() throws Exception {
+ IgniteUtils.closeAll(Lists.reverse(clusterNodes));
+ }
+
+ /**
+ * Check dynamic table creation.
+ */
+ @Test
+ void testBaselineExtending(TestInfo testInfo) {
+ initClusterNodes.forEach((nodeName, configStr) ->
+ clusterNodes.add(IgnitionManager.start(nodeName, configStr,
workDir.resolve(nodeName)))
+ );
+
+ assertEquals(3, clusterNodes.size());
+
+ // Create table on node 0.
+ TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC",
"tbl1").columns(
+ SchemaBuilders.column("key", ColumnType.INT64).asNonNull().build(),
+ SchemaBuilders.column("val", ColumnType.INT32).asNullable().build()
+ ).withPrimaryKey("key").build();
+
+ clusterNodes.get(0).tables().createTable(schTbl1.canonicalName(),
tblCh ->
+ SchemaConfigurationConverter.convert(schTbl1, tblCh)
+ .changeReplicas(5)
+ .changePartitions(1)
+ );
+
+ // Put data on node 1.
+ Table tbl1 =
clusterNodes.get(1).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView1 = tbl1.recordView();
+
+ recView1.insert(Tuple.create().set("key", 1L).set("val", 111));
+
+ // Get data on node 2.
+ Table tbl2 =
clusterNodes.get(2).tables().table(schTbl1.canonicalName());
Review comment:
Why do we need this get?
##########
File path:
modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ITBaselineChangesTest.java
##########
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.runner.app;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import com.google.common.collect.Lists;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgnitionManager;
+import
org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
+import org.apache.ignite.internal.testframework.WorkDirectory;
+import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
+import org.apache.ignite.internal.util.IgniteUtils;
+import org.apache.ignite.schema.SchemaBuilders;
+import org.apache.ignite.schema.definition.ColumnType;
+import org.apache.ignite.schema.definition.TableDefinition;
+import org.apache.ignite.table.RecordView;
+import org.apache.ignite.table.Table;
+import org.apache.ignite.table.Tuple;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInfo;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import static
org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+/**
+ * Test for baseline changes
+ */
+@ExtendWith(WorkDirectoryExtension.class)
+public class ITBaselineChangesTest {
+ /** Start network port for test nodes. */
+ private static final int BASE_PORT = 3344;
+
+ /** Nodes bootstrap configuration. */
+ private final Map<String, String> initClusterNodes = new LinkedHashMap<>();
+
+ /** */
+ private final List<Ignite> clusterNodes = new ArrayList<>();
+
+ /** */
+ @WorkDirectory
+ private Path workDir;
+
+ /** */
+ @BeforeEach
+ void setUp(TestInfo testInfo) {
+ String node0Name = testNodeName(testInfo, BASE_PORT);
+ String node1Name = testNodeName(testInfo, BASE_PORT + 1);
+ String node2Name = testNodeName(testInfo, BASE_PORT + 2);
+
+ initClusterNodes.put(
+ node0Name,
+ buildConfig(node0Name, 0)
+ );
+
+ initClusterNodes.put(
+ node1Name,
+ buildConfig(node0Name, 1)
+ );
+
+ initClusterNodes.put(
+ node2Name,
+ buildConfig(node0Name, 2)
+ );
+ }
+
+ /** */
+ @AfterEach
+ void tearDown() throws Exception {
+ IgniteUtils.closeAll(Lists.reverse(clusterNodes));
+ }
+
+ /**
+ * Check dynamic table creation.
+ */
+ @Test
+ void testBaselineExtending(TestInfo testInfo) {
+ initClusterNodes.forEach((nodeName, configStr) ->
+ clusterNodes.add(IgnitionManager.start(nodeName, configStr,
workDir.resolve(nodeName)))
+ );
+
+ assertEquals(3, clusterNodes.size());
+
+ // Create table on node 0.
+ TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC",
"tbl1").columns(
+ SchemaBuilders.column("key", ColumnType.INT64).asNonNull().build(),
+ SchemaBuilders.column("val", ColumnType.INT32).asNullable().build()
+ ).withPrimaryKey("key").build();
+
+ clusterNodes.get(0).tables().createTable(schTbl1.canonicalName(),
tblCh ->
+ SchemaConfigurationConverter.convert(schTbl1, tblCh)
+ .changeReplicas(5)
+ .changePartitions(1)
+ );
+
+ // Put data on node 1.
+ Table tbl1 =
clusterNodes.get(1).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView1 = tbl1.recordView();
+
+ recView1.insert(Tuple.create().set("key", 1L).set("val", 111));
+
+ // Get data on node 2.
+ Table tbl2 =
clusterNodes.get(2).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView2 = tbl2.recordView();
+
+ final Tuple keyTuple1 = Tuple.create().set("key", 1L);
+
+ assertEquals(1, (Long)recView2.get(keyTuple1).value("key"));
+
+ var metaStoreNode = clusterNodes.get(0);
+
+ var node3Name = testNodeName(testInfo, nodePort(3));
+ var node4Name = testNodeName(testInfo, nodePort(4));
+
+ // Start 2 new nodes after
+ var node3 = IgnitionManager.start(
+ node3Name, buildConfig(metaStoreNode.name(), 3),
workDir.resolve(node3Name));
+ var node4 = IgnitionManager.start(
Review comment:
Why not add each node to clusterNodes right after it starts? It seems that with the current approach we won't stop node3 if a failure occurs during node4 creation.
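A sketch of the suggested change, based on the test code quoted above:

    // Add each node to clusterNodes as soon as it starts, so tearDown() stops it even if
    // a later node fails to start.
    clusterNodes.add(IgnitionManager.start(
        node3Name, buildConfig(metaStoreNode.name(), 3), workDir.resolve(node3Name)));
    clusterNodes.add(IgnitionManager.start(
        node4Name, buildConfig(metaStoreNode.name(), 4), workDir.resolve(node4Name)));

    Ignite node4 = clusterNodes.get(4);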
##########
File path:
modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ITBaselineChangesTest.java
##########
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.runner.app;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import com.google.common.collect.Lists;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgnitionManager;
+import
org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
+import org.apache.ignite.internal.testframework.WorkDirectory;
+import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
+import org.apache.ignite.internal.util.IgniteUtils;
+import org.apache.ignite.schema.SchemaBuilders;
+import org.apache.ignite.schema.definition.ColumnType;
+import org.apache.ignite.schema.definition.TableDefinition;
+import org.apache.ignite.table.RecordView;
+import org.apache.ignite.table.Table;
+import org.apache.ignite.table.Tuple;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInfo;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import static
org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+/**
+ * Test for baseline changes
+ */
+@ExtendWith(WorkDirectoryExtension.class)
+public class ITBaselineChangesTest {
+ /** Start network port for test nodes. */
+ private static final int BASE_PORT = 3344;
+
+ /** Nodes bootstrap configuration. */
+ private final Map<String, String> initClusterNodes = new LinkedHashMap<>();
+
+ /** */
+ private final List<Ignite> clusterNodes = new ArrayList<>();
+
+ /** */
+ @WorkDirectory
+ private Path workDir;
+
+ /** */
+ @BeforeEach
+ void setUp(TestInfo testInfo) {
+ String node0Name = testNodeName(testInfo, BASE_PORT);
+ String node1Name = testNodeName(testInfo, BASE_PORT + 1);
+ String node2Name = testNodeName(testInfo, BASE_PORT + 2);
+
+ initClusterNodes.put(
+ node0Name,
+ buildConfig(node0Name, 0)
+ );
+
+ initClusterNodes.put(
+ node1Name,
+ buildConfig(node0Name, 1)
+ );
+
+ initClusterNodes.put(
+ node2Name,
+ buildConfig(node0Name, 2)
+ );
+ }
+
+ /** */
+ @AfterEach
+ void tearDown() throws Exception {
+ IgniteUtils.closeAll(Lists.reverse(clusterNodes));
+ }
+
+ /**
+ * Check dynamic table creation.
+ */
+ @Test
+ void testBaselineExtending(TestInfo testInfo) {
+ initClusterNodes.forEach((nodeName, configStr) ->
+ clusterNodes.add(IgnitionManager.start(nodeName, configStr,
workDir.resolve(nodeName)))
+ );
+
+ assertEquals(3, clusterNodes.size());
+
+ // Create table on node 0.
+ TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC",
"tbl1").columns(
+ SchemaBuilders.column("key", ColumnType.INT64).asNonNull().build(),
+ SchemaBuilders.column("val", ColumnType.INT32).asNullable().build()
+ ).withPrimaryKey("key").build();
+
+ clusterNodes.get(0).tables().createTable(schTbl1.canonicalName(),
tblCh ->
+ SchemaConfigurationConverter.convert(schTbl1, tblCh)
+ .changeReplicas(5)
+ .changePartitions(1)
+ );
+
+ // Put data on node 1.
+ Table tbl1 =
clusterNodes.get(1).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView1 = tbl1.recordView();
+
+ recView1.insert(Tuple.create().set("key", 1L).set("val", 111));
+
+ // Get data on node 2.
+ Table tbl2 =
clusterNodes.get(2).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView2 = tbl2.recordView();
+
+ final Tuple keyTuple1 = Tuple.create().set("key", 1L);
+
+ assertEquals(1, (Long)recView2.get(keyTuple1).value("key"));
+
+ var metaStoreNode = clusterNodes.get(0);
+
+ var node3Name = testNodeName(testInfo, nodePort(3));
+ var node4Name = testNodeName(testInfo, nodePort(4));
+
+ // Start 2 new nodes after
+ var node3 = IgnitionManager.start(
+ node3Name, buildConfig(metaStoreNode.name(), 3),
workDir.resolve(node3Name));
+ var node4 = IgnitionManager.start(
+ node4Name, buildConfig(metaStoreNode.name(), 4),
workDir.resolve(node4Name));
+
+ clusterNodes.add(node3);
+ clusterNodes.add(node4);
+
+ // Update baseline to nodes 1,4,5
+ metaStoreNode.setBaseline(Set.of(metaStoreNode.name(), node3Name,
node4Name));
+
+ IgnitionManager.stop(clusterNodes.get(1).name());
Review comment:
Should we also remove nodes 1 and 2 from clusterNodes?
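For example (sketch only):

    // Stop nodes 1 and 2 and drop them from clusterNodes so tearDown() doesn't close them twice.
    Ignite node1 = clusterNodes.get(1);
    Ignite node2 = clusterNodes.get(2);

    IgnitionManager.stop(node1.name());
    IgnitionManager.stop(node2.name());

    clusterNodes.remove(node1);
    clusterNodes.remove(node2);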
##########
File path:
modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ITBaselineChangesTest.java
##########
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.runner.app;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import com.google.common.collect.Lists;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgnitionManager;
+import
org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
+import org.apache.ignite.internal.testframework.WorkDirectory;
+import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
+import org.apache.ignite.internal.util.IgniteUtils;
+import org.apache.ignite.schema.SchemaBuilders;
+import org.apache.ignite.schema.definition.ColumnType;
+import org.apache.ignite.schema.definition.TableDefinition;
+import org.apache.ignite.table.RecordView;
+import org.apache.ignite.table.Table;
+import org.apache.ignite.table.Tuple;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInfo;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import static
org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+/**
+ * Test for baseline changes
+ */
+@ExtendWith(WorkDirectoryExtension.class)
+public class ITBaselineChangesTest {
+ /** Start network port for test nodes. */
+ private static final int BASE_PORT = 3344;
+
+ /** Nodes bootstrap configuration. */
+ private final Map<String, String> initClusterNodes = new LinkedHashMap<>();
+
+ /** */
+ private final List<Ignite> clusterNodes = new ArrayList<>();
+
+ /** */
+ @WorkDirectory
+ private Path workDir;
+
+ /** */
+ @BeforeEach
+ void setUp(TestInfo testInfo) {
+ String node0Name = testNodeName(testInfo, BASE_PORT);
+ String node1Name = testNodeName(testInfo, BASE_PORT + 1);
+ String node2Name = testNodeName(testInfo, BASE_PORT + 2);
+
+ initClusterNodes.put(
+ node0Name,
+ buildConfig(node0Name, 0)
+ );
+
+ initClusterNodes.put(
+ node1Name,
+ buildConfig(node0Name, 1)
+ );
+
+ initClusterNodes.put(
+ node2Name,
+ buildConfig(node0Name, 2)
+ );
+ }
+
+ /** */
+ @AfterEach
+ void tearDown() throws Exception {
+ IgniteUtils.closeAll(Lists.reverse(clusterNodes));
+ }
+
+ /**
+ * Check dynamic table creation.
+ */
+ @Test
+ void testBaselineExtending(TestInfo testInfo) {
+ initClusterNodes.forEach((nodeName, configStr) ->
+ clusterNodes.add(IgnitionManager.start(nodeName, configStr,
workDir.resolve(nodeName)))
+ );
+
+ assertEquals(3, clusterNodes.size());
+
+ // Create table on node 0.
+ TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC",
"tbl1").columns(
+ SchemaBuilders.column("key", ColumnType.INT64).asNonNull().build(),
+ SchemaBuilders.column("val", ColumnType.INT32).asNullable().build()
+ ).withPrimaryKey("key").build();
+
+ clusterNodes.get(0).tables().createTable(schTbl1.canonicalName(),
tblCh ->
+ SchemaConfigurationConverter.convert(schTbl1, tblCh)
+ .changeReplicas(5)
+ .changePartitions(1)
+ );
+
+ // Put data on node 1.
+ Table tbl1 =
clusterNodes.get(1).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView1 = tbl1.recordView();
+
+ recView1.insert(Tuple.create().set("key", 1L).set("val", 111));
+
+ // Get data on node 2.
+ Table tbl2 =
clusterNodes.get(2).tables().table(schTbl1.canonicalName());
+ RecordView<Tuple> recView2 = tbl2.recordView();
+
+ final Tuple keyTuple1 = Tuple.create().set("key", 1L);
+
+ assertEquals(1, (Long)recView2.get(keyTuple1).value("key"));
+
+ var metaStoreNode = clusterNodes.get(0);
+
+ var node3Name = testNodeName(testInfo, nodePort(3));
+ var node4Name = testNodeName(testInfo, nodePort(4));
+
+ // Start 2 new nodes after
+ var node3 = IgnitionManager.start(
+ node3Name, buildConfig(metaStoreNode.name(), 3),
workDir.resolve(node3Name));
+ var node4 = IgnitionManager.start(
+ node4Name, buildConfig(metaStoreNode.name(), 4),
workDir.resolve(node4Name));
+
+ clusterNodes.add(node3);
+ clusterNodes.add(node4);
+
+ // Update baseline to nodes 1,4,5
+ metaStoreNode.setBaseline(Set.of(metaStoreNode.name(), node3Name,
node4Name));
+
+ IgnitionManager.stop(clusterNodes.get(1).name());
+ IgnitionManager.stop(clusterNodes.get(2).name());
+
+ Table tbl4 = node4.tables().table(schTbl1.canonicalName());
+
+ assertEquals(1, (Long) tbl4.recordView().get(keyTuple1).value("key"));
Review comment:
Could you please explain the flow of table() and get() if, at the moment table() is called, node4 hasn't seen the reassignment event yet?
##########
File path:
modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
##########
@@ -340,6 +341,10 @@ public QueryProcessor queryEngine() {
return name;
}
+ @Override public void setBaseline(Set<String> baselineNodes) {
Review comment:
javadoc
##########
File path: modules/api/src/main/java/org/apache/ignite/Ignite.java
##########
@@ -44,4 +45,11 @@
* @return Ignite transactions.
*/
IgniteTransactions transactions();
+
+ /**
+ * Set new baseline nodes for table assignments.
+ *
+ * @param baselineNodes Names of baseline nodes.
+ */
+ void setBaseline(Set<String> baselineNodes);
Review comment:
What about a null or empty baseline, or non-existent nodes - what is the expected behavior in those cases? Do we check this?
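If we decide to validate the argument, the implementation could start with something like this (a sketch; the exception types, messages and the delegate field name are assumptions):

    /** {@inheritDoc} */
    @Override public void setBaseline(Set<String> baselineNodes) {
        Objects.requireNonNull(baselineNodes, "baselineNodes");

        if (baselineNodes.isEmpty())
            throw new IllegalArgumentException("Baseline cannot be empty.");

        // Non-existent node names are currently just filtered out downstream;
        // whichever behavior we pick should be documented on the API.
        distributedTblMgr.setBaseline(baselineNodes);
    }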
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
Review comment:
javadoc
##########
File path: modules/api/src/main/java/org/apache/ignite/Ignite.java
##########
@@ -44,4 +45,11 @@
* @return Ignite transactions.
*/
IgniteTransactions transactions();
+
+ /**
+ * Set new baseline nodes for table assignments.
+ *
+ * @param baselineNodes Names of baseline nodes.
+ */
+ void setBaseline(Set<String> baselineNodes);
Review comment:
What about new tables? Will they be created relative to the given baseline or relative to baselineMgr.nodes()?
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
Review comment:
Is it really a union?
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
Review comment:
Bad naming: these are not assignments.
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
Review comment:
It's worth mentioning here why we only choose among existing nodes and do not update the assignments later when new nodes that match the specified baseline appear. Even though that's not part of the current approach, it's worth noting in the code.
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
Review comment:
It seems that unionBaseline is always equal to currentBaseline. Is that true?
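To spell out the concern (paraphrasing the quoted code): newAssignments is obtained by filtering baselineMgr.nodes(), so it is always a subset of currentBaseline, and the union collapses back to currentBaseline.

    Set<ClusterNode> currentBaseline = new HashSet<>(baselineMgr.nodes());

    // newAssignments is a subset of currentBaseline by construction...
    Set<ClusterNode> newAssignments = currentBaseline.stream()
        .filter(n -> nodes.contains(n.name()))
        .collect(Collectors.toSet());

    Set<ClusterNode> unionBaseline = new HashSet<>(newAssignments);
    unionBaseline.addAll(currentBaseline);

    // ...so the union adds nothing.
    assert unionBaseline.equals(currentBaseline);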
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
Review comment:
It seems that unionBaseline is always equal to currentBaseline. If that's true, what's the point of doUpdateBaseline(unionBaseline)?
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
Review comment:
javadoc
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
+ var changePeersQueue = new ArrayList<Runnable>();
+
+ tablesCfg.tables().change(
+ tbls -> {
+ for (int i = 0; i < tbls.size(); i++) {
Review comment:
It seems this doesn't solve the issue of a lagging node that doesn't know about any tables at all, even though some exist.
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
Review comment:
It's actually not a baseline update but an assignments update.
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
+ var changePeersQueue = new ArrayList<Runnable>();
+
+ tablesCfg.tables().change(
+ tbls -> {
+ for (int i = 0; i < tbls.size(); i++) {
+ tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
+ ExtendedTableChange change =
(ExtendedTableChange)changeX;
+ byte[] currAssignments = change.assignments();
+
+ List<List<ClusterNode>> recalculatedAssignments =
AffinityUtils.calculateAssignments(
+ clusterNodes,
+ change.partitions(),
+ change.replicas());
+
+ if
(!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
+
change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
+ changePeersQueue.add(() ->
+ updateRaftTopology(
+
(List<List<ClusterNode>>)ByteUtils.fromBytes(currAssignments),
+ recalculatedAssignments,
+ IgniteUuid.fromString(change.id())));
+ }
+ });
+ }
+ }).join();
Review comment:
What about exception handling?
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
+ var changePeersQueue = new ArrayList<Runnable>();
+
+ tablesCfg.tables().change(
+ tbls -> {
+ for (int i = 0; i < tbls.size(); i++) {
+ tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
+ ExtendedTableChange change =
(ExtendedTableChange)changeX;
+ byte[] currAssignments = change.assignments();
+
+ List<List<ClusterNode>> recalculatedAssignments =
AffinityUtils.calculateAssignments(
+ clusterNodes,
+ change.partitions(),
+ change.replicas());
+
+ if
(!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
+
change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
+ changePeersQueue.add(() ->
Review comment:
Are you sure that change closures work that way? How many times will tasks be put into the queue if the master-revision invoke fails on the first n iterations?
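If the change closure can indeed be re-invoked on a concurrent configuration update, one way to make it retry-safe is to reset the queue at the start of each attempt (sketch based on the quoted code):

    tablesCfg.tables().change(tbls -> {
        // A re-executed closure would otherwise enqueue duplicate tasks from the previous attempt.
        changePeersQueue.clear();

        for (int i = 0; i < tbls.size(); i++) {
            // ... recalculate assignments and enqueue changePeers tasks as in the code above ...
        }
    }).join();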
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
+ var changePeersQueue = new ArrayList<Runnable>();
+
+ tablesCfg.tables().change(
+ tbls -> {
+ for (int i = 0; i < tbls.size(); i++) {
+ tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
+ ExtendedTableChange change =
(ExtendedTableChange)changeX;
+ byte[] currAssignments = change.assignments();
+
+ List<List<ClusterNode>> recalculatedAssignments =
AffinityUtils.calculateAssignments(
+ clusterNodes,
+ change.partitions(),
+ change.replicas());
+
+ if
(!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
+
change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
+ changePeersQueue.add(() ->
+ updateRaftTopology(
+
(List<List<ClusterNode>>)ByteUtils.fromBytes(currAssignments),
+ recalculatedAssignments,
+ IgniteUuid.fromString(change.id())));
+ }
+ });
+ }
+ }).join();
Review comment:
Why do we need join() here?
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
+ var changePeersQueue = new ArrayList<Runnable>();
+
+ tablesCfg.tables().change(
+ tbls -> {
+ for (int i = 0; i < tbls.size(); i++) {
+ tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
+ ExtendedTableChange change =
(ExtendedTableChange)changeX;
+ byte[] currAssignments = change.assignments();
+
+ List<List<ClusterNode>> recalculatedAssignments =
AffinityUtils.calculateAssignments(
+ clusterNodes,
+ change.partitions(),
+ change.replicas());
+
+ if
(!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
+
change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
+ changePeersQueue.add(() ->
+ updateRaftTopology(
+
(List<List<ClusterNode>>)ByteUtils.fromBytes(currAssignments),
+ recalculatedAssignments,
+ IgniteUuid.fromString(change.id())));
+ }
+ });
+ }
+ }).join();
+
+ for (Runnable task: changePeersQueue) {
+ task.run();
+ }
+ }
+
+ private void updateRaftTopology(List<List<ClusterNode>> oldAssignments,
List<List<ClusterNode>> newAssignments, IgniteUuid tblId) {
Review comment:
javadoc
##########
File path:
modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
##########
@@ -824,4 +885,141 @@ private void dropTableLocally(String name, IgniteUuid
tblId, List<List<ClusterNo
private boolean isTableConfigured(String name) {
return tableNamesConfigured().contains(name);
}
+
+ public void setBaseline(Set<String> nodes) {
+ var newAssignments = baselineMgr
+ .nodes().stream().filter(n ->
nodes.contains(n.name())).collect(Collectors.toSet());
+
+ var currentBaseline = new HashSet<>(baselineMgr.nodes());
+
+ var unionBaseline = new HashSet<>(newAssignments);
+ unionBaseline.addAll(currentBaseline);
+
+ doUpdateBaseline(unionBaseline);
+
+ if (!newAssignments.equals(unionBaseline))
+ doUpdateBaseline(newAssignments);
+ }
+
+ private void doUpdateBaseline(Set<ClusterNode> clusterNodes) {
+ var changePeersQueue = new ArrayList<Runnable>();
+
+ tablesCfg.tables().change(
+ tbls -> {
+ for (int i = 0; i < tbls.size(); i++) {
+ tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
+ ExtendedTableChange change =
(ExtendedTableChange)changeX;
+ byte[] currAssignments = change.assignments();
+
+ List<List<ClusterNode>> recalculatedAssignments =
AffinityUtils.calculateAssignments(
+ clusterNodes,
+ change.partitions(),
+ change.replicas());
+
+ if
(!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
+
change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
+ changePeersQueue.add(() ->
+ updateRaftTopology(
+
(List<List<ClusterNode>>)ByteUtils.fromBytes(currAssignments),
+ recalculatedAssignments,
+ IgniteUuid.fromString(change.id())));
+ }
+ });
+ }
+ }).join();
+
+ for (Runnable task: changePeersQueue) {
+ task.run();
+ }
+ }
+
+ private void updateRaftTopology(List<List<ClusterNode>> oldAssignments,
List<List<ClusterNode>> newAssignments, IgniteUuid tblId) {
+ List<CompletableFuture<Void>> futures = new
ArrayList<>(oldAssignments.size());
+
+ // TODO: IGNITE-15554 Add logic for assignment recalculation in case
of partitions or replicas changes
+ // TODO: Until IGNITE-15554 is implemented it's safe to iterate over
partitions and replicas cause there will
+ // TODO: be exact same amount of partitions and replicas for both old
and new assignments
+ for (int i = 0; i < oldAssignments.size(); i++) {
+ final int p = i;
+
+ List<ClusterNode> oldPartitionAssignment = oldAssignments.get(p);
+ List<ClusterNode> newPartitionAssignment = newAssignments.get(p);
+
+ var toAdd = new HashSet<>(newPartitionAssignment);
+
+ toAdd.removeAll(oldPartitionAssignment);
+
+ futures.add(raftMgr.prepareRaftGroup(
+ raftGroupName(tblId, p),
+ oldPartitionAssignment,
+ () -> new RaftGroupListener() {
+ @Override public void
onRead(Iterator<CommandClosure<ReadCommand>> iterator) {
+
+ }
+
+ @Override public void
onWrite(Iterator<CommandClosure<WriteCommand>> iterator) {
+
+ }
+
+ @Override public void onSnapshotSave(Path path,
Consumer<Throwable> doneClo) {
+
+ }
+
+ @Override public boolean onSnapshotLoad(Path path) {
+ return false;
+ }
+
+ @Override public void onShutdown() {
+
+ }
+ },
+ 60000,
+ 10000
+ )
+ .thenCompose(
+ updatedRaftGroupService -> {
+ return
+ updatedRaftGroupService.
+ changePeers(
+ newPartitionAssignment.stream().map(n ->
new Peer(n.address())).collect(Collectors.toList()));
+ }
+ ).exceptionally(th -> {
+ LOG.error("Failed to update raft peers for group " +
raftGroupName(tblId, p) +
+ "from " + oldPartitionAssignment + " to " +
newPartitionAssignment, th);
+ return null;
+ }
+ ));
+ }
+
+ CompletableFuture.allOf(futures.toArray(new
CompletableFuture[futures.size()])).join();
Review comment:
Are you sure that join() here and in other places won't lead to a deadlock? I mean the known issue with blocking sendWithRetry response processing.
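If the blocking join() turns out to be problematic, one option (a sketch, not a drop-in replacement) is to keep the composition asynchronous and let the caller decide where, or whether, to wait:

    // Return the composed future instead of blocking inside the manager; callers that really
    // need synchronous behavior can join() outside of the RAFT client threads.
    private CompletableFuture<Void> updateRaftTopology(
        List<List<ClusterNode>> oldAssignments,
        List<List<ClusterNode>> newAssignments,
        IgniteUuid tblId
    ) {
        List<CompletableFuture<Void>> futures = new ArrayList<>(oldAssignments.size());

        // ... fill futures exactly as in the original code ...

        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
    }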
##########
File path:
modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
##########
@@ -50,6 +51,9 @@ public QueryProcessor queryEngine() {
return null;
}
+ @Override public void setBaseline(Set<String> baselineNodes) {
+ }
Review comment:
Agreed: UnsupportedOperationException instead of OperationNotSupportedException.
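For reference, a minimal sketch of what the FakeIgnite override could look like with the agreed exception and a doc comment (illustration only):

    /** {@inheritDoc} */
    @Override public void setBaseline(Set<String> baselineNodes) {
        throw new UnsupportedOperationException("setBaseline is not supported by this fake implementation.");
    }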
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]