sanpwc commented on a change in pull request #642:
URL: https://github.com/apache/ignite-3/pull/642#discussion_r832137321



##########
File path: modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
##########
@@ -140,31 +248,140 @@ public void twoCustomPropertiesTest(TestInfo testInfo) {
                 new String[]{"localhost:3344", "localhost:3343"},
                 ignite.nodeConfiguration().getConfiguration(NetworkConfiguration.KEY).nodeFinder().netClusterNodes().value()
         );
-
-        IgnitionManager.stop(ignite.name());
     }
 
     /**
      * Restarts the node which stores some data.
      */
     @Test
-    @Disabled("https://issues.apache.org/jira/browse/IGNITE-16433";)
     public void nodeWithDataTest(TestInfo testInfo) {
-        String nodeName = testNodeName(testInfo, 3344);
-
-        Ignite ignite = IgnitionManager.start(nodeName, "{\n"
-                + "  \"node\": {\n"
-                + "    \"metastorageNodes\":[ " + nodeName + " ]\n"
-                + "  },\n"
-                + "  \"network\": {\n"
-                + "    \"port\":3344,\n"
-                + "    \"nodeFinder\": {\n"
-                + "      \"netClusterNodes\":[ \"localhost:3344\" ] \n"
-                + "    }\n"
-                + "  }\n"
-                + "}", workDir);
-
-        TableDefinition scmTbl1 = SchemaBuilders.tableBuilder("PUBLIC", TABLE_NAME).columns(
+        Ignite ignite = startNode(testInfo, 0);
+
+        createTableWithData(ignite, TABLE_NAME, 1);
+
+        stopNode(0);
+
+        ignite = startNode(testInfo, 0);
+
+        checkTableWithData(ignite, TABLE_NAME);
+    }
+
+    /**
+     * Starts two nodes and checks that the data survives restarts.
+     * The nodes are restarted in the same order in which they were originally started.
+     *
+     * @param testInfo Test information object.
+     */
+    @Test
+    public void testTwoNodesRestartDirect(TestInfo testInfo) {
+        twoNodesRestart(testInfo, true);
+    }
+
+    /**
+     * Starts two nodes and checks that the data survives restarts.
+     * The nodes are restarted in the reverse of the order in which they were originally started.
+     *
+     * @param testInfo Test information object.
+     */
+    @Test
+    @Disabled("IGNITE-16034 Unblock a node start that happenes before 
Metastorage is ready")
+    public void testTwoNodesRestartReverse(TestInfo testInfo) {
+        twoNodesRestart(testInfo, false);
+    }
+
+    /**
+     * Starts two nodes and checks that the data survives restarts.
+     *
+     * @param testInfo Test information object.
+     * @param directOrder When {@code true}, the nodes are restarted in direct order; otherwise they are restarted in reverse order.
+     */
+    private void twoNodesRestart(TestInfo testInfo, boolean directOrder) {
+        Ignite ignite = startNode(testInfo, 0);
+
+        startNode(testInfo, 1);
+
+        createTableWithData(ignite, TABLE_NAME, 2);
+        createTableWithData(ignite, TABLE_NAME_2, 2);
+
+        stopNode(0);
+        stopNode(1);
+
+        if (directOrder) {
+            startNode(testInfo, 0);
+            ignite = startNode(testInfo, 1);
+        } else {
+            ignite = startNode(testInfo, 1);
+            startNode(testInfo, 0);
+        }
+
+        checkTableWithData(ignite, TABLE_NAME);
+        checkTableWithData(ignite, TABLE_NAME_2);
+    }
+
+    /**
+     * Tests node restart when there is a gap between the node's local configuration and the distributed configuration.
+     */
+    @Test
+    @Disabled("IGNITE-16718")
+    public void testCfgGap(TestInfo testInfo) {

Review comment:
       Could you please add tests for checking the catch-up logic? Please note that
   ```
   ignite.tables().table("PUBLIC." + name);
   ```
   called from an outdated node will await the table's application from the meta storage, so it won't be possible to verify the recovery logic using such table retrieval methods.

   By the way, I believe it's still possible to check both catch-up and local recovery despite the IGNITE-16718 issue. Let's consider scenarios with to-be-caught-up tables that have **clients only** on node2, while all partitions of the given tables are located on node1; a rough sketch follows below.
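
   For illustration, here is a minimal sketch of such a catch-up test, built only on the helpers already used in this test class (`startNode`, `stopNode`, `createTableWithData`, `checkTableWithData`). The replica placement and the final assertion are assumptions of the sketch, not a definitive implementation:

   ```java
   /**
    * Sketch only: node 1 misses a table creation while it is stopped and has to
    * catch the table up from the meta storage on restart. The sketch assumes that
    * all partitions of the table land on node 0, so node 1 is client-only for it.
    */
   @Test
   public void testTableCatchUpAfterRestart(TestInfo testInfo) {
       Ignite ignite0 = startNode(testInfo, 0);
       startNode(testInfo, 1);

       // Stop node 1 so that it misses the table creation below.
       stopNode(1);

       // Created while node 1 is offline, with a single replica (assumption:
       // the third argument is the replica/node count, as elsewhere in this class).
       createTableWithData(ignite0, TABLE_NAME, 1);

       // On restart, node 1 must catch the missed table up from the meta storage.
       Ignite ignite1 = startNode(testInfo, 1);

       // Caveat from above: ignite1.tables().table("PUBLIC." + TABLE_NAME) simply
       // awaits the meta storage application, so it cannot distinguish catch-up
       // from a lazy fetch; a dedicated internal check would be needed instead.
       checkTableWithData(ignite1, TABLE_NAME);
   }
   ```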




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
