Copilot commented on code in PR #2477:
URL: https://github.com/apache/fluss/pull/2477#discussion_r2740382297


##########
fluss-flink/fluss-flink-common/src/test/java/org/apache/fluss/flink/source/ChangelogVirtualTableITCase.java:
##########
@@ -377,6 +377,25 @@ public void testChangelogWithScanStartupMode() throws Exception {
                 .containsExactlyInAnyOrder(
                         "+I[+I, 3, 1970-01-01T00:00:00.200Z, 4, v4]",
                         "+I[+I, 4, 1970-01-01T00:00:00.200Z, 5, v5]");
+
+        // 3. Test scan.startup.mode='latest' - should only read new records after subscription
+        String optionsLatest = " /*+ OPTIONS('scan.startup.mode' = 'latest') */";
+        String queryLatest = "SELECT id FROM startup_mode_test$changelog" + optionsLatest;
+        CloseableIterator<Row> rowIterLatest = tEnv.executeSql(queryLatest).collect();
+        List<String> latestResults = new ArrayList<>();
+        for (int attempt = 0; attempt < 10; attempt++) {
+            // Write a new record (with id larger than 5)
+            int rowId = 6 + attempt;
+            writeRows(conn, tablePath, Arrays.asList(row(rowId, "v" + rowId)), false);
+
+            // Try to fetch one record with a 5-second timeout
+            latestResults = collectRowsWithTimeout(rowIterLatest, 1, Duration.ofSeconds(5));

Review Comment:
   The hardcoded 5-second timeout per attempt, combined with 10 attempts, could 
result in up to 50 seconds of waiting. Consider reducing the timeout or the 
number of attempts, or explaining why this duration is necessary, to avoid 
unnecessarily long test execution times.
   ```suggestion
           for (int attempt = 0; attempt < 5; attempt++) {
               // Write a new record (with id larger than 5)
               int rowId = 6 + attempt;
               writeRows(conn, tablePath, Arrays.asList(row(rowId, "v" + rowId)), false);
   
               // Try to fetch one record with a 1-second timeout
               latestResults = collectRowsWithTimeout(rowIterLatest, 1, Duration.ofSeconds(1));
   ```
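   Alternatively (an illustrative sketch only, reusing the `writeRows` and 
`collectRowsWithTimeout` helpers shown above; the 10-second overall bound is an 
assumption), the retry loop could be capped by a single deadline so the 
worst-case wait is explicit regardless of the number of attempts:
   ```java
   // Sketch only: bound the total wait (~10s) instead of letting per-attempt timeouts add up.
   long deadline = System.nanoTime() + Duration.ofSeconds(10).toNanos();
   int rowId = 6;
   while (latestResults.isEmpty() && System.nanoTime() < deadline) {
       // Write a new record (with id larger than 5) and poll briefly for it.
       writeRows(conn, tablePath, Arrays.asList(row(rowId, "v" + rowId)), false);
       latestResults = collectRowsWithTimeout(rowIterLatest, 1, Duration.ofSeconds(1));
       rowId++;
   }
   ```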



##########
fluss-flink/fluss-flink-common/src/test/java/org/apache/fluss/flink/source/ChangelogVirtualTableITCase.java:
##########
@@ -377,6 +377,25 @@ public void testChangelogWithScanStartupMode() throws Exception {
                 .containsExactlyInAnyOrder(
                         "+I[+I, 3, 1970-01-01T00:00:00.200Z, 4, v4]",
                         "+I[+I, 4, 1970-01-01T00:00:00.200Z, 5, v5]");
+
+        // 3. Test scan.startup.mode='latest' - should only read new records after subscription
+        String optionsLatest = " /*+ OPTIONS('scan.startup.mode' = 'latest') */";
+        String queryLatest = "SELECT id FROM startup_mode_test$changelog" + optionsLatest;
+        CloseableIterator<Row> rowIterLatest = tEnv.executeSql(queryLatest).collect();
+        List<String> latestResults = new ArrayList<>();
+        for (int attempt = 0; attempt < 10; attempt++) {
+            // Write a new record (with id larger than 5)
+            int rowId = 6 + attempt;
+            writeRows(conn, tablePath, Arrays.asList(row(rowId, "v" + rowId)), false);
+
+            // Try to fetch one record with a 5-second timeout
+            latestResults = collectRowsWithTimeout(rowIterLatest, 1, Duration.ofSeconds(5));
+            if (!latestResults.isEmpty()) {
+                break;
+            }
+        }
+        int id = Integer.parseInt(latestResults.getFirst().replaceAll(".*\\[(\\d+)]", "$1"));

Review Comment:
   The regex pattern assumes a specific row format. If 
`latestResults.getFirst()` doesn't match the expected pattern, `parseInt` will 
throw a `NumberFormatException`. Consider adding validation or using a more 
robust parsing approach to handle unexpected formats gracefully.
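   For example, something along these lines (illustrative only; assumes the 
AssertJ `assertThat` already used elsewhere in this test, and that the id is 
the bracketed number in the row string):
   ```java
   // Sketch only: fail with a readable assertion instead of a NumberFormatException
   // if the row string does not contain the expected "[<id>]" shape.
   java.util.regex.Matcher matcher =
           java.util.regex.Pattern.compile("\\[(\\d+)]").matcher(latestResults.getFirst());
   assertThat(matcher.find())
           .as("unexpected row format: %s", latestResults.getFirst())
           .isTrue();
   int id = Integer.parseInt(matcher.group(1));
   ```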


