Copilot commented on code in PR #17065:
URL: https://github.com/apache/iotdb/pull/17065#discussion_r2727086526


##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/schemaengine/schemaregion/mtree/impl/mem/MTreeBelowSGMemoryImpl.java:
##########
@@ -192,6 +193,22 @@ public synchronized boolean createSnapshot(final File snapshotDir) {
     return store.createSnapshot(snapshotDir);
   }
 
+  private void applySubtreeMeasurementDelta(IMemMNode startNode, final long delta) {
+    if (delta == 0 || startNode == null) {
+      return;
+    }
+    IMemMNode current = startNode;
+    while (current != null) {
+      current.setSubtreeMeasurementCount(current.getSubtreeMeasurementCount() + delta);
+      current = current.getParent();
+    }
+  }
+
+  private long getTemplateMeasurementCount(final int templateId) {
+    final Template template = ClusterTemplateManager.getInstance().getTemplate(templateId);
+    return template == null ? 0L : template.getMeasurementNumber();

Review Comment:
   subtreeMeasurementCount is adjusted on template activation/deactivation 
using the current template measurement count (getTemplateMeasurementCount). 
Since templates can be extended at runtime (see 
ClusterSchemaManager.extendSchemaTemplate), the measurement count can change 
after activation; later deactivation will subtract a different value than was 
added, and the cached subtree counts can become stale or even negative. This 
can break OFFSET pruning correctness. Consider recording the activated template 
measurement count per device (so add/subtract are consistent), and/or updating 
subtreeMeasurementCount for all affected paths when a template is extended, or 
skipping subtree-count pruning for nodes affected by templates.
   ```suggestion
       // NOTE:
       // Templates can be extended at runtime, which changes their measurement count.
       // Using the current measurement count here for both activation and deactivation
       // can cause inconsistent deltas on subtreeMeasurementCount (e.g., subtracting
       // more/less than was previously added), leading to stale or even negative
       // subtree counts and incorrect OFFSET pruning.
       //
       // To avoid this correctness issue without per-device activation-time tracking,
       // we deliberately do *not* adjust subtreeMeasurementCount based on template
       // measurements and return 0 here. This effectively disables subtree-count
       // pruning for template-driven paths but preserves correctness.
       return 0L;
   ```
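As an alternative to disabling the pruning outright, the review's first suggestion (recording the activated measurement count per device) can be sketched as below. This is a minimal illustration, not the PR's API: `ActivatedTemplateCounts`, `onActivate`, and `onDeactivate` are hypothetical names, and in the real code the recorded count would live on (or beside) the device node rather than in a standalone map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: remember the template measurement count at activation
// time so that deactivation subtracts exactly what activation added, even if
// the template is extended (grows more measurements) in between.
public class ActivatedTemplateCounts {
    private final Map<String, Long> activatedCountByDevice = new ConcurrentHashMap<>();

    /** Called on activation; returns the delta to apply to subtreeMeasurementCount. */
    public long onActivate(String devicePath, long currentTemplateMeasurementCount) {
        activatedCountByDevice.put(devicePath, currentTemplateMeasurementCount);
        return currentTemplateMeasurementCount;
    }

    /** Called on deactivation; returns the (negative) delta recorded at activation. */
    public long onDeactivate(String devicePath) {
        Long recorded = activatedCountByDevice.remove(devicePath);
        // If nothing was recorded, apply no delta rather than guessing.
        return recorded == null ? 0L : -recorded;
    }

    public static void main(String[] args) {
        ActivatedTemplateCounts counts = new ActivatedTemplateCounts();
        long added = counts.onActivate("root.db.d1", 3L); // template had 3 measurements
        // ... template later extended to 5 measurements; not consulted here ...
        long removed = counts.onDeactivate("root.db.d1"); // still -3, matching the add
        System.out.println(added + removed); // prints 0: counts stay balanced
    }
}
```

With this shape, add/subtract pairs stay consistent regardless of runtime template extension, at the cost of tracking one long per activated device.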



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/schemaengine/schemaregion/mtree/impl/mem/MTreeBelowSGMemoryImpl.java:
##########
@@ -1123,6 +1150,23 @@ public long countPathsUsingTemplate(final PartialPath pathPattern, final int tem
     }
   }
 
+  public void rebuildSubtreeMeasurementCount() {
+    rebuildSubtreeMeasurementCountFromNode(rootNode);
+  }
+
+  private long rebuildSubtreeMeasurementCountFromNode(final IMemMNode node) {
+    long count = node.isMeasurement() ? 1L : 0L;
+    final IMNodeIterator<IMemMNode> iterator = store.getChildrenIterator(node);
+    while (iterator.hasNext()) {
+      count += rebuildSubtreeMeasurementCountFromNode(iterator.next());

Review Comment:
   rebuildSubtreeMeasurementCountFromNode() obtains an IMNodeIterator from 
store.getChildrenIterator(node) but never calls iterator.close(). Today the 
in-memory iterator close is mostly a no-op, but other IMNodeIterator 
implementations can hold resources; closing in a finally block would make this 
method safer and consistent with other iterator usages.
   ```suggestion
       try {
         while (iterator.hasNext()) {
           count += rebuildSubtreeMeasurementCountFromNode(iterator.next());
         }
       } finally {
         iterator.close();
   ```
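The close-in-finally pattern the suggestion applies can be seen in isolation below. This is a generic sketch: `TreeNode` and `CloseableIterator` are illustrative stand-ins for `IMemMNode` and `IMNodeIterator`, and the assumption that `close()` throws no checked exception mirrors how the suggestion calls it directly in `finally`.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class SubtreeCount {
    // Stand-in for IMNodeIterator: an iterator that may hold resources.
    interface CloseableIterator<T> extends Iterator<T>, AutoCloseable {
        @Override
        void close(); // narrowed to no checked exception, as assumed above
    }

    // Stand-in for IMemMNode: a tree node that may be a measurement.
    static class TreeNode {
        final boolean isMeasurement;
        final List<TreeNode> children;

        TreeNode(boolean isMeasurement, List<TreeNode> children) {
            this.isMeasurement = isMeasurement;
            this.children = children;
        }

        CloseableIterator<TreeNode> childrenIterator() {
            Iterator<TreeNode> it = children.iterator();
            return new CloseableIterator<TreeNode>() {
                public boolean hasNext() { return it.hasNext(); }
                public TreeNode next() { return it.next(); }
                public void close() { /* release held resources here */ }
            };
        }
    }

    // Same shape as rebuildSubtreeMeasurementCountFromNode, with the iterator
    // closed in finally so it is released even if the recursion throws.
    static long countMeasurements(TreeNode node) {
        long count = node.isMeasurement ? 1L : 0L;
        CloseableIterator<TreeNode> iterator = node.childrenIterator();
        try {
            while (iterator.hasNext()) {
                count += countMeasurements(iterator.next());
            }
        } finally {
            iterator.close();
        }
        return count;
    }

    public static void main(String[] args) {
        TreeNode leaf = new TreeNode(true, Collections.emptyList());
        TreeNode root = new TreeNode(false, Arrays.asList(leaf));
        System.out.println(countMeasurements(root)); // prints 1
    }
}
```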



##########
integration-test/src/test/java/org/apache/iotdb/db/it/schema/IoTDBShowTimeseriesOrderByTimeseriesIT.java:
##########
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.it.schema;
+
+import org.apache.iotdb.commons.schema.column.ColumnHeaderConstant;
+import org.apache.iotdb.it.env.EnvFactory;
+import org.apache.iotdb.itbase.category.ClusterIT;
+import org.apache.iotdb.itbase.category.LocalStandaloneIT;
+import org.apache.iotdb.util.AbstractSchemaIT;
+
+import org.junit.After;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runners.Parameterized;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+@Category({LocalStandaloneIT.class, ClusterIT.class})
+public class IoTDBShowTimeseriesOrderByTimeseriesIT extends AbstractSchemaIT {
+
+  private static final List<String> BASE_TIMESERIES =
+      Arrays.asList(
+          "root.db1.devA.m1",
+          "root.db1.devA.m2",
+          "root.db1.devB.m1",
+          "root.db1.devB.x",
+          "root.db2.devA.m1",
+          "root.db2.devC.m0",
+          "root.db2.devC.m3",
+          "root.db3.z.m1",
+          "root.db3.z.m10",
+          "root.db3.z.m2");
+
+  public IoTDBShowTimeseriesOrderByTimeseriesIT(SchemaTestMode schemaTestMode) {
+    super(schemaTestMode);
+  }
+
+  @Parameterized.BeforeParam
+  public static void before() throws Exception {
+    setUpEnvironment();
+    EnvFactory.getEnv().initClusterEnvironment();
+  }
+
+  @Parameterized.AfterParam
+  public static void after() throws Exception {
+    EnvFactory.getEnv().cleanClusterEnvironment();
+    tearDownEnvironment();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    clearSchema();
+  }
+
+  private void prepareComplexSchema() throws Exception {
+    try (Connection connection = EnvFactory.getEnv().getConnection();
+        Statement statement = connection.createStatement()) {
+      statement.execute("CREATE DATABASE root.db1");
+      statement.execute("CREATE DATABASE root.db2");
+      statement.execute("CREATE DATABASE root.db3");
+
+      for (String ts : BASE_TIMESERIES) {
+        statement.execute(
+            String.format(
+                "create timeseries %s with datatype=INT32, encoding=RLE, compression=SNAPPY", ts));
+      }
+    }
+  }
+
+  private List<String> queryTimeseries(final String sql) throws Exception {
+    try (Connection connection = EnvFactory.getEnv().getConnection();
+        Statement statement = connection.createStatement();
+        ResultSet resultSet = statement.executeQuery(sql)) {
+      List<String> result = new ArrayList<>();
+      while (resultSet.next()) {
+        result.add(resultSet.getString(ColumnHeaderConstant.TIMESERIES));
+      }
+      return result;
+    }
+  }
+
+  @Test
+  public void testOrderAscWithoutLimit() throws Exception {
+    prepareComplexSchema();
+    List<String> expected = new ArrayList<>(BASE_TIMESERIES);
+    Collections.sort(expected);
+
+    List<String> actual = queryTimeseries("show timeseries root.db*.** order by timeseries");
+    assertEquals(expected, actual);
+  }
+
+  @Test
+  public void testOrderDescWithOffsetLimit() throws Exception {
+    prepareComplexSchema();
+    List<String> expected = new ArrayList<>(BASE_TIMESERIES);
+    Collections.sort(expected);
+    Collections.reverse(expected);
+    expected = expected.subList(2, 6); // offset 2 limit 4
+
+    List<String> actual =
+        queryTimeseries("show timeseries root.db*.** order by timeseries desc offset 2 limit 4");
+    assertEquals(expected, actual);
+  }
+
+  @Test
+  public void testInsertThenQueryOrder() throws Exception {
+    prepareComplexSchema();
+    try (Connection connection = EnvFactory.getEnv().getConnection();
+        Statement statement = connection.createStatement()) {
+      statement.execute(
+          "create timeseries root.db0.devX.a with datatype=INT32, encoding=RLE, compression=SNAPPY");
+    }
+
+    List<String> expected = new ArrayList<>(BASE_TIMESERIES);
+    expected.add("root.db0.devX.a");
+    Collections.sort(expected);
+
+    List<String> actual = queryTimeseries("show timeseries root.db*.** order by timeseries");
+    assertEquals(expected, actual);
+  }
+
+  @Test
+  public void testDeleteSubtreeThenQueryOrder() throws Exception {
+    prepareComplexSchema();
+    try (Connection connection = EnvFactory.getEnv().getConnection();
+        Statement statement = connection.createStatement()) {
+      statement.execute("delete timeseries root.db2.devC.**");
+    }
+
+    List<String> expected = new ArrayList<>(BASE_TIMESERIES);
+    expected.remove("root.db2.devC.m0");
+    expected.remove("root.db2.devC.m3");
+    Collections.sort(expected);
+
+    List<String> actual = queryTimeseries("show timeseries root.db*.** order by timeseries");
+    assertEquals(expected, actual);
+  }
+
+  @Test
+  public void testOffsetLimitAfterDeletesAndAdds() throws Exception {
+    prepareComplexSchema();
+    try (Connection connection = EnvFactory.getEnv().getConnection();
+        Statement statement = connection.createStatement()) {
+      statement.execute("delete timeseries root.db1.devB.x");
+      statement.execute(
+          "create timeseries root.db1.devC.m0 with datatype=INT32, encoding=RLE, compression=SNAPPY");
+      statement.execute(
+          "create timeseries root.db4.devZ.z with datatype=INT32, encoding=RLE, compression=SNAPPY");
+    }
+
+    List<String> expected = new ArrayList<>(BASE_TIMESERIES);
+    expected.remove("root.db1.devB.x");
+    expected.add("root.db1.devC.m0");
+    expected.add("root.db4.devZ.z");
+    Collections.sort(expected);
+    expected = expected.subList(5, 10); // offset 5 limit 5
+
+    List<String> actual =
+        queryTimeseries("show timeseries root.db*.** order by timeseries offset 5 limit 5");
+    assertEquals(expected, actual);
+  }

Review Comment:
   The new tests validate basic ordering/offset/limit behavior, but they don't 
cover the interaction of ORDER BY TIMESERIES with a timeseries WHERE clause + 
OFFSET (schemaFilter). Given offset is now pushed down in single-region 
queries, adding a case like "SHOW TIMESERIES ... WHERE timeseries contains ... 
ORDER BY TIMESERIES OFFSET ..." would help prevent regressions in offset 
semantics under filtering.
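
   The expected semantics such a test would pin down — OFFSET skipping *matched* rows, not pre-filter traversal rows — can be sketched independently of the IT framework. The helper below is illustrative only (`filterThenOffset` is not part of the PR); the paths mirror BASE_TIMESERIES above.

   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.stream.Collectors;

   // Illustrative model of "WHERE timeseries contains ... ORDER BY TIMESERIES OFFSET n":
   // the filter must be applied first, and only then are `offset` rows skipped.
   public class OffsetAfterFilterDemo {
       public static List<String> filterThenOffset(List<String> sorted, String contains, long offset) {
           return sorted.stream()
                   .filter(ts -> ts.contains(contains)) // WHERE timeseries contains ...
                   .skip(offset)                        // OFFSET applies to the filtered set
                   .collect(Collectors.toList());
       }

       public static void main(String[] args) {
           List<String> sorted = Arrays.asList(
                   "root.db1.devA.m1", "root.db1.devA.m2", "root.db1.devB.m1",
                   "root.db1.devB.x", "root.db2.devA.m1", "root.db2.devC.m0");
           // Filtering for ".m1" leaves 3 rows; OFFSET 1 skips one matched row,
           // not one row of the pre-filter traversal order.
           System.out.println(filterThenOffset(sorted, ".m1", 1));
           // prints [root.db1.devB.m1, root.db2.devA.m1]
       }
   }
   ```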



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/schemaengine/schemaregion/mtree/impl/mem/MTreeBelowSGMemoryImpl.java:
##########
@@ -1435,6 +1479,46 @@ public ISchemaReader<ITimeSeriesSchemaInfo> getTimeSeriesReader(
             showTimeSeriesPlan.isPrefixMatch(),
             showTimeSeriesPlan.getScope()) {
 
+          private long remainingOffset = Math.max(0, showTimeSeriesPlan.getOffset());
+
+          private boolean shouldPruneSubtree(final IMemMNode node) {
+            if (remainingOffset <= 0) {
+              return false;
+            }
+            final long subtreeCount = node.getSubtreeMeasurementCount();
+            if (subtreeCount <= remainingOffset) {
+              remainingOffset -= subtreeCount;
+              return true;
+            }
+            return false;
+          }
+
+          @Override
+          protected boolean acceptFullMatchedNode(final IMemMNode node) {
+            if (!node.isMeasurement()) {
+              return false;
+            }
+            if (remainingOffset > 0) {
+              // skip this measurement
+              remainingOffset--;
+              return false;
+            }
+            return true;
+          }

Review Comment:
   In single-schema-region queries, OFFSET is now consumed inside 
MeasurementCollector (remainingOffset/shouldPruneSubtree/acceptFullMatchedNode) 
before TimeseriesReaderWithViewFetch applies schemaFilter. This changes 
semantics when a timeseries WHERE clause is present: OFFSET should apply to the 
filtered result set, not to the pre-filter traversal order. Consider disabling 
the offset-pruning pushdown when showTimeSeriesPlan.getSchemaFilter() is 
non-null (keep SchemaReaderLimitOffsetWrapper offset), or push schemaFilter 
evaluation into the collector so remainingOffset is decremented/pruned only for 
matched results.



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/plan/statement/metadata/ShowStatement.java:
##########
@@ -59,6 +59,13 @@ public void setOffset(long offset) {
     this.offset = offset;
   }
 
+  public long getLimitWithOffset() {
+    if (limit <= 0) {
+      return limit;
+    }

Review Comment:
   getLimitWithOffset() returns limit + offset without overflow handling. With 
large LIMIT/OFFSET literals this can overflow to a negative value and lead to 
incorrect planning. Consider using Math.addExact with a friendly error, or 
clamping to Long.MAX_VALUE when the sum would overflow.
   ```suggestion
       }
       // Guard against overflow when adding limit and offset.
       if (offset > 0 && limit > Long.MAX_VALUE - offset) {
        // Clamp to Long.MAX_VALUE to avoid negative or incorrect values due to overflow.
         return Long.MAX_VALUE;
       }
   ```
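
   A standalone version of this saturating addition can be exercised directly. This is a sketch of the suggested behavior, not the class's actual code; `SaturatingLimit` and `limitWithOffset` are illustrative names, and the non-positive-limit branch mirrors the `if (limit <= 0)` contract shown in the diff.

   ```java
   public class SaturatingLimit {
       // Returns limit + offset, clamped to Long.MAX_VALUE on overflow.
       // A non-positive limit means "no limit" and is returned unchanged.
       public static long limitWithOffset(long limit, long offset) {
           if (limit <= 0) {
               return limit;
           }
           if (offset > 0 && limit > Long.MAX_VALUE - offset) {
               return Long.MAX_VALUE; // would overflow: clamp instead
           }
           return limit + offset;
       }

       public static void main(String[] args) {
           System.out.println(limitWithOffset(10, 5));              // prints 15
           System.out.println(limitWithOffset(Long.MAX_VALUE, 1));  // prints 9223372036854775807 (clamped)
           System.out.println(limitWithOffset(-1, 100));            // prints -1 (no limit)
       }
   }
   ```

   `Math.addExact(limit, offset)` is the other option the comment mentions; it throws `ArithmeticException` on overflow instead of clamping, which is preferable when an oversized LIMIT/OFFSET should be rejected with an error rather than silently saturated.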



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/schemaengine/schemaregion/mtree/impl/mem/mnode/basic/BasicMNode.java:
##########
@@ -46,6 +46,9 @@ public class BasicMNode implements IMemMNode {
   private IMemMNode parent;
   private final BasicMNodeInfo basicMNodeInfo;
 
+  /** Cached count of measurements in this node's subtree, restored on restart. */

Review Comment:
   The new subtreeMeasurementCount field is described as "restored on restart", 
but this value is not persisted and is rebuilt in memory (e.g., via 
rebuildSubtreeMeasurementCount()). Consider rewording to avoid implying 
persistence (e.g., "rebuilt on restart" / "recomputed on load").
   ```suggestion
    /** Cached count of measurements in this node's subtree, rebuilt on restart. */
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
