huaxingao commented on code in PR #14956:
URL: https://github.com/apache/iceberg/pull/14956#discussion_r2658728509


##########
spark/v4.1/spark/src/test/java/org/apache/iceberg/spark/sql/TestSelect.java:
##########
@@ -155,22 +157,19 @@ public void testSelectRewrite() {
 
   @TestTemplate
   public void selectWithLimit() {
-    Object[] first = row(1L, "a", 1.0F);
-    Object[] second = row(2L, "b", 2.0F);
-    Object[] third = row(3L, "c", Float.NaN);
-
-    // verify that LIMIT is properly applied in case SupportsPushDownLimit.isPartiallyPushed() is
-    // ever overridden in SparkScanBuilder
-    assertThat(sql("SELECT * FROM %s LIMIT 1", tableName)).containsExactly(first);
-    assertThat(sql("SELECT * FROM %s LIMIT 2", tableName)).containsExactly(first, second);
-    assertThat(sql("SELECT * FROM %s LIMIT 3", tableName)).containsExactly(first, second, third);
+    // Note: without ORDER BY, the specific rows returned are not deterministic, especially when
+    // remote scan planning or split planning changes the physical scan order. This test only
+    // asserts that the LIMIT is enforced.

Review Comment:
   Makes sense. I changed it to `order by`.
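   
   For reference, a minimal sketch (not the actual PR change) of how the test could look once `ORDER BY` is added; the sort column `id` and the existing `sql(...)`/`row(...)` helpers are assumptions taken from the surrounding test:
   ```java
   @TestTemplate
   public void selectWithLimit() {
     Object[] first = row(1L, "a", 1.0F);
     Object[] second = row(2L, "b", 2.0F);
     Object[] third = row(3L, "c", Float.NaN);
   
     // ORDER BY pins the row order, so the rows returned under LIMIT are deterministic
     // even when remote scan planning or split planning changes the physical scan order.
     assertThat(sql("SELECT * FROM %s ORDER BY id LIMIT 1", tableName)).containsExactly(first);
     assertThat(sql("SELECT * FROM %s ORDER BY id LIMIT 2", tableName)).containsExactly(first, second);
     assertThat(sql("SELECT * FROM %s ORDER BY id LIMIT 3", tableName)).containsExactly(first, second, third);
   }
   ```
   With a deterministic sort, `containsExactly` can again assert the exact rows rather than only that the limit is enforced.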



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
