nastra commented on code in PR #14615:
URL: https://github.com/apache/iceberg/pull/14615#discussion_r2564638470


##########
spark/v4.0/spark/src/test/java/org/apache/iceberg/spark/sql/TestSelect.java:
##########
@@ -150,6 +150,16 @@ public void testSelectRewrite() {
         .isEqualTo("(float IS NOT NULL AND is_nan(float))");
   }
 
+  @TestTemplate
+  public void selectWithLimit() {
+    Object[] first = row(1L, "a", 1.0F);
+    Object[] second = row(2L, "b", 2.0F);
+    Object[] third = row(3L, "c", Float.NaN);
+    assertThat(sql("SELECT * FROM %s LIMIT 1", tableName)).containsExactly(first);
+    assertThat(sql("SELECT * FROM %s LIMIT 2", tableName)).containsExactly(first, second);
+    assertThat(sql("SELECT * FROM %s LIMIT 3", tableName)).containsExactly(first, second, third);

Review Comment:
   > My understanding is this would succeed even without this change
   
   Yes, that's absolutely correct. The LIMIT is applied by Spark even without any of the changes introduced in this PR.
   
   > how about inspecting the plan and extracting the plan node to check if the limit percolated deep down to the scan node
   
   I'm not sure about this, since it would effectively test that the limit is actually pushed down to the SparkScan, meaning we'd just be verifying that implementing `SupportsPushDownLimit` has the intended effect of pushing the limit down. In the context of https://github.com/apache/iceberg/pull/10943 that check made sense, since that PR added a flag to enable/disable limit pushdown, but here we always push it down, and I've been going back and forth on the best way to properly test this.
   
   Let me think a bit more about what an appropriate way of testing this would be.
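   
   In the meantime, one option could be something similar to what some of the existing pushdown tests do: assert on the rendered physical plan. A rough sketch only (the exact `pushedLimit=` marker in the scan's description is an assumption I haven't verified, and `spark` / `tableName` come from the surrounding test base class):
   
   ```java
   // Rough sketch: assumes the scan's string representation exposes the
   // pushed limit; the "pushedLimit=" marker below is not verified.
   // Imports needed: org.apache.spark.sql.Dataset, org.apache.spark.sql.Row,
   // org.junit.jupiter.api.TestTemplate, and AssertJ's assertThat.
   @TestTemplate
   public void limitIsPushedDownToScan() {
     Dataset<Row> df = spark.sql(String.format("SELECT * FROM %s LIMIT 2", tableName));
     String planAsString = df.queryExecution().executedPlan().toString();
     assertThat(planAsString).contains("pushedLimit=2");
   }
   ```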



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
