amogh-jahagirdar commented on code in PR #14614:
URL: https://github.com/apache/iceberg/pull/14614#discussion_r2539033301
##########
core/src/main/java/org/apache/iceberg/rest/requests/PlanTableScanRequestParser.java:
##########
@@ -101,6 +106,7 @@ public static PlanTableScanRequest fromJson(JsonNode json) {
Long snapshotId = JsonUtil.getLongOrNull(SNAPSHOT_ID, json);
Long startSnapshotId = JsonUtil.getLongOrNull(START_SNAPSHOT_ID, json);
Long endSnapshotId = JsonUtil.getLongOrNull(END_SNAPSHOT_ID, json);
+ Integer minRowsRequested = JsonUtil.getIntOrNull(MIN_ROWS_REQUESTED, json);
Review Comment:
I commented on the spec change, but are we sure we don't want to make the core impl a long?
It's unlikely anyone will want to express a limit that large, but it's theoretically possible, and supporting it doesn't add much difficulty.
I don't think it's necessarily appropriate to just go off the Spark interfaces.
The protocol can allow longs, and then Spark would just express the max it can?
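
A minimal sketch of what that could look like, assuming the parser keeps using the existing `JsonUtil` helpers shown above; the engine-side clamping code and the `engineLimit` variable are hypothetical, just to illustrate the idea:

```java
// Core parser (sketch): read the value as a long, mirroring the snapshot id fields above.
Long minRowsRequested = JsonUtil.getLongOrNull(MIN_ROWS_REQUESTED, json);

// Hypothetical engine-side adapter: an engine whose limit API takes an int
// (e.g. Spark) clamps the protocol-level long to the largest value it can express.
if (minRowsRequested != null) {
  int engineLimit = (int) Math.min(minRowsRequested, Integer.MAX_VALUE);
  // pass engineLimit to the engine's limit/pushdown API
}
```

That way the REST protocol and core types stay as wide as the spec allows, and narrowing happens only at the engine boundary.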