huaxingao commented on code in PR #5872:
URL: https://github.com/apache/iceberg/pull/5872#discussion_r981836950
##########
core/src/main/java/org/apache/iceberg/TableProperties.java:
##########
@@ -349,4 +349,7 @@ private TableProperties() {}
public static final String UPSERT_ENABLED = "write.upsert.enabled";
public static final boolean UPSERT_ENABLED_DEFAULT = false;
+
+ public static final String AGGREGATE_PUSHDOWN_ENABLED = "aggregate.pushdown.enabled";
+ public static final String AGGREGATE_PUSHDOWN_ENABLED_DEFAULT = "false";
Review Comment:
Thanks for your comment!
I actually thought about this when I wrote the code. The aggregate push-down
logic is decided inside `SparkScanBuilder`. I was debating with myself whether
I should build the aggregates row inside `SparkScanBuilder` or `SparkLocalScan`.
It seemed more natural to build the aggregates row in `SparkLocalScan`, so I
put it there. However, if I move this to `SparkScanBuilder`, then when I build
the aggregates row from statistics, I can fall back to a normal scan if the
statistics are not available. I will change to that approach.
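To illustrate the fallback idea, here is a minimal, self-contained sketch. The class and method names (`AggregatePushDownSketch`, `pushDownMax`) are hypothetical stand-ins, not the actual Iceberg or Spark APIs: the point is only that the builder answers the aggregate from statistics when they exist, and declines the push-down otherwise.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical stand-in for the SparkScanBuilder decision described above:
// build the aggregates row from table statistics, and fall back to a normal
// scan when a required statistic is missing.
public class AggregatePushDownSketch {

  // Returns the MAX aggregate for the column if statistics cover it,
  // otherwise empty, signaling the caller to fall back to a full scan.
  static Optional<Long> pushDownMax(Map<String, Long> maxValueStats, String column) {
    // Statistics may be absent, e.g. metrics collection disabled for the column.
    return Optional.ofNullable(maxValueStats.get(column));
  }

  public static void main(String[] args) {
    Map<String, Long> stats = Map.of("id", 42L); // stats only exist for "id"

    // Statistic available: the aggregate can be answered from metadata.
    System.out.println(pushDownMax(stats, "id").isPresent());    // true
    // Statistic missing: decline the push-down and fall back to a scan.
    System.out.println(pushDownMax(stats, "price").isPresent()); // false
  }
}
```

Doing this check in `SparkScanBuilder` (rather than after the scan is built) is what makes the graceful fallback possible.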
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]