dylanhz commented on code in PR #25291:
URL: https://github.com/apache/flink/pull/25291#discussion_r1746639019
##########
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/utils/AggregateUtil.scala:
##########
@@ -501,9 +497,17 @@ object AggregateUtil extends Enumeration {
hasStateBackedDataViews: Boolean,
needsRetraction: Boolean): AggregateInfo =
call.getAggregation match {
+    // In the new function stack, for imperativeFunction, the conversion from
+    // BuiltInFunctionDefinition to SqlAggFunction is unnecessary, we can simply create
+    // AggregateInfo through BuiltInFunctionDefinition and runtime implementation
+    // (obtained from AggFunctionFactory) directly.
+    // NOTE: make sure to use .runtimeProvided() in BuiltInFunctionDefinition in this case.
Review Comment:
If `.runtimeProvided()` and `.runtimeClass()` are both unused and there is no
conversion rule, the automatic conversion from `BuiltInFunctionDefinition` to
`BridgingSqlAggFunction` will fail because no runtime implementation is found,
and eventually an error is thrown. This checking logic lives in
`FunctionCatalogOperatorTable#verifyFunctionKind()`.
For agg functions that do have a conversion rule this is fine: even though the
process above fails, they can still create a `SqlAggFunction` through the
conversion rule. Also, these functions are supposed to use
`.runtimeDeferred()` as a marker that a conversion rule is used, but only a
few of them actually do so.
BTW, this implementation check is not conducted in the Table API, which may be
a potential problem. In fact, if you delete `.runtimeProvided()` from a
definition, calling the function from the Table API still works while SQL
does not.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]