Kimahriman commented on code in PR #793:
URL: https://github.com/apache/datafusion-comet/pull/793#discussion_r1710186036
##########
spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala:
##########
@@ -2348,6 +2348,27 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
.build()
}
+ // datafusion's make_array only supports nullable element types
+ case array @ CreateArray(children, _) if array.dataType.containsNull =>
+ val childExprs = children.map(exprToProto(_, inputs, binding))
+ val dataType = serializeDataType(array.dataType)
Review Comment:
Probably should be, but I was waiting to see whether this was even the right way
to use a ScalarUDF. Looking at it some more, I'm not sure how it could be
updated given the way it currently works, since `ScalarUDFImpl.return_type`
only returns a `DataType`, not a `Field`, so there's no way to know whether the
elements are nullable or not. I'm still learning how DataFusion works. Should I
somehow be using the other function created by `make_udf_expr_and_func`:
```rust
datafusion_functions_nested::make_array
pub fn make_array(arg: Vec<datafusion_expr::Expr>) -> datafusion_expr::Expr
```
?
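
For context, the trait method being discussed looks roughly like the sketch
below (paraphrased from `datafusion_expr::ScalarUDFImpl`, so treat the exact
shape as an assumption rather than a verbatim quote). The key point is that the
return type is expressed only as an Arrow `DataType`, not a `Field`, so element
nullability has nowhere to be reported:

```rust
// Paraphrased signature sketch, not copied verbatim from DataFusion.
// A DataType like List(Field { name: "item", nullable: true, .. }) does carry
// the element Field inside it, but the top-level Field of the UDF's own
// output (and its nullability) is not part of this return value.
fn return_type(&self, arg_types: &[DataType]) -> Result<DataType>;
```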
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]