Kimahriman commented on code in PR #793:
URL: https://github.com/apache/datafusion-comet/pull/793#discussion_r1713950119


##########
spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala:
##########
@@ -2348,6 +2348,27 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
               .build()
           }
 
+        // DataFusion's make_array only supports nullable element types
+        case array @ CreateArray(children, _) if array.dataType.containsNull =>
+          val childExprs = children.map(exprToProto(_, inputs, binding))
+          val dataType = serializeDataType(array.dataType)

Review Comment:
   More digging made me realize this is a somewhat larger issue with
   ScalarUDFs: they don't support setting nullability at all (every
   ScalarUDF is assumed to be nullable). So how has this been handled
   elsewhere, if at all? Is the best approach just to "pretend" the column
   is nullable for DataFusion, knowing that logically it should never
   contain nulls, while keeping it non-nullable on the Spark side?
   Otherwise, no expression backed by a ScalarUDF can be treated as
   non-nullable.
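   
   To make that concrete, here is a minimal Scala sketch of the "pretend
   nullable" idea. The `NullabilityShim` object and `dataFusionFacingType`
   helper are hypothetical, not part of Comet; the point is only that the
   type serialized into the native plan forces `containsNull = true`, while
   Spark keeps reasoning with the original non-nullable type.
   
   ```scala
   import org.apache.spark.sql.types.{ArrayType, DataType, IntegerType}
   
   // Hypothetical helper (not in Comet): the type handed to DataFusion
   // claims nullable elements so make_array (a ScalarUDF, hence always
   // assumed nullable) accepts it, while Spark's own schema keeps the
   // original containsNull = false.
   object NullabilityShim {
     def dataFusionFacingType(dt: DataType): DataType = dt match {
       case ArrayType(elementType, _) => ArrayType(elementType, containsNull = true)
       case other => other
     }
   }
   
   // Usage sketch:
   //   val sparkType = ArrayType(IntegerType, containsNull = false)
   //   NullabilityShim.dataFusionFacingType(sparkType)
   //   // => ArrayType(IntegerType, containsNull = true); Spark still
   //   //    treats the column as non-nullable on its side
   ```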



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

