BryanCutler commented on a change in pull request #24095: [SPARK-27163][PYTHON] Cleanup and consolidate Pandas UDF functionality
URL: https://github.com/apache/spark/pull/24095#discussion_r266582985
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/python/FlatMapGroupsInPandasExec.scala
##########
@@ -145,7 +146,15 @@ case class FlatMapGroupsInPandasExec(
sessionLocalTimeZone,
pythonRunnerConf).compute(grouped, context.partitionId(), context)
-    columnarBatchIter.flatMap(_.rowIterator.asScala).map(UnsafeProjection.create(output, output))
+    columnarBatchIter.flatMap { batch =>
+      // Grouped Map UDF returns a StructType column in ColumnarBatch, select the children here
+      // TODO: ColumnVector getChild is protected, so use ArrowColumnVector which is public
+      val structVector = batch.column(0).asInstanceOf[ArrowColumnVector]
+      val outputVectors = output.indices.map(structVector.getChild(_).asInstanceOf[ColumnVector])
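For context, an illustrative sketch (not part of the diff): on the Python side, a grouped map Pandas UDF returns a pandas DataFrame whose columns match a flat output schema, and that flat schema is the single StructType column whose children the Scala code above selects. The schema, column names, and function below are hypothetical, using the Spark 2.4-era API.

```python
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructType, StructField, LongType, DoubleType

# Flat (non-nested) output schema: each top-level field becomes one child of
# the single StructType column that FlatMapGroupsInPandasExec unwraps.
result_schema = StructType([
    StructField("id", LongType()),
    StructField("v", DoubleType()),
])

@pandas_udf(result_schema, PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    # pdf is a pandas DataFrame holding one group; return a DataFrame whose
    # columns match result_schema.
    return pdf.assign(v=pdf.v - pdf.v.mean())

# Usage (assuming df has columns "id" and "v"):
# df.groupby("id").apply(subtract_mean)
```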
Review comment:
> I think the logic itself is fine. But doesn't this mean we cannot support nested structs in grouped map Pandas UDFs?

Nested structs were never supported in grouped map UDFs (I verified with code prior to https://github.com/apache/spark/pull/23900). Part of the reason for this is that there is no explicit logical type for a struct in a Pandas DataFrame. When a nested struct is created in pyarrow and then converted to pandas, the struct column becomes a column of dictionaries, which Spark could handle but which brings some other complications. So this cleanup should keep the functionality the same.
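A minimal pyarrow sketch (not from the PR; the field names and sample values are made up) illustrating the dictionary conversion described above:

```python
import pyarrow as pa

# Build a struct-typed column in pyarrow, then convert to pandas.
struct_type = pa.struct([("x", pa.int64()), ("y", pa.string())])
arr = pa.array([{"x": 1, "y": "a"}, {"x": 2, "y": "b"}], type=struct_type)
table = pa.table({"s": arr})

df = table.to_pandas()
# The struct column arrives as plain Python dictionaries, not a struct dtype:
print(df["s"].iloc[0])        # {'x': 1, 'y': 'a'}
print(type(df["s"].iloc[0]))  # <class 'dict'>
```

Handling such a dict-valued column on the Spark side would need extra conversion logic, which is the complication mentioned above.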