BryanCutler commented on a change in pull request #24095: [SPARK-27163][PYTHON] Cleanup and consolidate Pandas UDF functionality
URL: https://github.com/apache/spark/pull/24095#discussion_r266586337
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/python/FlatMapGroupsInPandasExec.scala
##########
@@ -145,7 +146,15 @@ case class FlatMapGroupsInPandasExec(
       sessionLocalTimeZone,
       pythonRunnerConf).compute(grouped, context.partitionId(), context)

-    columnarBatchIter.flatMap(_.rowIterator.asScala).map(UnsafeProjection.create(output, output))
+    columnarBatchIter.flatMap { batch =>
+      // Grouped Map UDF returns a StructType column in ColumnarBatch, select the children here
+      // TODO: ColumnVector getChild is protected, so use ArrowColumnVector which is public
+      val structVector = batch.column(0).asInstanceOf[ArrowColumnVector]
+      val outputVectors = output.indices.map(structVector.getChild(_).asInstanceOf[ColumnVector])
Review comment:
   > Another concern though .. I think all of the Arrow implementations (including the SparkR ones) don't modify the batch's outputs but use the batch as is.

   Yeah, it makes the Scala side a bit different, but I think it is worth it to make things in `worker.py` more consistent. With this cleanup, all of the Pandas UDFs go through the same serialization logic.
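   As a rough analogy of what the Scala change does (plain Python, no Spark or Arrow dependencies; `StructColumn` and its methods are invented stand-ins for an Arrow struct vector, not the real API), the Grouped Map result arrives as one struct-typed column, and the change projects that struct's children out as the batch's top-level output columns before iterating rows:

   ```python
   # Toy stand-in for an Arrow struct vector: a single column whose
   # children are the real output columns. All names here are
   # illustrative only, not the Spark/Arrow API.
   class StructColumn:
       def __init__(self, children):
           self.children = children  # list of per-column value lists

       def get_child(self, i):
           return self.children[i]

       @property
       def num_rows(self):
           return len(self.children[0]) if self.children else 0


   def flatten_rows(batch):
       # Select the struct's children as top-level columns, then yield
       # rows -- analogous to wrapping the ArrowColumnVector's children
       # in a new ColumnarBatch and calling rowIterator.
       output_vectors = [batch.get_child(i) for i in range(len(batch.children))]
       for r in range(batch.num_rows):
           yield tuple(col[r] for col in output_vectors)


   if __name__ == "__main__":
       batch = StructColumn([[1, 2], ["a", "b"]])
       print(list(flatten_rows(batch)))  # [(1, 'a'), (2, 'b')]
   ```

   The payoff, per the comment above, is on the Python side: if every Pandas UDF variant hands its result back as one struct of columns, `worker.py` can serialize them all through a single path.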