Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19592#discussion_r147891641
--- Diff: python/pyspark/worker.py ---
@@ -105,8 +105,14 @@ def read_single_udf(pickleSer, infile, eval_type):
elif eval_type == PythonEvalType.SQL_PANDAS_GROUPED_UDF:
# a groupby apply udf has already been wrapped under apply()
return arg_offsets, row_func
- else:
+ elif eval_type == PythonEvalType.SQL_BATCHED_UDF:
return arg_offsets, wrap_udf(row_func, return_type)
+ elif eval_type == PythonEvalType.SQL_BATCHED_OPT_UDF:
--- End diff ---
Because the Python functions are serialized and may be broadcast further,
I didn't figure out a way to do this wrapping in `BatchEvalPython` on the
Scala side.
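To illustrate the constraint (a minimal sketch, not PySpark's actual plumbing: PySpark uses cloudpickle and a command protocol, and the wrapper body here is simplified):

```python
import pickle

def plus_one(x):
    return x + 1

# Driver side: the Python function is serialized (plain pickle of a
# top-level function stands in for cloudpickle here) and may be broadcast
# to executors as opaque bytes.
payload = pickle.dumps(plus_one)

# The Scala side (e.g. BatchEvalPython) only ever sees `payload` as bytes,
# so it cannot wrap the function there. Any wrapping has to happen in the
# Python worker, after unpickling:
row_func = pickle.loads(payload)

def wrap_udf(f):
    # Simplified stand-in for worker.py's per-eval-type wrapping.
    def wrapped(*args):
        return f(*args)
    return wrapped

udf = wrap_udf(row_func)
assert udf(41) == 42
```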