BryanCutler commented on a change in pull request #24177: [SPARK-27240][PYTHON]
Use pandas DataFrame for struct type argument in Scalar Pandas UDF.
URL: https://github.com/apache/spark/pull/24177#discussion_r268453103
##########
File path: python/pyspark/serializers.py
##########
@@ -378,6 +379,29 @@ class ArrowStreamPandasUDFSerializer(ArrowStreamPandasSerializer):
     Serializer used by Python worker to evaluate Pandas UDFs
     """
+    def __init__(self, timezone, safecheck, assign_cols_by_name, df_for_struct=False):
+        super(ArrowStreamPandasUDFSerializer, self) \
+            .__init__(timezone, safecheck, assign_cols_by_name)
+        self._df_for_struct = df_for_struct
+
+    def arrow_to_pandas(self, arrow_column, data_type):
+        from pyspark.sql.types import StructType, \
+            _arrow_column_to_pandas, _check_dataframe_localize_timestamps
+
+        if self._df_for_struct and type(data_type) == StructType:
+            import pandas as pd
+            import pyarrow as pa
+            column_arrays = zip(*[[chunk.field(i)
+                                   for i in range(chunk.type.num_children)]
+                                  for chunk in arrow_column.data.iterchunks()])
Review comment:
It might be best to avoid dealing with array chunks and keep this at a high level if possible. Would it be possible to build the pandas DataFrame by flattening the Arrow column into its child arrays, building a Table from those, and then converting that to pandas? Something like this, I think:
```python
pdf = pa.Table.from_arrays(arrow_column.flatten()).to_pandas()
```
I'm not sure the column names in the pdf would end up as expected, though.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services