HyukjinKwon commented on a change in pull request #34509:
URL: https://github.com/apache/spark/pull/34509#discussion_r757974181
##########
File path: python/pyspark/sql/pandas/serializers.py
##########
@@ -169,6 +169,8 @@ def create_array(s, t):
         elif is_categorical_dtype(s.dtype):
             # Note: This can be removed once minimum pyarrow version is >= 0.16.1
             s = s.astype(s.dtypes.categories.dtype)
+        elif t is not None and pa.types.is_string(t):
+            s = s.astype(str)
Review comment:
So are you saying that strings coming from pandas can produce a different type in Arrow? cc @BryanCutler
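
For context, a minimal sketch of the two pandas string representations in question (values are illustrative; the exact conversion behavior depends on the pandas/pyarrow versions, which is the crux of this thread):

```python
import pandas as pd
import pyarrow as pa

# Two ways pandas can store the same string data:
s_obj = pd.Series(["a", "b"])                  # backed by a numpy object ndarray
s_str = pd.Series(["a", "b"], dtype="string")  # backed by a pandas StringArray

print(s_obj.dtype, s_str.dtype)  # object string

# On recent pyarrow, both map to Arrow's string type; the reported
# failure happens on older versions when the StringArray-backed
# series goes through Spark's serialization path.
print(pa.Array.from_pandas(s_obj).type)  # string
print(pa.Array.from_pandas(s_str).type)  # string
```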
##########
File path: python/pyspark/sql/pandas/serializers.py
##########
@@ -169,6 +169,8 @@ def create_array(s, t):
         elif is_categorical_dtype(s.dtype):
             # Note: This can be removed once minimum pyarrow version is >= 0.16.1
             s = s.astype(s.dtypes.categories.dtype)
+        elif t is not None and pa.types.is_string(t):
+            s = s.astype(str)
Review comment:
While I understand that we can work around this:
> Pandas stores string columns in two different ways: using a numpy `ndarray` or using a custom `StringArray`. The `StringArray` version is used when specifying `dtype="string"`. When that happens, Spark cannot serialize the column to arrow.

This sounds like an issue on the Arrow side.
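
For reference, a minimal sketch of the workaround from the diff above (the `if` mirrors the patched branch; the series and target type are illustrative):

```python
import pandas as pd
import pyarrow as pa

s = pd.Series(["a", "b", "c"], dtype="string")  # StringArray-backed column
t = pa.string()

# The patched branch: cast back to a plain object ndarray when the
# target Arrow type is string, so Arrow sees numpy-backed data.
if t is not None and pa.types.is_string(t):
    s = s.astype(str)

arr = pa.Array.from_pandas(s, type=t)
print(arr.type)  # string
```

Note that `astype(str)` may stringify missing values (e.g. to `"<NA>"`), which is one more reason to prefer a fix on the Arrow side.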
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]