BryanCutler commented on a change in pull request #27358:
[SPARK-30640][PYTHON][SQL] Prevent unnecessary copies of data during Arrow to Pandas conversion
URL: https://github.com/apache/spark/pull/27358#discussion_r370884250
##########
File path: python/pyspark/sql/pandas/serializers.py
##########
@@ -120,14 +120,17 @@ def __init__(self, timezone, safecheck, assign_cols_by_name):
     def arrow_to_pandas(self, arrow_column):
         from pyspark.sql.pandas.types import _check_series_localize_timestamps
+        import pyarrow
         # If the given column is a date type column, creates a series of datetime.date directly
         # instead of creating datetime64[ns] as intermediate data to avoid overflow caused by
         # datetime64[ns] type handling.
         s = arrow_column.to_pandas(date_as_object=True)
-        s = _check_series_localize_timestamps(s, self._timezone)
Review comment:
I don't know if this was causing the same issue, but it's easy enough to just check the column type and only convert if necessary.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]