BryanCutler commented on a change in pull request #23657:
[SPARK-26566][PYTHON][SQL] Upgrade Apache Arrow to version 0.12.0
URL: https://github.com/apache/spark/pull/23657#discussion_r251560592
##########
File path: python/pyspark/sql/types.py
##########
@@ -1688,7 +1688,10 @@ def _check_series_convert_date(series, data_type):
:param series: pandas.Series
:param data_type: a Spark data type for the series
"""
- if type(data_type) == DateType:
+ import pyarrow
+ from distutils.version import LooseVersion
+ # As of Arrow 0.12.0, date_as_object is True by default, see ARROW-3910
+ if LooseVersion(pyarrow.__version__) < LooseVersion("0.12.0") and type(data_type) == DateType:
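For readers following along, below is a standalone sketch of the version gate
in the hunk above; convert_date_column is a hypothetical wrapper name (not the
actual Spark helper), and it assumes pyarrow, pandas, and pyspark are
installed:

```python
from distutils.version import LooseVersion

import pandas as pd
import pyarrow
from pyspark.sql.types import DateType


def convert_date_column(series, data_type):
    # Before Arrow 0.12.0, a DateType column came back as datetime64 values
    # and had to be cast to datetime.date by hand; from 0.12.0 on,
    # date_as_object is True by default (ARROW-3910), so the cast is skipped.
    if (LooseVersion(pyarrow.__version__) < LooseVersion("0.12.0")
            and type(data_type) == DateType):
        return series.dt.date
    return series


# Example usage with a datetime64 series standing in for a DATE column.
s = pd.Series(pd.to_datetime(["2019-01-01", "2019-01-02"]))
print(convert_date_column(s, DateType()))
```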
Review comment:
> Probably we should add a check like we do for the Python version check
between the driver and worker, and have a few global checks. Of course, we
could do it separately, I guess.
Yeah, we could do this, but it might not really be too big of a deal. I think
it will eventually be sort of like Pandas versions: if they are close, there
will probably be no issues, but major versions might not be completely
compatible.
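For reference, a global pyarrow version check of the kind suggested in the
quoted comment might look roughly like the sketch below; the function name,
the pinned minimum version, and the error messages are illustrative
assumptions, not Spark's actual API:

```python
from distutils.version import LooseVersion

# Hypothetical minimum version; a real check would pin whatever Spark requires.
_MINIMUM_PYARROW_VERSION = "0.12.0"


def check_pyarrow_version():
    """Fail fast with a clear message instead of an obscure error much later."""
    try:
        import pyarrow
    except ImportError:
        raise ImportError(
            "pyarrow >= %s must be installed for Arrow-based features; "
            "it was not found." % _MINIMUM_PYARROW_VERSION)
    if LooseVersion(pyarrow.__version__) < LooseVersion(_MINIMUM_PYARROW_VERSION):
        raise ImportError(
            "pyarrow >= %s must be installed for Arrow-based features; "
            "found version %s." % (_MINIMUM_PYARROW_VERSION, pyarrow.__version__))
```

Running such a check once on the driver and once on each worker would surface
mismatched environments up front, analogous to the existing Python version
check mentioned above.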