BryanCutler commented on a change in pull request #23657: 
[SPARK-26566][PYTHON][SQL] Upgrade Apache Arrow to version 0.12.0
URL: https://github.com/apache/spark/pull/23657#discussion_r251559055
 
 

 ##########
 File path: python/pyspark/sql/types.py
 ##########
 @@ -1688,7 +1688,10 @@ def _check_series_convert_date(series, data_type):
     :param series: pandas.Series
     :param data_type: a Spark data type for the series
     """
 -    if type(data_type) == DateType:
 +    import pyarrow
 +    from distutils.version import LooseVersion
 +    # As of Arrow 0.12.0, date_as_object is True by default, see ARROW-3910
 +    if LooseVersion(pyarrow.__version__) < LooseVersion("0.12.0") and type(data_type) == DateType:
 
 Review comment:
   Yes, these are called per-batch and wouldn't add overhead that would be 
noticeable. I think these checks will be temporary and could be removed once we 
change the minimum version and as Arrow gets more mature. For now, it's 
probably best to just make sure these kinds of checks are easy to track.
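The version gate in the diff above can be sketched as a small standalone helper. This is an illustration, not Spark's actual code: the function name `needs_date_conversion` is hypothetical, and a plain version-tuple comparison stands in for the `distutils.version.LooseVersion` comparison the patch uses (it behaves the same for simple release strings like "0.11.0" and "0.12.0").

```python
def _version_tuple(version):
    # Simplified stand-in for LooseVersion: split a plain release string
    # such as "0.12.0" into a tuple of ints for ordered comparison.
    return tuple(int(part) for part in version.split("."))

def needs_date_conversion(pyarrow_version, data_type_name):
    # Before Arrow 0.12.0, to_pandas did not return dates as objects by
    # default (changed in ARROW-3910), so the manual date conversion in
    # _check_series_convert_date was only needed on older versions.
    return (_version_tuple(pyarrow_version) < _version_tuple("0.12.0")
            and data_type_name == "DateType")
```

Centralizing the check in one helper like this is what makes such temporary gates "easy to track": once the minimum supported Arrow version is raised past 0.12.0, the helper and its call sites can be deleted in one pass.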

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
