HyukjinKwon commented on a change in pull request #23657: 
[SPARK-26566][PYTHON][SQL] Upgrade Apache Arrow to version 0.12.0
URL: https://github.com/apache/spark/pull/23657#discussion_r251230043
 
 

 ##########
 File path: python/pyspark/sql/types.py
 ##########
 @@ -1688,7 +1688,10 @@ def _check_series_convert_date(series, data_type):
     :param series: pandas.Series
     :param data_type: a Spark data type for the series
     """
-    if type(data_type) == DateType:
+    import pyarrow
+    from distutils.version import LooseVersion
+    # As of Arrow 0.12.0, date_as_object is True by default, see ARROW-3910
+    if LooseVersion(pyarrow.__version__) < LooseVersion("0.12.0") and type(data_type) == DateType:
 
 Review comment:
   Yup, it will be called a lot (per batch rather than per record, at least), and I also think it should ideally be called only once.
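   
   To make the "called once" idea concrete, here is a minimal sketch (illustrative only, not this PR's code): hoist the comparison into a module-level flag so it is evaluated once at import time rather than per batch. The `_ARROW_PRE_0_12` name is made up for the example, and the body just casts to `datetime.date` the way the surrounding function does.
   
   ```python
   from distutils.version import LooseVersion
   
   import pyarrow
   from pyspark.sql.types import DateType
   
   # Evaluated a single time, when the module is imported.
   _ARROW_PRE_0_12 = LooseVersion(pyarrow.__version__) < LooseVersion("0.12.0")
   
   
   def _check_series_convert_date(series, data_type):
       """Illustrative variant: reuse the precomputed flag on each call."""
       if _ARROW_PRE_0_12 and type(data_type) == DateType:
           # Before Arrow 0.12.0, to_pandas returned datetimes by default,
           # so cast down to datetime.date as the surrounding function does.
           return series.dt.date
       return series
   ```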
   
   I roughly guess this has been done this way so far because we can't be sure the PyArrow versions on the driver side and the worker side match. For instance, the two versions can differ, as far as I know, because we don't have a check for it (correct me if I am mistaken).
   
   Probably we should add a check like the one we do for the Python version between driver and worker, and have a few global checks; a rough sketch of what that could look like follows. Of course, we could do that separately, I guess.
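   
   For concreteness, a hypothetical sketch of such a check, modeled on the existing Python version handshake between driver and worker; `check_arrow_version` and the way the driver's version string reaches the worker are assumptions for illustration, not existing Spark APIs.
   
   ```python
   from distutils.version import LooseVersion
   
   
   def check_arrow_version(driver_arrow_version):
       # Hypothetical worker-side check: the driver's pyarrow version string
       # is assumed to be shipped to the worker, e.g. next to the Python
       # version string the worker already receives.
       import pyarrow
   
       worker_version = pyarrow.__version__
       if LooseVersion(worker_version) != LooseVersion(driver_arrow_version):
           raise RuntimeError(
               "PyArrow in worker has a different version (%s) than the one "
               "in driver (%s)." % (worker_version, driver_arrow_version))
   ```
   
   A strict equality check like this would fail fast at task start instead of failing later with an obscure conversion error; it could be loosened to a major/minor comparison if exact matching turns out to be too strict.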

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
