dongjoon-hyun commented on a change in pull request #27383: [SPARK-29367][DOC][2.4] Add compatibility note for Arrow 0.15.0 to SQL guide
URL: https://github.com/apache/spark/pull/27383#discussion_r372601905
##########
File path: docs/sql-pyspark-pandas-with-arrow.md
##########

@@ -165,3 +165,20 @@ Note that a standard UDF (non-Pandas) will load timestamp data as Python datetim
 different than a Pandas timestamp. It is recommended to use Pandas time series functionality when
 working with timestamps in `pandas_udf`s to get the best performance, see
 [here](https://pandas.pydata.org/pandas-docs/stable/timeseries.html) for details.
+
+### Compatibility Setting for PyArrow >= 0.15.0 and Spark 2.3.x, 2.4.x
+
+Since Arrow 0.15.0, a change in the binary IPC format requires an environment variable to be
+compatible with previous versions of Arrow <= 0.14.1. This is only necessary to do for PySpark
+users with versions 2.3.x and 2.4.x that have manually upgraded PyArrow to 0.15.0. The following
+can be added to `conf/spark-env.sh` to use the legacy Arrow IPC format:
+
+```
+ARROW_PRE_0_15_IPC_FORMAT=1

Review comment:
   Is this the only setting we need?
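[Editor's note, not part of the original comment] As a minimal sketch of what the documented setting does in practice: the snippet below shows one way a PySpark 2.4.x user with a manually upgraded PyArrow >= 0.15.0 might supply the variable without editing `conf/spark-env.sh`. The `spark.executorEnv.*` propagation, the app name, and the driver-side `os.environ` call are illustrative assumptions; the approach this PR documents is exporting the variable in `conf/spark-env.sh`, which reaches the driver and executors on each node.

```python
# Sketch only (assumes PySpark 2.3.x/2.4.x with PyArrow >= 0.15.0 installed).
# The documented approach is ARROW_PRE_0_15_IPC_FORMAT=1 in conf/spark-env.sh;
# this shows an alternative per-application way to supply the same variable.
import os

from pyspark.sql import SparkSession

# Make the variable visible to the driver-side Python process before the
# session starts, so Arrow serialization in createDataFrame/toPandas uses
# the legacy (pre-0.15) IPC stream format.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"

spark = (
    SparkSession.builder
    .appName("arrow-legacy-ipc-example")  # hypothetical app name
    # Propagate the same variable to the executor-side Python workers that
    # execute pandas_udfs, via the standard spark.executorEnv.* mechanism.
    .config("spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT", "1")
    .getOrCreate()
)
```

Either route only switches PyArrow back to the pre-0.15 stream format on the wire; it is a sketch of the workaround the added doc text describes, not a replacement for it.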
