HyukjinKwon commented on a change in pull request #27383: 
[SPARK-29367][DOC][2.4] Add compatibility note for Arrow 0.15.0 to SQL guide
URL: https://github.com/apache/spark/pull/27383#discussion_r372706570
 
 

 ##########
 File path: docs/sql-pyspark-pandas-with-arrow.md
 ##########
 @@ -165,3 +165,20 @@ Note that a standard UDF (non-Pandas) will load timestamp data as Python datetime objects, which is
 different than a Pandas timestamp. It is recommended to use Pandas time series functionality when
 working with timestamps in `pandas_udf`s to get the best performance, see
 [here](https://pandas.pydata.org/pandas-docs/stable/timeseries.html) for details.
+
+### Compatibility Setting for PyArrow >= 0.15.0 and Spark 2.3.x, 2.4.x
+
+Since Arrow 0.15.0, a change in the binary IPC format requires an environment variable to be set for
+compatibility with previous versions of Arrow <= 0.14.1. This is only necessary for PySpark users
+with versions 2.3.x and 2.4.x who have manually upgraded PyArrow to 0.15.0. The following can be
+added to `conf/spark-env.sh` to use the legacy Arrow IPC format:
+
+```
+ARROW_PRE_0_15_IPC_FORMAT=1
+```
+
+This will instruct PyArrow >= 0.15.0 to use the legacy IPC format with the older Arrow Java that
+is in Spark 2.3.x and 2.4.x. Not setting this environment variable will lead to a similar error as
 
 Review comment:
   Yeah, I am not very fond of mentioning 2.3.x either but I guess it's fine.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
