BryanCutler commented on a change in pull request #24867: [SPARK-28041][PYTHON]
Increase minimum supported Pandas to 0.23.2
URL: https://github.com/apache/spark/pull/24867#discussion_r294000627
##########
File path: docs/sql-migration-guide-upgrade.md
##########
@@ -23,6 +23,10 @@ license: |
{:toc}
## Upgrading From Spark SQL 2.4 to 3.0
+ - Since Spark 3.0, PySpark requires a Pandas version of 0.23.2 or higher to use Pandas related functionality, such as `toPandas`, `createDataFrame` from Pandas DataFrame, etc.
+
+ - Since Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
+
Review comment:
Added a note about the minimum PyArrow version. Further down, here
https://github.com/apache/spark/pull/24867/files#diff-3f19ec3d15dcd8cd42bb25dde1c5c1a9L58,
we talk about safe casting, which I think is still relevant, so I won't modify
it unless it seems confusing to talk about versions < 0.12.1?
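
For reference, a minimal sketch of the user-facing PySpark calls these minimum versions cover; the app name is just an illustrative placeholder, and `spark.sql.execution.arrow.enabled` is the same flag quoted in the diff above:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType
import pandas as pd

# App name is a hypothetical placeholder for this sketch.
spark = SparkSession.builder.appName("arrow-min-versions-demo").getOrCreate()

# Arrow-based conversions are only used when this config is enabled
# (and need PyArrow >= 0.12.1 per the note above).
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

# createDataFrame from a Pandas DataFrame (needs Pandas >= 0.23.2).
pdf = pd.DataFrame({"id": [1, 2, 3], "v": [0.1, 0.2, 0.3]})
df = spark.createDataFrame(pdf)

# A scalar pandas_udf, which also goes through Arrow.
@pandas_udf("double", PandasUDFType.SCALAR)
def plus_one(v):
    return v + 1

# toPandas() converts back to a Pandas DataFrame via Arrow when enabled.
result = df.select(plus_one(df["v"]).alias("v_plus_one")).toPandas()
print(result)
```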