This is an automated email from the ASF dual-hosted git repository.

yangjie01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new bbe831575502 [SPARK-54555][PYTHON][TESTS][FOLLOW-UP] Doc build
bbe831575502 is described below

commit bbe8315755022a1a8066fbd22b6045902b4fbdfa
Author: Amanda Liu <[email protected]>
AuthorDate: Wed Dec 3 14:26:22 2025 +0800

    [SPARK-54555][PYTHON][TESTS][FOLLOW-UP] Doc build
    
    ### What changes were proposed in this pull request?
    
    Remove whitespace to restore docs build
    
    ### Why are the changes needed?
    
    Fix docs build in CI
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    Locally ran `make html` from the `python/docs` directory
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No
    
    Closes #53298 from asl3/docbuild.
    
    Authored-by: Amanda Liu <[email protected]>
    Signed-off-by: yangjie01 <[email protected]>
---
 python/docs/source/tutorial/sql/arrow_pandas.rst | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/python/docs/source/tutorial/sql/arrow_pandas.rst b/python/docs/source/tutorial/sql/arrow_pandas.rst
index 386fe83b4821..608307266f1f 100644
--- a/python/docs/source/tutorial/sql/arrow_pandas.rst
+++ b/python/docs/source/tutorial/sql/arrow_pandas.rst
@@ -379,15 +379,10 @@ and tuples to strings can yield ambiguous results. Arrow Python UDFs, on the oth
 capabilities to standardize type coercion and address these issues effectively.
 
 Type coercion differences are introduced by the following changes:
-* Since Spark 4.2, Arrow optimization is enabled by default for regular Python UDFs.
-The full type coercion difference is summarized in the tables `here <https://github.com/apache/spark/pull/41706>`__.
-To disable Arrow optimization, set ``spark.sql.execution.pythonUDF.arrow.enabled`` to false.
 
-* Since Spark 4.1, unnecessary conversion to pandas instances in Arrow-optimized Python UDF is removed in the serializer
-when ``spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled`` is disabled.
+* Since Spark 4.2, Arrow optimization is enabled by default for regular Python UDFs. The full type coercion difference is summarized in the tables `here <https://github.com/apache/spark/pull/41706>`__. To disable Arrow optimization, set ``spark.sql.execution.pythonUDF.arrow.enabled`` to false.
 
-The behavior difference is summarized in the tables `here <https://github.com/apache/spark/pull/51225>`__.
-To restore the legacy behavior, set ``spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled`` to true.
+* Since Spark 4.1, unnecessary conversion to pandas instances in Arrow-optimized Python UDF is removed in the serializer when ``spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled`` is disabled. The behavior difference is summarized in the tables `here <https://github.com/apache/spark/pull/51225>`__. To restore the legacy behavior, set ``spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled`` to true.
 
 Usage Notes
 -----------
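
Aside for readers of this thread: the two settings discussed in the patched section can also be supplied at submit time rather than in session code. A minimal config-fragment sketch, assuming a Spark 4.x deployment; `my_job.py` is a hypothetical script name, and the config names are copied verbatim from the diff above:

```shell
# Hypothetical spark-submit invocation; my_job.py is a placeholder script name.
# Config names come from the patched arrow_pandas.rst; defaults differ by Spark version.
spark-submit \
  --conf spark.sql.execution.pythonUDF.arrow.enabled=false \
  --conf spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled=true \
  my_job.py
```

The same keys can be set per session via `spark.conf.set(...)` before the first Python UDF is defined, which is usually more convenient when experimenting in a notebook.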


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
