This is an automated email from the ASF dual-hosted git repository.

haejoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 46c5accaa551 [SPARK-49609][PYTHON][TESTS][FOLLOW-UP] Skip Spark Connect tests if dependencies are not found
46c5accaa551 is described below

commit 46c5accaa55101fe59bce916c17516a70fdfe134
Author: Hyukjin Kwon <[email protected]>
AuthorDate: Wed Sep 25 19:04:02 2024 +0900

    [SPARK-49609][PYTHON][TESTS][FOLLOW-UP] Skip Spark Connect tests if dependencies are not found
    
    ### What changes were proposed in this pull request?
    
    This PR is a follow-up of https://github.com/apache/spark/pull/48085 that skips the compatibility tests if the Spark Connect dependencies are not installed.
    
    ### Why are the changes needed?
    
    To recover the PyPy3 build https://github.com/apache/spark/actions/runs/11016544408/job/30592416115, which does not have PyArrow installed.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, test-only.
    
    ### How was this patch tested?
    
    Manually.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No
    
    Closes #48239 from HyukjinKwon/SPARK-49609-followup.
    
    Authored-by: Hyukjin Kwon <[email protected]>
    Signed-off-by: Haejoon Lee <[email protected]>
---
 python/pyspark/sql/tests/test_connect_compatibility.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/python/pyspark/sql/tests/test_connect_compatibility.py b/python/pyspark/sql/tests/test_connect_compatibility.py
index ca1f828ef4d7..8f3e86f5186a 100644
--- a/python/pyspark/sql/tests/test_connect_compatibility.py
+++ b/python/pyspark/sql/tests/test_connect_compatibility.py
@@ -18,6 +18,7 @@
 import unittest
 import inspect
 
+from pyspark.testing.connectutils import should_test_connect, connect_requirement_message
 from pyspark.testing.sqlutils import ReusedSQLTestCase
 from pyspark.sql.classic.dataframe import DataFrame as ClassicDataFrame
 from pyspark.sql.connect.dataframe import DataFrame as ConnectDataFrame
@@ -172,6 +173,7 @@ class ConnectCompatibilityTestsMixin:
         )
 
 
[email protected](not should_test_connect, connect_requirement_message)
 class ConnectCompatibilityTests(ConnectCompatibilityTestsMixin, ReusedSQLTestCase):
     pass
 


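For readers unfamiliar with the pattern the diff applies: `@unittest.skipIf` marks an entire test class as skipped when its condition is true, which is how PySpark gates Connect tests behind optional dependencies such as PyArrow. A minimal, self-contained sketch of the same gating technique (the dependency check and message below are illustrative stand-ins, not Spark's actual `should_test_connect` / `connect_requirement_message` helpers):

```python
import importlib.util
import unittest

# Probe for an optional dependency without importing it.
# PyArrow is the dependency the PyPy3 build lacks in this case.
have_pyarrow = importlib.util.find_spec("pyarrow") is not None
requirement_message = "pyarrow is not installed"

@unittest.skipIf(not have_pyarrow, requirement_message)
class OptionalDependencyTests(unittest.TestCase):
    """Every test in this class is skipped when the dependency is absent."""

    def test_placeholder(self):
        self.assertTrue(True)
```

When the dependency is missing, the class is reported as skipped (with the message) rather than erroring at import time, which is exactly the failure mode the follow-up fixes.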
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
