This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 97976a5cc91 [SPARK-41034][CONNECT][TESTS][FOLLOWUP] `connectutils` should be skipped when pandas is not installed
97976a5cc91 is described below

commit 97976a5cc915597fd2606602d18c52c075a03bf6
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Mon Dec 5 22:35:25 2022 -0800

    [SPARK-41034][CONNECT][TESTS][FOLLOWUP] `connectutils` should be skipped when pandas is not installed
    
    ### What changes were proposed in this pull request?
    
    This PR aims to fix two errors.
    ```
    $ python/run-tests --testnames pyspark.sql.tests.connect.test_connect_column_expressions
    ...
    NameError: name 'SparkSession' is not defined
    
    ...
    NameError: name 'LogicalPlan' is not defined
    ```
    
    ### Why are the changes needed?
    
    Previously, `connect` tests were ignored when `pandas` was not available, but `connectutils` itself still referenced `SparkSession` and `LogicalPlan` unconditionally, causing the `NameError`s above.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Manually.
    ```
    $ python/run-tests --testnames pyspark.sql.tests.connect.test_connect_column_expressions
    ...
    Finished test(python3): pyspark.sql.tests.connect.test_connect_column_expressions (0s) ... 9 tests were skipped
    Tests passed in 0 seconds
    
    Skipped tests in pyspark.sql.tests.connect.test_connect_column_expressions with python3:
          test_binary_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.002s)
          test_column_alias (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_column_literals (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_float_nan_inf (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_map_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_null_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_numeric_literal_types (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_simple_column_expressions (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
          test_uuid_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
    ```
    
    Closes #38928 from dongjoon-hyun/SPARK-41034.
    
    Authored-by: Dongjoon Hyun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 python/pyspark/testing/connectutils.py | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/python/pyspark/testing/connectutils.py b/python/pyspark/testing/connectutils.py
index feca9e9f825..05df6b02e67 100644
--- a/python/pyspark/testing/connectutils.py
+++ b/python/pyspark/testing/connectutils.py
@@ -69,7 +69,8 @@ class MockRemoteSession:
 class PlanOnlyTestFixture(unittest.TestCase):
 
     connect: "MockRemoteSession"
-    session: SparkSession
+    if have_pandas:
+        session: SparkSession
 
     @classmethod
     def _read_table(cls, table_name: str) -> "DataFrame":
@@ -95,9 +96,10 @@ class PlanOnlyTestFixture(unittest.TestCase):
     def _session_sql(cls, query: str) -> "DataFrame":
         return DataFrame.withPlan(SQL(query), cls.connect)  # type: ignore
 
-    @classmethod
-    def _with_plan(cls, plan: LogicalPlan) -> "DataFrame":
-        return DataFrame.withPlan(plan, cls.connect)  # type: ignore
+    if have_pandas:
+        @classmethod
+        def _with_plan(cls, plan: LogicalPlan) -> "DataFrame":
+            return DataFrame.withPlan(plan, cls.connect)  # type: ignore
 
     @classmethod
     def setUpClass(cls: Any) -> None:

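The pattern in the diff can be sketched in isolation. The key point is that module- and class-level annotations are evaluated eagerly when the class body runs, so any name coming from an optional dependency must only appear inside an `if have_pandas:` block. The module and class names below are hypothetical stand-ins, not the actual pyspark code:

```python
import importlib.util
import unittest

# Detect the optional dependency without importing it unconditionally.
have_pandas = importlib.util.find_spec("pandas") is not None

if have_pandas:
    from pandas import DataFrame


class Fixture(unittest.TestCase):
    # A bare class-level annotation such as `frame: DataFrame` is evaluated
    # when the class body executes, so it would raise NameError whenever the
    # `DataFrame` import above was skipped -- even if every test method is
    # later skipped. Guarding the members themselves avoids that.
    if have_pandas:
        frame: DataFrame

        @classmethod
        def make_frame(cls) -> DataFrame:
            return DataFrame({"a": [1, 2]})
```

With this guard, importing the module never fails; when pandas is absent the class simply lacks the guarded members, and the skip machinery can then ignore the tests cleanly.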

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
