This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new a6dd6076d70 [SPARK-39273][PS][TESTS] Make PandasOnSparkTestCase inherit ReusedSQLTestCase
a6dd6076d70 is described below

commit a6dd6076d708713d11585bf7f3401d522ea48822
Author: Hyukjin Kwon <gurwls...@apache.org>
AuthorDate: Wed May 25 09:56:30 2022 +0900

    [SPARK-39273][PS][TESTS] Make PandasOnSparkTestCase inherit ReusedSQLTestCase
    
    ### What changes were proposed in this pull request?
    
    This PR proposes to make `PandasOnSparkTestCase` inherit `ReusedSQLTestCase`.
    
    ### Why are the changes needed?
    
    We don't need this:
    
    ```python
        @classmethod
        def tearDownClass(cls):
            # We don't stop Spark session to reuse across all tests.
            # The Spark session will be started and stopped at PyTest session level.
            # Please see pyspark/pandas/conftest.py.
            pass
    ```
    
    anymore in Apache Spark. This override existed to speed up the tests when the code lived in the Koalas repository, where the tests ran sequentially in a single process.
    
    In Apache Spark, the tests run in multiple processes, so we don't need this anymore.
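
    For illustration, a minimal sketch of how a test builds on the new base class (the test class and its test method are hypothetical, not part of this commit; `assertPandasEqual` and the inherited class-level session handling are taken from `PandasOnSparkTestCase`/`ReusedSQLTestCase` as shown in the diff below):

    ```python
    import pandas as pd

    from pyspark import pandas as ps
    from pyspark.testing.pandasutils import PandasOnSparkTestCase


    class HypotheticalRoundTripTest(PandasOnSparkTestCase):
        # No custom tearDownClass: setUpClass/tearDownClass are inherited
        # through ReusedSQLTestCase, so the Spark session is set up once per
        # test class by the base classes and reused across the tests here.
        def test_from_pandas_roundtrip(self):
            pdf = pd.DataFrame({"a": [1, 2, 3]})
            psdf = ps.from_pandas(pdf)
            self.assertPandasEqual(psdf.to_pandas(), pdf)
    ```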
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, test-only.
    
    ### How was this patch tested?
    
    Existing CI should test it out.
    
    Closes #36652 from HyukjinKwon/SPARK-39273.
    
    Authored-by: Hyukjin Kwon <gurwls...@apache.org>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 python/pyspark/testing/pandasutils.py | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/python/pyspark/testing/pandasutils.py b/python/pyspark/testing/pandasutils.py
index 9b07a23ae1b..baa43e5b9d5 100644
--- a/python/pyspark/testing/pandasutils.py
+++ b/python/pyspark/testing/pandasutils.py
@@ -18,7 +18,6 @@
 import functools
 import shutil
 import tempfile
-import unittest
 import warnings
 from contextlib import contextmanager
 from distutils.version import LooseVersion
@@ -32,9 +31,8 @@ from pyspark import pandas as ps
 from pyspark.pandas.frame import DataFrame
 from pyspark.pandas.indexes import Index
 from pyspark.pandas.series import Series
-from pyspark.pandas.utils import default_session, SPARK_CONF_ARROW_ENABLED
-from pyspark.testing.sqlutils import SQLTestUtils
-
+from pyspark.pandas.utils import SPARK_CONF_ARROW_ENABLED
+from pyspark.testing.sqlutils import ReusedSQLTestCase
 
 tabulate_requirement_message = None
 try:
@@ -61,19 +59,12 @@ except ImportError as e:
 have_plotly = plotly_requirement_message is None
 
 
-class PandasOnSparkTestCase(unittest.TestCase, SQLTestUtils):
+class PandasOnSparkTestCase(ReusedSQLTestCase):
     @classmethod
     def setUpClass(cls):
-        cls.spark = default_session()
+        super(PandasOnSparkTestCase, cls).setUpClass()
         cls.spark.conf.set(SPARK_CONF_ARROW_ENABLED, True)
 
-    @classmethod
-    def tearDownClass(cls):
-        # We don't stop Spark session to reuse across all tests.
-        # The Spark session will be started and stopped at PyTest session level.
-        # Please see pyspark/pandas/conftest.py.
-        pass
-
     def assertPandasEqual(self, left, right, check_exact=True):
         if isinstance(left, pd.DataFrame) and isinstance(right, pd.DataFrame):
             try:


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
