This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new b896b17  [SPARK-32301][PYTHON][TESTS] Add a test case for toPandas to work with empty partitioned Spark DataFrame
b896b17 is described below

commit b896b173eae6a2f323498debb32b0979441d0126
Author: HyukjinKwon <[email protected]>
AuthorDate: Wed Jul 15 08:44:48 2020 +0900

    [SPARK-32301][PYTHON][TESTS] Add a test case for toPandas to work with empty partitioned Spark DataFrame
    
    ### What changes were proposed in this pull request?
    
    This PR proposes to port the test case from https://github.com/apache/spark/pull/29098 to branch-3.0 and master. In master and branch-3.0, this was fixed together at https://github.com/apache/spark/commit/ecaa495b1fe532c36e952ccac42f4715809476af, but the no-partition case was not being tested.
    
    ### Why are the changes needed?
    
    To improve test coverage.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, test-only.
    
    ### How was this patch tested?
    
    Unit test was forward-ported.
    
    Closes #29099 from HyukjinKwon/SPARK-32300-1.
    
    Authored-by: HyukjinKwon <[email protected]>
    Signed-off-by: HyukjinKwon <[email protected]>
    (cherry picked from commit 676d92ecceb3d46baa524c725b9f9a14450f1e9d)
    Signed-off-by: HyukjinKwon <[email protected]>
---
 python/pyspark/sql/tests/test_arrow.py | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/python/pyspark/sql/tests/test_arrow.py b/python/pyspark/sql/tests/test_arrow.py
index 1386f8d..7a41d24 100644
--- a/python/pyspark/sql/tests/test_arrow.py
+++ b/python/pyspark/sql/tests/test_arrow.py
@@ -421,6 +421,13 @@ class ArrowTests(ReusedSQLTestCase):
             self.spark.createDataFrame(
                 pd.DataFrame({'a': [1, 2, 3]}, index=[2., 3., 4.])).distinct().count(), 3)
 
+    def test_no_partition_toPandas(self):
+        # SPARK-32301: toPandas should work from a Spark DataFrame with no partitions
+        # Forward-ported from SPARK-32300.
+        pdf = self.spark.sparkContext.emptyRDD().toDF("col1 int").toPandas()
+        self.assertEqual(len(pdf), 0)
+        self.assertEqual(list(pdf.columns), ["col1"])
+
 
 @unittest.skipIf(
     not have_pandas or not have_pyarrow,


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
