This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 78e77a78372 [SPARK-40006][PYTHON][DOCS][FOLLOW-UP] Remove unused Spark context and duplicated Spark session initialization
78e77a78372 is described below

commit 78e77a78372c0028051f558c5e6f82decec88fd1
Author: Hyukjin Kwon <[email protected]>
AuthorDate: Wed Aug 10 10:22:33 2022 +0900

    [SPARK-40006][PYTHON][DOCS][FOLLOW-UP] Remove unused Spark context and duplicated Spark session initialization
    
    ### What changes were proposed in this pull request?
    
    This PR is a follow-up of https://github.com/apache/spark/pull/37437, which missed removing the unused `sc` and the duplicated Spark session initialization.
    
    ### Why are the changes needed?
    
    To make the examples consistent and to remove unused variables.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No. It's a documentation change; in addition, the previous PR has not been released yet.
    
    ### How was this patch tested?
    
    CI in this PR should test it out.
    
    Closes #37457 from HyukjinKwon/SPARK-40006-followup.
    
    Authored-by: Hyukjin Kwon <[email protected]>
    Signed-off-by: Hyukjin Kwon <[email protected]>
---
 python/pyspark/sql/group.py | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/python/pyspark/sql/group.py b/python/pyspark/sql/group.py
index 2fbe76aa5ae..71cc693882c 100644
--- a/python/pyspark/sql/group.py
+++ b/python/pyspark/sql/group.py
@@ -414,7 +414,6 @@ class GroupedData(PandasGroupedOpsMixin):
         Examples
         --------
         >>> from pyspark.sql import Row
-        >>> spark = SparkSession.builder.master("local[4]").appName("sql.group tests").getOrCreate()
         >>> df1 = spark.createDataFrame([
         ...     Row(course="dotNET", year=2012, earnings=10000),
         ...     Row(course="Java", year=2012, earnings=20000),
@@ -491,8 +490,6 @@ def _test() -> None:
 
     globs = pyspark.sql.group.__dict__.copy()
     spark = SparkSession.builder.master("local[4]").appName("sql.group tests").getOrCreate()
-    sc = spark.sparkContext
-    globs["sc"] = sc
     globs["spark"] = spark
 
     (failure_count, test_count) = doctest.testmod(
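
For reference, below is a minimal sketch of the doctest harness pattern this change leaves in place in python/pyspark/sql/group.py. The lines outside the hunks above (imports, doctest option flags, teardown and exit handling) are reconstructed from the usual PySpark _test() layout rather than taken from this diff, so treat them as an approximation:

import doctest
import sys

import pyspark.sql.group
from pyspark.sql import SparkSession


def _test() -> None:
    # Copy the module namespace so doctests can reference module-level names.
    globs = pyspark.sql.group.__dict__.copy()
    spark = SparkSession.builder.master("local[4]").appName("sql.group tests").getOrCreate()
    # Only `spark` is referenced by the doctests now; the SparkContext
    # previously exposed as globs["sc"] was unused, so it is no longer created.
    globs["spark"] = spark

    (failure_count, test_count) = doctest.testmod(
        pyspark.sql.group,
        globs=globs,
        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
    )
    spark.stop()
    if failure_count:
        sys.exit(-1)


if __name__ == "__main__":
    _test()

Because the harness injects `spark` into the doctest globals, the example at line 414 can call spark.createDataFrame(...) directly, which is why the duplicated builder line inside the doctest could be dropped.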


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
