Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22295#discussion_r224983406
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -2633,6 +2633,23 @@ def sequence(start, stop, step=None):
             _to_java_column(start), _to_java_column(stop), _to_java_column(step)))
     
     
    +@since(3.0)
    +def getActiveSession():
    +    """
    +    Returns the active SparkSession for the current thread
    +    """
    +    from pyspark.sql import SparkSession
    +    sc = SparkContext._active_spark_context
    --- End diff ---
    
    Yea, we should match the behaviour with the Scala side - that was essentially my point. The problem with the previous approach was that the session was being handled within Python - I believe we will basically reuse the JVM's session implementation rather than reimplementing separate Python session support on the PySpark side.
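
    For reference, a minimal sketch of what reusing the JVM's session implementation could look like - the py4j wiring via `sc._jvm.SparkSession` is an assumption here, not the final API:

    ```python
    from pyspark import SparkContext
    from pyspark.sql import SparkSession


    def getActiveSession():
        """Returns the active SparkSession for the current thread, or None."""
        sc = SparkContext._active_spark_context
        if sc is None:
            # No active SparkContext means no JVM, hence no active session.
            return None
        # Ask the JVM for its active session (a Scala Option) and wrap it,
        # rather than tracking a separate session on the Python side.
        jsession = sc._jvm.SparkSession.getActiveSession()
        if jsession.isDefined():
            return SparkSession(sc, jsession.get())
        return None
    ```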
    
    > What about if sc is `None` we just return `None` since we can't have an activeSession without an active SparkContext -- does that sound reasonable?
    
    In that case, I think we should follow Scala's behaviour.
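
    Concretely, mirroring Scala's `Option[SparkSession]` semantics would mean returning `None` rather than raising. Hypothetical usage under that assumption:

    ```python
    # Before any SparkContext exists, there is nothing to return.
    assert getActiveSession() is None

    # getOrCreate() registers the new session as active on the JVM side
    # (assuming the sketch above), so it becomes visible here.
    spark = SparkSession.builder.master("local[1]").appName("demo").getOrCreate()
    assert getActiveSession() is not None
    spark.stop()
    ```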


