HyukjinKwon commented on a change in pull request #24807: [SPARK-27958][SQL]
Stopping a SparkSession should not always stop Spark Context
URL: https://github.com/apache/spark/pull/24807#discussion_r388680156
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
##########
@@ -981,17 +988,26 @@ object SparkSession extends Logging {
    * @since 2.0.0
    */
   def setActiveSession(session: SparkSession): Unit = {
-    activeThreadSession.set(session)
+    if (session != getActiveSession.get && getActiveSession.isDefined) {
+      numActiveSessions.getAndIncrement
Review comment:
If the leak is a problem, we should fix the leak itself rather than change the behaviour.
The current behaviour is documented and users are relying on it.
We can understand `.stop()` as being essentially `.stopContext()`, no?
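For context, a minimal sketch of the documented behaviour users rely on today (local-mode example; the names `spark` and `other` are just illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
val other = spark.newSession()  // shares the same underlying SparkContext

spark.stop()  // today this stops the shared SparkContext as well,
              // so `other` can no longer run jobs either
assert(spark.sparkContext.isStopped)
```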
I don't think we should just change this without a guard. Every other project built on Spark, such as Zeppelin, would need to revisit how it stops sessions, and it would make it difficult for them to support multiple Spark versions, for instance.
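To illustrate the kind of guarding meant above, here is a rough sketch of how the old semantics could be kept behind a flag. The config key `spark.sql.session.stopSparkContextOnStop` and the helper `stopSession` are hypothetical, not existing Spark APIs:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical helper, not part of Spark: only stops the shared SparkContext
// when the (made-up) flag asks for today's documented semantics.
def stopSession(session: SparkSession): Unit = {
  val stopContext = session.sparkContext.getConf
    .getBoolean("spark.sql.session.stopSparkContextOnStop", true)
  if (stopContext) {
    session.stop()                      // current behaviour: SparkContext goes down too
  } else {
    SparkSession.clearActiveSession()   // tear down only session-level bookkeeping
    SparkSession.clearDefaultSession()
  }
}
```

Defaulting the flag to the old behaviour would keep existing callers and downstream projects working unchanged across versions.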
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]