srowen commented on issue #24807: [SPARK-27958][SQL] Stopping a SparkSession should not always stop Spark Context
URL: https://github.com/apache/spark/pull/24807#issuecomment-500879466
 
 
   I think the current `SparkSession.close()` behavior is intentional, and it's coherent on its own terms (i.e. don't shut anything down until you're done, and then everything shuts down at once). What's inconsistent with that is keeping state in the session that can't be cleared.
   
   I think the ways forward are probably:
   - A new lifecycle method like `clear()`? More burden on the user, but it at least provides _some_ means of cleanup without changing `close()` (a rough sketch follows this list)
   - Figure out how to dispose of those resources automatically, or not hold them in the first place
   - Just change the session's `close()` so it doesn't stop the context. That's a behavior change, yes, but perhaps the least surprising option.
   
   Eh, do people like @cloud-fan or @gatorsmile or @HyukjinKwon or @dongjoon-hyun have thoughts on this? I feel like reference counting is eventually going to end in tears here, but it's not crazy; a toy sketch of the idea is below.
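   
   For completeness, a toy version of the reference-counting idea, just to show the shape of it; the names (`CountedSessions`, `acquire`, `release`) are hypothetical, and the fragility is exactly the problem (a double-release or a leaked session breaks the count):
   
   ```scala
   import java.util.concurrent.atomic.AtomicInteger
   import org.apache.spark.sql.SparkSession
   
   // Toy sketch: count live sessions and stop the shared SparkContext
   // only when the last one is released. Not Spark API.
   object CountedSessions {
     private val live = new AtomicInteger(0)
   
     def acquire(): SparkSession = {
       live.incrementAndGet()
       // newSession() shares the SparkContext but gets fresh session state.
       SparkSession.builder().getOrCreate().newSession()
     }
   
     def release(session: SparkSession): Unit = {
       if (live.decrementAndGet() == 0) {
         // Last session out: now it is safe to stop the shared context.
         session.sparkContext.stop()
       }
       // Otherwise leave the context running for the remaining sessions.
     }
   }
   ```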
