attilapiros commented on PR #39775:
URL: https://github.com/apache/spark/pull/39775#issuecomment-1494119540

   @srowen 
   
   > Should this be specific to Kubernetes? 
   
   The original https://github.com/apache/spark/pull/32283 was Kubernetes 
specific. This PR just adds a new config that keeps the old behaviour as the 
default while also making the new one available. 
   
   > Does it need to be a config or a method you can call? 
   
   Unfortunately there is a use case for both behaviours. See the next point.
   
   > Actually, why would you not kill the contexts after main exits in any case?
   
   I bumped into this change when I analysed an application where Spark was 
used as a job server. 
   With the older Spark it was running just fine, but after an update it was 
stopping at the very beginning.
   The app was built on top of Spring Boot, where job requests were served via 
REST.     
   In a Spring Boot app the main method just initialises / registers the REST 
handlers, and the serving of new requests is done on separate threads. With the 
new behaviour the Spark context was closed right after the initialisation.
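   To illustrate the failure mode, here is a minimal sketch (plain Java, no actual Spring Boot or Spark dependency; all names such as `JobServerSketch` and `contextOpen` are hypothetical stand-ins): main() only sets things up and returns, while the real serving happens on a separate non-daemon thread that outlives main().

   ```java
   import java.util.concurrent.CountDownLatch;

   public class JobServerSketch {
       // Stand-in for a long-lived SparkContext that REST handlers rely on.
       static volatile boolean contextOpen = false;

       public static void main(String[] args) throws Exception {
           contextOpen = true; // stand-in for creating the SparkContext

           CountDownLatch started = new CountDownLatch(1);
           // Stand-in for the REST serving thread a Spring Boot app would run.
           Thread server = new Thread(() -> {
               started.countDown();
               // ...would serve job requests here, assuming contextOpen == true
           });
           server.setDaemon(false); // non-daemon: keeps the JVM alive after main() returns
           server.start();
           started.await();
           // main() returns here. With a "stop the context when main exits"
           // policy, the context would be closed underneath the server thread,
           // even though the application is only just starting to serve requests.
       }
   }
   ```

   The point of the sketch: main() exiting is not a useful signal that the application is done when the work is dispatched to other threads, which is why the old behaviour is needed for job-server-style apps.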
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

