GitHub user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/1680#issuecomment-50949931
  
    Had an offline discussion with @davies, who suggested that we send SIGKILL
instead of SIGHUP to kill the Python workers. This would prevent the workers
from lingering as orphaned processes if they overrode the SIGHUP handler or
became deadlocked in C code and were unable to respond to signals.
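
    To illustrate the difference (a minimal sketch, not code from this patch):
a worker can override its SIGHUP handler and survive the signal, but SIGKILL
cannot be caught or ignored.

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    # Child: override SIGHUP, as a misbehaving worker might.
    signal.signal(signal.SIGHUP, signal.SIG_IGN)
    while True:
        time.sleep(1)

time.sleep(0.5)               # crude: give the child time to install its handler
os.kill(pid, signal.SIGHUP)   # ignored; the worker keeps running
time.sleep(0.5)
os.kill(pid, signal.SIGKILL)  # cannot be overridden; the worker dies
os.waitpid(pid, 0)            # reap the child so no zombie is left behind
```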
    
    Any edge cases in the SIGHUP handling here should also be present in the
original version, so I'm inclined to merge this patch now and revisit the
SIGKILL suggestion in a later patch. We may also want to take additional time
to consider supporting cleaner termination of workers: first attempt a graceful
shutdown so `atexit` handlers and `finally` blocks have a chance to run, then
send SIGKILL to guarantee termination.
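
    A rough sketch of what that two-phase shutdown could look like (the
`terminate_worker` helper here is hypothetical, not part of this patch or
the daemon code):

```python
import os
import signal
import time

def terminate_worker(pid, grace_period=5.0):
    """Hypothetical helper: ask a worker to exit cleanly, then force-kill it."""
    # SIGTERM gives a cooperating worker (one that handles SIGTERM, e.g. by
    # raising SystemExit) a chance to run atexit handlers and finally blocks.
    os.kill(pid, signal.SIGTERM)
    deadline = time.time() + grace_period
    while time.time() < deadline:
        exited_pid, _ = os.waitpid(pid, os.WNOHANG)  # non-blocking reap
        if exited_pid == pid:
            return  # worker exited gracefully within the grace period
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)  # guarantee termination
    os.waitpid(pid, 0)            # reap so the dead worker isn't left a zombie
```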

