HyukjinKwon commented on a change in pull request #25315:
[SPARK-28582][PYSPARK] Fix pyspark daemon exit failed when receive SIGTERM on
Python 3.7
URL: https://github.com/apache/spark/pull/25315#discussion_r309483871
##########
File path: python/pyspark/daemon.py
##########
@@ -102,7 +102,7 @@ def shutdown(code):
signal.signal(SIGTERM, SIG_DFL)
# Send SIGHUP to notify workers of shutdown
os.kill(0, SIGHUP)
- sys.exit(code)
+ os._exit(code)
Review comment:
Per the Python doc:
> _exit() should normally only be used in the child process after a fork().
Do we still trigger the cleanup handling as before? I suspect it fails to
terminate its forked processes for some reason, and `os._exit` does not
seem to guarantee that the workers are terminated.
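
For context, here is a minimal standalone sketch (not from this PR) of the
behavioral difference being asked about: `sys.exit` raises `SystemExit`, so
`atexit` cleanup handlers run, while `os._exit` terminates the process
immediately at the OS level and skips them.

    import atexit
    import os
    import sys

    @atexit.register
    def cleanup():
        # Runs when the interpreter shuts down normally (e.g. via
        # sys.exit), but NOT when os._exit is called.
        print("cleanup handler ran in pid %d" % os.getpid())

    pid = os.fork()
    if pid == 0:
        # Child: os._exit skips atexit handlers and buffered-I/O
        # flushing, which is why the docs recommend it only after fork().
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        # Parent: sys.exit raises SystemExit, so cleanup() runs here.
        sys.exit(0)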
Can you check whether they are really dead by manually inspecting the open
file descriptors? You can check with `lsof` if you're on a Mac.
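
A rough sketch of such a check (the pid and the helper name are
hypothetical; `lsof -p <pid>` lists the open files of a process on
macOS/Linux):

    import os
    import subprocess

    def check_worker(pid):
        # Probe whether a worker pid is still alive; signal 0 performs
        # no action, it only checks for existence/permission.
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            print("pid %d is dead" % pid)
            return
        print("pid %d is still alive; open files:" % pid)
        subprocess.run(["lsof", "-p", str(pid)])

    check_worker(12345)  # replace 12345 with an actual worker pid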