Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/9633#issuecomment-155973129
Based on http://bugs.java.com/view_bug.do?bug_id=4073195, it sounds like
many *nix implementations of `Process.destroy()` work by sending `SIGTERM` to
the child process. As a result, anything that causes `SIGTERM` to be swallowed
or ignored by one of the child processes could keep this from working on Java 7.
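For illustration, here's a minimal standalone sketch (not part of this PR) of that behavior: on *nix, `destroy()` delivers `SIGTERM`, so a child that traps the signal survives the call. The shell commands and class name are just for demonstration:

```java
import java.io.IOException;

public class DestroyDemo {
    // Polls exitValue() because Java 7's Process has no waitFor(timeout).
    private static boolean exitsWithin(Process p, long millis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + millis;
        while (System.currentTimeMillis() < deadline) {
            try {
                p.exitValue(); // throws IllegalThreadStateException while the child is alive
                return true;
            } catch (IllegalThreadStateException stillRunning) {
                Thread.sleep(50);
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        // A well-behaved child: the SIGTERM from destroy() terminates it promptly.
        Process polite = new ProcessBuilder("sh", "-c", "sleep 600").start();
        polite.destroy();
        System.out.println("plain child exited: " + exitsWithin(polite, 2000)); // expect true

        // A child that ignores SIGTERM via the shell's `trap` builtin: destroy()
        // delivers the signal, but it is swallowed, so the process keeps running.
        Process stubborn = new ProcessBuilder("sh", "-c", "trap '' TERM; sleep 600").start();
        stubborn.destroy();
        System.out.println("trap-TERM child exited: " + exitsWithin(stubborn, 2000)); // expect false
        // (On Java 8+, destroyForcibly() sends SIGKILL, which cannot be trapped,
        // so the leaked child here would be cleaned up.)
    }
}
```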
PySpark used to be vulnerable to similar problems, so it includes a test case
which specifically checks the `SIGTERM`-handling behavior:
https://github.com/apache/spark/blob/b8ff6888e76b437287d7d6bf2d4b9c759710a195/python/pyspark/tests.py#L1580
I commented out the `handle.stop()` call and verified that the child
process stops almost immediately under Java 7, so it appears that this has
fixed the issue. We could try adding regression tests, but I'd also be fine
doing that as a follow-up; I'd like to get this fix in sooner rather than later
given the impact it will have on Jenkins performance.