zhengchenyu opened a new pull request, #7239: URL: https://github.com/apache/hadoop/pull/7239
### Description of PR

I found that the PublicLocalizer thread was exiting because the /tmp directory had been deleted by mistake; this caused an NPE and left a Spark job stuck. [YARN-9968](https://issues.apache.org/jira/browse/YARN-9968) resolved the NPE itself. However, I think that when the `PublicLocalizer` thread exits, the NM should be shut down, because there is no point in keeping an abnormal NM running.

### How was this patch tested?

Manual test.

### For code changes:

- [x] When the PublicLocalizer thread exits, shut down the NodeManager (see the sketch below).
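The change proposed here amounts to escalating an abnormal exit of the localizer worker thread into a NodeManager shutdown, instead of letting the thread die silently. Below is a minimal, self-contained Java sketch of that pattern. It is not the actual Hadoop code; names such as `FatalExitHandler` and `PublicLocalizerLike` are illustrative assumptions.

```java
// A minimal sketch (not the actual Hadoop code) of the behaviour this PR
// proposes: when the public-localizer worker thread dies unexpectedly, the
// failure is escalated to a service-wide shutdown instead of being swallowed.
// FatalExitHandler and PublicLocalizerLike are hypothetical names.
public class LocalizerShutdownSketch {

  /** Callback the worker uses to ask the enclosing service to stop. */
  interface FatalExitHandler {
    void onFatalExit(String reason, Throwable cause);
  }

  /** Simplified stand-in for the PublicLocalizer worker thread. */
  static class PublicLocalizerLike extends Thread {
    private final FatalExitHandler exitHandler;
    volatile boolean simulateMissingTmpDir = false;

    PublicLocalizerLike(FatalExitHandler exitHandler) {
      super("PublicLocalizer-sketch");
      this.exitHandler = exitHandler;
      setDaemon(true);
    }

    @Override
    public void run() {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          doOneLocalization();
        }
      } catch (InterruptedException ie) {
        // An orderly interrupt/stop is not a fatal condition.
        Thread.currentThread().interrupt();
      } catch (Throwable t) {
        // Any other reason for the loop to die (e.g. an NPE because a local
        // dir such as /tmp disappeared) is escalated instead of leaving the
        // process running without a working public localizer.
        exitHandler.onFatalExit("PublicLocalizer exiting abnormally", t);
      }
    }

    /** Placeholder for taking one completed localization and publishing it. */
    private void doOneLocalization() throws InterruptedException {
      if (simulateMissingTmpDir) {
        throw new NullPointerException("local dir vanished");
      }
      Thread.sleep(50);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    PublicLocalizerLike localizer = new PublicLocalizerLike((reason, cause) -> {
      // A real NodeManager would go through its own shutdown path here
      // (e.g. a dispatched shutdown event); System.exit is only for the demo.
      System.err.println(reason + ": " + cause);
      System.exit(1);
    });
    localizer.start();
    Thread.sleep(200);
    localizer.simulateMissingTmpDir = true; // trigger the abnormal exit
    localizer.join();
  }
}
```

The key design choice is that the escalation goes through a single callback rather than `System.exit` scattered in the worker, so the enclosing service can decide how to shut itself down.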
