Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4871#issuecomment-77037115
@o-mdr This is not a race condition, so there are no concurrency issues
here. The issue is that Spark sometimes calls `sc.stop()` internally when
something goes wrong, and the application may then call `sc.stop()` again
afterward, since that is what we recommend. The second call results in a benign
error being logged that does not convey the root cause of the problem.
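For illustration only, a minimal sketch of the kind of idempotent guard this
describes, assuming `stop()` is protected by an `AtomicBoolean`; the class and
member names below are hypothetical, not the actual Spark code:

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical context class: stop() is idempotent, so a second call
// (e.g. user code calling stop() after Spark has already stopped internally)
// is a silent no-op instead of logging a misleading error.
class DemoContext {
  private val stopped = new AtomicBoolean(false)

  def stop(): Unit = {
    // Only the first caller flips the flag and performs the shutdown work.
    if (!stopped.compareAndSet(false, true)) {
      return
    }
    // ... release resources, shut down the scheduler, etc.
  }
}
```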
Also, note that any test we add here is not particularly meaningful because
it would have passed even before this patch. For this reason I decided not to
add one.