GitHub user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3121#issuecomment-62459306
I've come up with a slightly modified approach which I think strikes a nice balance:
- At the start of the constructor, throw an exception if we know for sure that another SparkContext is active in this JVM.
- If another SparkContext might be under construction (or has thrown an exception during construction), allow the new SparkContext to begin construction but log a warning.
- At the _end_ of the SparkContext constructor, check whether some other SparkContext has raced with us and won (thus becoming the active context). If so, throw an exception.
Basically, this guarantees that no two SparkContexts will ever be simultaneously active and exposed to users, since we check at the very end of the constructor. In cases where the SparkContext constructor throws an exception, we'll log a warning (since resources might have been leaked), but we won't prevent future attempts to create SparkContexts.
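To make the ordering concrete, here's a rough sketch of the kind of guard I have in mind. All names here (`SparkContextGuard`, `markPartiallyConstructed`, `setActiveContext`) are illustrative only, not the actual identifiers in this patch:
```scala
// Sketch of the locking scheme described above; names are hypothetical.
object SparkContextGuard {
  // Guards both fields below.
  private val lock = new Object
  // The fully constructed context that users can see, if any.
  private var activeContext: Option[AnyRef] = None
  // A context whose constructor has started but not yet finished
  // (or which failed partway through construction).
  private var contextBeingConstructed: Option[AnyRef] = None

  // Called at the start of the SparkContext constructor.
  def markPartiallyConstructed(ctx: AnyRef): Unit = lock.synchronized {
    if (activeContext.isDefined) {
      // We know for sure another context is active: fail fast.
      throw new IllegalStateException(
        "Only one SparkContext may be active in this JVM")
    }
    if (contextBeingConstructed.isDefined) {
      // Another context *might* be under construction (or might have
      // failed partway through): warn, but let construction proceed.
      Console.err.println(
        "WARN: another SparkContext may be under construction; " +
        "resources might have been leaked")
    }
    contextBeingConstructed = Some(ctx)
  }

  // Called at the very end of the SparkContext constructor.
  def setActiveContext(ctx: AnyRef): Unit = lock.synchronized {
    if (activeContext.isDefined) {
      // Some other constructor raced with us and won, so we refuse to
      // expose a second active context to users.
      throw new IllegalStateException(
        "Another SparkContext became active during construction")
    }
    contextBeingConstructed = None
    activeContext = Some(ctx)
  }
}
```
Since both checks take the same lock, the end-of-constructor check is what actually enforces "at most one active context"; the start-of-constructor check and the warning just fail fast and surface potential leaks earlier.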
Does this sound like a reasonable approach?