GitHub user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/3121#discussion_r20413759
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -57,12 +57,27 @@ import org.apache.spark.util._
 * Main entry point for Spark functionality. A SparkContext represents the connection to a Spark
 * cluster, and can be used to create RDDs, accumulators and broadcast variables on that cluster.
 *
+ * Only one SparkContext may be active per JVM. You must `stop()` the active SparkContext before
+ * creating a new one. This limitation will eventually be removed; see SPARK-2243 for more details.
+ *
 * @param config a Spark Config object describing the application configuration. Any settings in
 *   this config overrides the default configs as well as system properties.
 */
class SparkContext(config: SparkConf) extends SparkStatusAPI with Logging {
+  // The call site where this SparkContext was constructed.
+  private val creationSite: CallSite = Utils.getCallSite()
+
+  // If true, log warnings instead of throwing exceptions when multiple SparkContexts are active
+  private val allowMultipleContexts: Boolean =
+    config.getBoolean("spark.driver.allowMultipleContexts", false)
+
+
+  // In order to prevent multiple SparkContexts from being active at the same time, mark this
+  // context as having started construction
+  SparkContext.markPartiallyConstructed(this, allowMultipleContexts)
--- End diff ---
and similar for the corresponding call at the end?
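For readers following the thread: a minimal sketch of what such a companion-object guard could look like, including the "corresponding call at the end" of the constructor referenced above. Only `markPartiallyConstructed`, `creationSite`, `allowMultipleContexts`, and the `spark.driver.allowMultipleContexts` setting appear in the diff; the names `SPARK_CONTEXT_CONSTRUCTOR_LOCK`, `activeContext`, `contextBeingConstructed`, and `setActiveContext` are assumptions for illustration, not a verbatim quote of the PR.

// Hypothetical sketch, assumed to live in package org.apache.spark with
// SparkException, Logging, and org.apache.spark.util.CallSite in scope.
object SparkContext extends Logging {

  // Lock guarding the two Option fields below.
  private val SPARK_CONTEXT_CONSTRUCTOR_LOCK = new Object()

  private var activeContext: Option[SparkContext] = None
  private var contextBeingConstructed: Option[SparkContext] = None

  // Called first thing in the SparkContext constructor: fail (or only warn,
  // if spark.driver.allowMultipleContexts is true) when another context is
  // already active or under construction in this JVM.
  private[spark] def markPartiallyConstructed(
      sc: SparkContext,
      allowMultipleContexts: Boolean): Unit = {
    SPARK_CONTEXT_CONSTRUCTOR_LOCK.synchronized {
      (activeContext ++ contextBeingConstructed).foreach { other =>
        val msg = "Only one SparkContext may be active per JVM; another one " +
          s"was created at:\n${other.creationSite.longForm}"
        if (allowMultipleContexts) logWarning(msg) else throw new SparkException(msg)
      }
      contextBeingConstructed = Some(sc)
    }
  }

  // The "corresponding call at the end" of the constructor: promote this
  // context from partially-constructed to active. (The second parameter is
  // kept only for symmetry with markPartiallyConstructed.)
  private[spark] def setActiveContext(
      sc: SparkContext,
      allowMultipleContexts: Boolean): Unit = {
    SPARK_CONTEXT_CONSTRUCTOR_LOCK.synchronized {
      contextBeingConstructed = None
      activeContext = Some(sc)
    }
  }
}

As the diff indicates, a user who hits the exception can either `stop()` the existing context before creating a new one, or set spark.driver.allowMultipleContexts=true to downgrade the failure to a logged warning.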