[
https://issues.apache.org/jira/browse/SPARK-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Josh Rosen updated SPARK-4787:
------------------------------
Summary: Clean up resources in SparkContext if errors occur during
DAGScheduler initialization (was: Resource unreleased during failure in
SparkContext initialization)
> Clean up resources in SparkContext if errors occur during DAGScheduler
> initialization
> -------------------------------------------------------------------------------------
>
> Key: SPARK-4787
> URL: https://issues.apache.org/jira/browse/SPARK-4787
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Affects Versions: 1.1.0
> Reporter: Jacky Li
> Fix For: 1.3.0
>
>
> When a client creates a SparkContext, many vals are initialized during
> object construction. If any of these initializations fails, for example by
> throwing an exception, the resources already acquired by the SparkContext
> are not released properly.
> For example, the SparkUI object is created and bound to the HTTP server
> during initialization using
> {{ui.foreach(_.bind())}}
> but if anything goes wrong after this code (say, an exception is thrown
> while creating the DAGScheduler), the SparkUI server is never stopped, so
> the port bind fails when the client creates another SparkContext. This
> means the client cannot create a second SparkContext in the same process,
> which I think is not reasonable.
> So, I suggest refactoring the SparkContext code to release resources when
> initialization fails.
>
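The cleanup pattern suggested in the issue can be sketched in Scala roughly as follows. This is a minimal, self-contained illustration, not Spark's actual code: `PortRegistry`, `Context`, the port number, and the failure flag are hypothetical stand-ins for the SparkUI HTTP server bind and the DAGScheduler creation step.

```scala
import scala.collection.mutable

// Hypothetical stand-in for an HTTP server port: binding the same port twice
// fails, just as it would when a leaked SparkUI keeps the real port open.
object PortRegistry {
  private val bound = mutable.Set.empty[Int]
  def bind(port: Int): Unit = {
    if (bound.contains(port))
      throw new IllegalStateException(s"port $port already in use")
    bound += port
  }
  def release(port: Int): Unit = bound -= port
}

// Sketch of a context that binds its UI port first, then runs a later
// initialization step that may fail (standing in for DAGScheduler creation).
// On failure it releases what it already acquired before rethrowing, so a
// second attempt in the same process can bind the same port again.
class Context(uiPort: Int, failScheduler: Boolean) {
  PortRegistry.bind(uiPort)
  try {
    if (failScheduler) throw new RuntimeException("DAGScheduler init failed")
  } catch {
    case e: Throwable =>
      PortRegistry.release(uiPort) // clean up partially acquired resources
      throw e
  }
}
```

Without the `catch` block releasing the port, a second `new Context(4040, failScheduler = false)` after a failed first attempt would fail with "port already in use", which is exactly the symptom the issue describes.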
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]