[ 
https://issues.apache.org/jira/browse/SPARK-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Ganelin updated SPARK-6616:
--------------------------------
    Description: 
There are numerous instances throughout the code base of the following:

{code}
if (!stopped) {
    stopped = true
    ...
}
{code}

In general, this is bad practice: the flag is flipped before the cleanup runs, so 
if an error occurs partway through shutdown, the remaining cleanup code never 
executes and a retried call to stop() becomes a no-op. An incomplete cleanup is 
harder to track down than a double cleanup that triggers some error. I propose 
fixing this throughout the code, starting with the cleanup sequence in 
{{SparkContext.stop()}}.

A cursory examination reveals this pattern in {{SparkContext.stop()}}, 
{{SparkEnv.stop()}}, and {{ContextCleaner.stop()}}.
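
A minimal standalone sketch of the safer ordering (this is illustrative, not 
Spark's actual code; the class and field names are hypothetical): the flag flips 
only after every cleanup step has completed, so a stop() that fails partway 
through can be retried instead of silently becoming a no-op.

{code}
// Hypothetical sketch, not Spark's implementation: set `stopped` last,
// after all cleanup has succeeded, so a failed stop() can be retried.
class StoppableService {
  @volatile private var stopped = false
  var cleanupAttempts = 0      // exposed only so the demo can observe it
  var failFirstAttempt = true  // simulates an error during shutdown

  def isStopped: Boolean = stopped

  def stop(): Unit = synchronized {
    if (!stopped) {
      cleanupAttempts += 1
      if (failFirstAttempt) {
        failFirstAttempt = false
        throw new RuntimeException("error during shutdown")
      }
      // Mark stopped only once every cleanup step has completed.
      stopped = true
    }
  }
}
{code}

With this ordering, a first stop() that throws leaves isStopped false and a 
second call re-runs the cleanup; with the flag-first pattern the second call 
would do nothing and the partial cleanup would go unnoticed.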



  was:
There are numerous instances throughout the code base of the following:

{code}
if (!stopped) {
    stopped = true
    ...
}
{code}

In general, this is bad practice since it can cause an incomplete cleanup if 
there is an error during shutdown and not all code executes. Incomplete cleanup 
is harder to track down than a double cleanup that triggers some error. I 
propose fixing this throughout the code, starting with the cleanup sequence 
in {{SparkContext.stop()}}.

A cursory examination reveals this in {{SparkContext.stop()}}, 
{{SparkEnv.stop()}}, and {{ContextCleaner.stop()}}.




> IsStopped set to true before stop() is complete.
> ------------------------------------------------
>
>                 Key: SPARK-6616
>                 URL: https://issues.apache.org/jira/browse/SPARK-6616
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.3.0
>            Reporter: Ilya Ganelin
>
> There are numerous instances throughout the code base of the following:
> {code}
> if (!stopped) {
>     stopped = true
>     ...
> }
> {code}
> In general, this is bad practice: the flag is flipped before the cleanup 
> runs, so if an error occurs partway through shutdown, the remaining cleanup 
> code never executes and a retried call to stop() becomes a no-op. An 
> incomplete cleanup is harder to track down than a double cleanup that 
> triggers some error. I propose fixing this throughout the code, starting 
> with the cleanup sequence in {{SparkContext.stop()}}.
> A cursory examination reveals this pattern in {{SparkContext.stop()}}, 
> {{SparkEnv.stop()}}, and {{ContextCleaner.stop()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
