Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20955#discussion_r178677153
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala ---
    @@ -39,7 +39,7 @@ private[ui] class AllExecutionsPage(parent: SQLTab) extends WebUIPage("") with L
         val failed = new mutable.ArrayBuffer[SQLExecutionUIData]()
     
         sqlStore.executionsList().foreach { e =>
    -      val isRunning = e.jobs.exists { case (_, status) => status == JobExecutionStatus.RUNNING }
    +      val isRunning = e.completionTime.isEmpty
    --- End diff ---
    
    I think you actually need both checks. The code in 2.2 is this:
    
    ```
          if (executionUIData.completionTime.nonEmpty && !executionUIData.hasRunningJobs) {
            // We are the last job of this execution, so mark the execution as finished. Note that
            // `onExecutionEnd` also does this, but currently that can be called before `onJobEnd`
            // since these are called on different threads.
            markExecutionFinished(executionId)
          }
    ```
    
    The original reason the code is written this way is that job events and SQL execution events could arrive out of order; I don't know whether that is still true, but I tried to maintain the same workarounds in the new code.
    
    If the out-of-order issue still exists, your change would introduce the opposite problem: an execution could be marked as completed while its known jobs are still running, because the execution end event arrived before the job end events.
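
    Something like this (an untested sketch, assuming `e` still exposes `completionTime` and `jobs` as in the diff above) would keep both checks:

    ```
      // Treat the execution as running until it has a completion time AND
      // none of its known jobs is still in the RUNNING state.
      val isRunning = e.completionTime.isEmpty ||
        e.jobs.exists { case (_, status) => status == JobExecutionStatus.RUNNING }
    ```

    That is just the 2.2 condition above, negated into the "is running" form this page uses.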


---
