Github user ajbozarth commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12990#discussion_r73247048
  
    --- Diff: core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
    @@ -137,6 +138,17 @@ class JobProgressListener(conf: SparkConf) extends SparkListener with Logging {
         )
       }
     
    +  /** If Tasks is too large, remove and garbage collect old tasks */
    +  private def trimTasksIfNecessary(taskData: HashMap[Long, TaskUIData]) = synchronized {
    +    if (taskData.size > retainedTasks) {
    +      val toRemove = math.max(retainedTasks / 10, 1)
    +      val oldIds = taskData.map(_._2.taskInfo.taskId).toList.sorted.take(toRemove)
    --- End diff --
    
    Could you explain your reasoning behind doing this differently than the equivalent stages and jobs functions below? It just seems a bit redundant by comparison (doing all this work to build oldIds and then looping over them, rather than using trimStart), or I may just be missing some key Scala understanding.
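    
    For context, here's a self-contained sketch of the trimStart pattern I had in mind (the object name and the retained-count constant are made up for illustration; this is not the actual listener code, which reads its limit from SparkConf):
    
        import scala.collection.mutable.ListBuffer
    
        object TrimSketch {
          // Illustrative constant; the real listener reads the limit from SparkConf.
          val retainedStages = 1000
    
          /** trimStart-style trimming on an ordered buffer, as used for stages/jobs. */
          def trimIfNecessary[T](buffer: ListBuffer[T]): Unit = {
            if (buffer.size > retainedStages) {
              val toRemove = math.max(retainedStages / 10, 1)
              // Oldest entries sit at the front of the buffer, so one call drops them all.
              buffer.trimStart(toRemove)
            }
          }
    
          def main(args: Array[String]): Unit = {
            val buf = ListBuffer.tabulate(1200)(identity)
            trimIfNecessary(buf)
            println(buf.size) // 1100 = 1200 - 100
          }
        }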

