Github user CodingCat commented on the pull request:

    https://github.com/apache/spark/pull/6263#issuecomment-159640761
  
    In the current version of the patch, we use an expiration time to prevent
too many dead executors from appearing on the UI. This brings inconvenient
overhead, since it makes the UI component depend on Guava... Additionally,
there are cases where executors fail and restart over and over within a very
short period (I ran into this in some of my applications when I introduced a
bug; I can't remember exactly what happened).
    
    I'm thinking we might be able to just cap the maximum number of rows in
the table, as we do in many other places (master/worker UI, etc.); see the
sketch below. Even if we stick with the expiration time, wouldn't
TimeStampedHashMap be a cleaner solution?
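
    To illustrate the capping idea, here is a minimal sketch, not the actual
patch: `retainedDeadExecutors`, `ExecutorSummary`, and `onExecutorRemoved` are
all hypothetical names, with the cap playing the same role as existing limits
like `spark.ui.retainedStages`:

    ```scala
    import scala.collection.mutable

    object DeadExecutorCap {
      // Hypothetical cap, analogous to limits like spark.ui.retainedStages.
      val retainedDeadExecutors: Int = 100

      // Placeholder for whatever per-executor info the UI table renders.
      case class ExecutorSummary(id: String, removeTime: Long)

      // Insertion-ordered list of dead executors shown on the UI page.
      private val deadExecutors = mutable.ArrayBuffer.empty[ExecutorSummary]

      def onExecutorRemoved(summary: ExecutorSummary): Unit = {
        deadExecutors += summary
        // Evict the oldest rows once the cap is exceeded, the same way the
        // master/worker UIs trim their retained applications and drivers.
        if (deadExecutors.size > retainedDeadExecutors) {
          deadExecutors.remove(0, deadExecutors.size - retainedDeadExecutors)
        }
      }
    }
    ```

    A fixed cap like this needs no timers or external dependencies, and the
table stays bounded even when executors churn rapidly.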


