Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1679#issuecomment-50946913
One thing that is still slightly worrying is that every time we refresh the
executors page, there is a noticeable delay if we have persisted tens of
thousands of blocks. This is because we still iterate through all the blocks
there to find out how much memory each executor currently uses. We can fix this
by incrementally updating memory and disk usage for non-RDD blocks, similar to
what this PR does for RDD blocks. However, this issue does not affect the event
consumption rate, so I may fix it separately.
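
To illustrate the idea, here is a rough sketch of incrementally maintaining per-executor totals as block events arrive, rather than summing over all blocks when the page is rendered. The names (`BlockStatus`, `StorageStatus`, `updateBlock`) only loosely follow Spark's storage module; this is an illustration of the approach, not the actual implementation in the PR.

```scala
// Hypothetical sketch: keep running totals for non-RDD blocks so that
// rendering the executors page is O(1) instead of O(#blocks).
case class BlockStatus(memSize: Long, diskSize: Long)

class StorageStatus {
  private val nonRddBlocks = collection.mutable.Map[String, BlockStatus]()
  // Running totals, updated on every block event instead of being
  // recomputed by iterating over all blocks at render time.
  private var _memUsed: Long = 0L
  private var _diskUsed: Long = 0L

  def updateBlock(id: String, status: BlockStatus): Unit = {
    // Subtract the old contribution (if any), then add the new one.
    val old = nonRddBlocks.getOrElse(id, BlockStatus(0L, 0L))
    _memUsed += status.memSize - old.memSize
    _diskUsed += status.diskSize - old.diskSize
    if (status.memSize == 0L && status.diskSize == 0L) {
      nonRddBlocks.remove(id)   // block was dropped from this executor
    } else {
      nonRddBlocks(id) = status
    }
  }

  // Constant-time reads for the executors page.
  def memUsed: Long = _memUsed
  def diskUsed: Long = _diskUsed
}
```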