Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/1056#discussion_r15527486
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -56,7 +56,7 @@ private[jobs] object UIData {
}
case class TaskUIData(
-    taskInfo: TaskInfo,
-    taskMetrics: Option[TaskMetrics] = None,
-    errorMessage: Option[String] = None)
+    var taskInfo: TaskInfo,
+    var taskMetrics: Option[TaskMetrics] = None,
+    var errorMessage: Option[String] = None)
--- End diff ---
If we have 100 executors with 24 cores each and a heartbeat interval of 2
seconds, we'll end up with about 1,200 new objects per second (100 x 24 = 2,400
running tasks, each reporting every 2 seconds). That doesn't seem enormous to
me, but it does seem like unnecessary overhead. Are there hidden costs in Scala
to keeping this mutable?
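
For context, a minimal Scala sketch of the allocation trade-off under
discussion. `Info`, `Metrics`, and the two `TaskUIData` variants below are
hypothetical stand-ins, not the real Spark classes: the immutable variant
allocates a fresh object via copy() on every heartbeat update, while the
mutable variant (what this diff proposes) reuses one object per task.

    // Hypothetical stand-ins for TaskInfo / TaskMetrics; the real Spark types are richer.
    case class Info(taskId: Long, host: String)
    case class Metrics(executorRunTime: Long)

    // Immutable variant: every heartbeat update allocates a new object via copy().
    case class ImmutableTaskUIData(
      info: Info,
      metrics: Option[Metrics] = None,
      errorMessage: Option[String] = None)

    // Mutable variant (what the diff proposes): heartbeats overwrite fields in place,
    // so the UI keeps exactly one object per task for the task's lifetime.
    case class MutableTaskUIData(
      var info: Info,
      var metrics: Option[Metrics] = None,
      var errorMessage: Option[String] = None)

    object HeartbeatSketch extends App {
      val immutable = ImmutableTaskUIData(Info(1L, "host-a"))
      // A new object is created on every heartbeat update:
      val updated = immutable.copy(metrics = Some(Metrics(executorRunTime = 42L)))

      val mutable = MutableTaskUIData(Info(1L, "host-a"))
      // The same object is reused on every heartbeat update:
      mutable.metrics = Some(Metrics(executorRunTime = 42L))

      println(updated.metrics)
      println(mutable.metrics)
    }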