Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15986#discussion_r89252629
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -89,9 +89,11 @@ private[spark] class TaskSchedulerImpl(
val nextTaskId = new AtomicLong(0)
// Number of tasks running on each executor
- private val executorIdToTaskCount = new HashMap[String, Int]
+ private val executorIdToRunningTaskIds = new HashMap[String, HashSet[Long]]
--- End diff ---
To preemptively address any concerns about the memory-usage implications
of this change: the hash set sizes should be bounded by the number of
cores / task slots on the executor, so I don't think there's much to be
gained by using a more memory-efficient set structure here. The only real
optimization that I could think of would be to replace each per-executor set
with a fixed-size array that we just linearly scan, but that seems like
premature optimization and adds a lot of hard-to-reason-about complexity.
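
For illustration, here is a minimal sketch of the bookkeeping this map
enables (the class and method names below are hypothetical, not the actual
TaskSchedulerImpl code):

```scala
import scala.collection.mutable.{HashMap, HashSet}

// Hypothetical sketch of per-executor running-task bookkeeping.
// Each executor's set stays bounded by its task slots, since an
// executor can only run that many tasks concurrently.
class RunningTaskTracker {
  private val executorIdToRunningTaskIds = new HashMap[String, HashSet[Long]]

  // Record a task launching on an executor.
  def taskLaunched(executorId: String, taskId: Long): Unit = {
    executorIdToRunningTaskIds.getOrElseUpdate(executorId, new HashSet[Long]) += taskId
  }

  // Record a task finishing. (Removing the map entry itself, e.g. on
  // executor loss, is omitted from this sketch.)
  def taskFinished(executorId: String, taskId: Long): Unit = {
    executorIdToRunningTaskIds.get(executorId).foreach(_ -= taskId)
  }

  // The old HashMap[String, Int] supported only this count; keeping the
  // full sets of IDs also tells us *which* tasks run on each executor.
  def runningTaskCount(executorId: String): Int =
    executorIdToRunningTaskIds.get(executorId).map(_.size).getOrElse(0)
}
```

Since each set holds at most slots-many Longs per executor, the extra memory
over the old per-executor Int counter should be modest.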