Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/228#discussion_r10951648
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -814,8 +819,12 @@ class DAGScheduler(
event.reason match {
case Success =>
logInfo("Completed " + task)
-          if (event.accumUpdates != null) {
-            Accumulators.add(event.accumUpdates) // TODO: do this only if task wasn't resubmitted
+          if (!stageIdToAccumulators.contains(stage.id) ||
+              stageIdToAccumulators(stage.id).size < stage.numPartitions) {
+            stageIdToAccumulators.getOrElseUpdate(stage.id, new ListBuffer[(Long, Any)])
+            for ((id, value) <- event.accumUpdates) {
+              stageIdToAccumulators(stage.id) += id -> value
--- End diff ---
How does this work when tasks in a stage get retried? For example,
consider a stage with 3 tasks, where task 1 finishes first, then task 2
finishes, then a duplicate, speculated copy of task 2 finishes, and then
task 3 finishes. It seems like the accumulator values from task 2 will be
added twice, and the values from task 3 will never be added?
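
To make the concern concrete, here is a minimal, hypothetical sketch (not the
patch's actual code) of one way to avoid double-counting: track which partition
IDs have already contributed accumulator updates per stage, and only apply a
task's updates the first time its partition completes. The names
(`AccumDedup`, `onTaskSuccess`, `appliedPartitions`) are invented for
illustration and do not exist in Spark.

```scala
import scala.collection.mutable

// Hypothetical sketch: deduplicate accumulator updates by partition ID, so a
// speculated duplicate of a task does not get its updates applied twice.
object AccumDedup {
  // stageId -> partition IDs whose accumulator updates were already applied
  private val appliedPartitions = mutable.Map.empty[Int, mutable.Set[Int]]
  // accumulator id -> accumulated value (Long used here for simplicity)
  val accumulators: mutable.Map[Long, Long] =
    mutable.Map.empty[Long, Long].withDefaultValue(0L)

  def onTaskSuccess(stageId: Int, partitionId: Int,
                    updates: Seq[(Long, Long)]): Unit = {
    val done = appliedPartitions.getOrElseUpdate(stageId, mutable.Set.empty[Int])
    // Set.add returns false when the partition was already recorded, so a
    // duplicate (speculated or resubmitted) completion is a no-op.
    if (done.add(partitionId)) {
      for ((id, value) <- updates) {
        accumulators(id) += value
      }
    }
  }
}
```

With the scenario from the comment (task 2 completes twice, then task 3
completes), each partition's updates would be applied exactly once, so task 3's
values are counted and task 2's are not doubled.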
---