[ https://issues.apache.org/jira/browse/SPARK-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15004527#comment-15004527 ]
Reynold Xin commented on SPARK-8029:
------------------------------------

[~davies] can you update the jira ticket description with the high-level approach used in the fix?

> ShuffleMapTasks must be robust to concurrent attempts on the same executor
> --------------------------------------------------------------------------
>
>                 Key: SPARK-8029
>                 URL: https://issues.apache.org/jira/browse/SPARK-8029
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0
>            Reporter: Imran Rashid
>            Assignee: Davies Liu
>            Priority: Critical
>             Fix For: 1.6.0
>
>         Attachments: AlternativesforMakingShuffleMapTasksRobusttoMultipleAttempts.pdf
>
>
> When stages get retried, a task may have more than one attempt running at the
> same time, on the same executor. Currently this causes problems for
> ShuffleMapTasks, since all attempts try to write to the same output files.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
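The ticket does not spell out the eventual fix, but a standard way to make concurrent attempts on the same executor safe is to have each attempt write to its own attempt-specific temporary file and then atomically rename it to the final shuffle output path. A minimal sketch of that pattern follows; the function name, file-naming scheme, and `attempt_id` parameter are illustrative assumptions, not Spark's actual code:

```python
import os

def write_shuffle_output(final_path, data, attempt_id):
    """Write shuffle data so that concurrent attempts cannot corrupt output.

    Illustrative sketch only; not Spark's real ShuffleMapTask code.
    """
    # Each attempt writes to its own temp file (unique per attempt id),
    # so concurrent attempts never clobber each other's partial writes.
    tmp_path = "%s.%d.tmp" % (final_path, attempt_id)
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # Atomic rename (same filesystem): the final file is always a
    # complete copy from exactly one attempt.  Because every attempt of
    # the same task produces identical output, it does not matter which
    # attempt's rename wins the race.
    os.rename(tmp_path, final_path)
```

Under this scheme a retried attempt racing the original is harmless: each rename installs a complete, valid file, never a partially written one.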