[
https://issues.apache.org/jira/browse/HADOOP-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Devaraj Das updated HADOOP-2141:
--------------------------------
Attachment: 2141.6.patch
Attaching a patch with minor changes. For a sort job on a ~200 node cluster,
the number of speculative tasks launched with this patch is only ~10% of the
number launched with trunk, while the job run time is almost the same. Of the
tasks chosen for speculation, at least 30% were chosen correctly; in some
cases I even saw 100%.
> speculative execution start up condition based on completion time
> -----------------------------------------------------------------
>
> Key: HADOOP-2141
> URL: https://issues.apache.org/jira/browse/HADOOP-2141
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Affects Versions: 0.21.0
> Reporter: Koji Noguchi
> Assignee: Andy Konwinski
> Fix For: 0.21.0
>
> Attachments: 2141.4.patch, 2141.5.patch, 2141.6.patch, 2141.patch,
> HADOOP-2141-v2.patch, HADOOP-2141-v3.patch, HADOOP-2141-v4.patch,
> HADOOP-2141-v5.patch, HADOOP-2141-v6.patch, HADOOP-2141.patch,
> HADOOP-2141.v7.patch, HADOOP-2141.v8.patch
>
>
> We had one job with speculative execution hang.
> 4 reduce tasks were stuck with 95% completion because of a bad disk.
> Devaraj pointed out
> bq. One of the conditions that must be met for launching a speculative
> instance of a task is that it must be at least 20% behind the average
> progress, and this is not true here.
> It would be nice if speculative execution also started up when tasks stop
> making progress.
> Devaraj suggested
> bq. Maybe, we should introduce a condition for average completion time for
> tasks in the speculative execution check.