[ https://issues.apache.org/jira/browse/HADOOP-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
V.V.Chaitanya Krishna updated HADOOP-6439:
------------------------------------------
Attachment: HADOOP-6439-3.patch
Uploading a new patch with the above comments implemented.
I tested it on trunk against the scenario Owen mentioned in the comments above, and it worked successfully.
I was not able to test it on 0.21 because of a problem with the TaskTracker crashing (the logs show {{java.lang.NoSuchMethodError: org.apache.hadoop.ipc.RPC.waitForProxy}}).
> Shuffle deadlocks on wrong number of maps
> -----------------------------------------
>
> Key: HADOOP-6439
> URL: https://issues.apache.org/jira/browse/HADOOP-6439
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 0.21.0, 0.22.0
> Reporter: Owen O'Malley
> Assignee: Owen O'Malley
> Priority: Blocker
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6439-1.patch, HADOOP-6439-2.patch,
> HADOOP-6439-3.patch, mr-1252.patch
>
>
> The new shuffle assumes that the number of maps is correct. The new
> JobSubmitter sets the old value. Something misfires in the middle causing:
> 09/12/01 00:00:15 WARN conf.Configuration: mapred.job.split.file is deprecated. Instead, use mapreduce.job.splitfile
> 09/12/01 00:00:15 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
> But my reduces got stuck at 2 / 12 maps completed, even though there were only 2 maps in the job.
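For reference, the deprecation warnings above come from the 0.21 renaming of job-configuration keys; a job that sets the old names gets them translated (with a warning) to the new ones. A minimal sketch of a job configuration using the new key names (the values shown are illustrative, not from this issue):

```xml
<?xml version="1.0"?>
<!-- Illustrative mapred-site.xml fragment using the post-rename keys.
     The old names (mapred.map.tasks, mapred.job.split.file) still work
     but trigger the deprecation warnings quoted above. -->
<configuration>
  <property>
    <!-- Replaces the deprecated mapred.map.tasks -->
    <name>mapreduce.job.maps</name>
    <value>2</value>
  </property>
</configuration>
```

The bug here is that the new shuffle reads the map count from the new key while the new JobSubmitter still writes the old one, so a reduce can wait on a stale map count.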
--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.