Can you configure the mapreduce_shuffle class name as below and check?

<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
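
As far as I know, this has to go together with the service name registration in the same yarn-site.xml; your file below already has that part:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

The ShuffleHandler is the NodeManager auxiliary service that serves map outputs to the reducers. If it is not running on the slaves, every fetch fails and the reduce side bails out with exactly the MAX_FAILED_UNIQUE_FETCHES error in your log. Push the change to every node and restart the NodeManagers (sbin/yarn-daemon.sh stop nodemanager, then start nodemanager) before re-running the job.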


-----Original Message-----
From: Vincent,Wei [mailto:[email protected]] 
Sent: 25 March 2014 10:05 AM
To: [email protected]
Subject: About Map 100% reduce 0% issue

All

I am a newcomer to Hadoop. I have run the wordcount example from
hadoop-mapreduce-examples-2.2.0.jar, but the job always hangs at map 100%
and reduce 0%.

14/03/25 20:19:20 INFO client.RMProxy: Connecting to ResourceManager at master/159.99.249.63:8032
14/03/25 20:19:20 INFO input.FileInputFormat: Total input paths to process : 1
14/03/25 20:19:20 INFO mapreduce.JobSubmitter: number of splits:1
14/03/25 20:19:20 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/03/25 20:19:20 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/03/25 20:19:20 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/03/25 20:19:20 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/03/25 20:19:20 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/03/25 20:19:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1395747600383_0002
14/03/25 20:19:20 INFO impl.YarnClientImpl: Submitted application application_1395747600383_0002 to ResourceManager at master/159.99.249.63:8032
14/03/25 20:19:20 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1395747600383_0002/
14/03/25 20:19:20 INFO mapreduce.Job: Running job: job_1395747600383_0002
14/03/25 20:19:24 INFO mapreduce.Job: Job job_1395747600383_0002 running in uber mode : false
14/03/25 20:19:24 INFO mapreduce.Job:  map 0% reduce 0%
14/03/25 20:19:28 INFO mapreduce.Job:  map 100% reduce 0%
14/03/25 20:19:31 INFO mapreduce.Job: Task Id : attempt_1395747600383_0002_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#5
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:323)
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:245)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:347)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
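
From what I have been able to find, this exception means the reduce task repeatedly failed to copy map outputs from the other nodes, and the shuffle gave up after too many failed fetches from distinct hosts.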

Someone said this can be caused by the hosts configuration, so I have
checked /etc/hosts on the master and all slaves:
127.0.0.1       localhost.localdomain localhost
159.99.249.63   master
159.99.249.203  slave1
159.99.249.99   slave2
159.99.249.88   slave3
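
Is there anything else I should verify here, for example that running hostname on each node returns exactly the name listed above, and that no node maps its own hostname onto the 127.0.0.1 line?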

Would you please help me fix this issue? Many thanks.

My yarn-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<configuration>

  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <description>The address of the container manager in the NM.</description>
    <name>yarn.nodemanager.address</name>
    <value>${yarn.nodemanager.hostname}:8041</value>
  </property>

</configuration>

My mapred-site.xml:

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.reduce.shuffle.merge.percent</name>
    <value>0.33</value>
    <description>The usage threshold at which an in-memory merge will be
    initiated, expressed as a percentage of the total memory allocated to
    storing in-memory map outputs, as defined by
    mapreduce.reduce.shuffle.input.buffer.percent.
    </description>
  </property>

  <property>
    <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
    <value>0.35</value>
    <description>The percentage of memory to be allocated from the maximum heap
    size to storing map outputs during the shuffle.
    </description>
  </property>

  <property>
    <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
    <value>0.12</value>
    <description>Expert: Maximum percentage of the in-memory limit that a
    single shuffle can consume.
    </description>
  </property>

</configuration>
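
If I understand these three shuffle settings correctly, then with, say, a 1 GB reduce-task heap, 0.35 reserves about 350 MB for buffering map outputs in memory, the in-memory merge starts once roughly 0.33 x 350 MB = 115 MB of that buffer is filled, and any single map output larger than 0.12 x 350 MB = 42 MB is fetched straight to disk instead of being held in memory.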


--
BR,

Vincent.Wei
