Can you share your complete stack with us?
- Java and Hadoop versions?
- Config files?
- Cluster layout? (number of NameNodes and DataNodes)
- OS?
If possible, your hardware specs too.

Best wishes

On 07/06/2012 10:57 AM, Robert Evans wrote:
What version of hadoop are you using?

From: Stephen Boesch <java...@gmail.com <mailto:java...@gmail.com>>
Reply-To: "mapreduce-user@hadoop.apache.org" <mapreduce-user@hadoop.apache.org>
To: "mapreduce-user@hadoop.apache.org" <mapreduce-user@hadoop.apache.org>
Subject: Job exceeded Reduce Input limit


I am running a (terasort) job on a small cluster with powerful nodes. The number of reducer slots is 12. I am seeing the following message:

Job JOBID="job_201207031814_0011" FINISH_TIME="1341389866650" JOB_STATUS="FAILED" FINISHED_MAPS="42" FINISHED_REDUCES="0" FAIL_REASON="Job exceeded Reduce Input limit Limit: 10737418240 Estimated: 102000004905"
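To put the numbers from the failure message in perspective, a quick sanity check (plain arithmetic on the two byte counts quoted above):

```python
# Byte counts taken verbatim from the FAIL_REASON above.
limit = 10_737_418_240        # mapreduce.reduce.input.limit (10 GiB)
estimated = 102_000_004_905   # estimated total reduce input for the job

gib = 1024 ** 3
print(f"limit:     {limit / gib:.1f} GiB")      # -> 10.0 GiB
print(f"estimated: {estimated / gib:.1f} GiB")  # -> 95.0 GiB
print(f"overrun:   {estimated / limit:.1f}x")   # -> 9.5x
```

So the estimated reduce input is roughly 95 GiB against a 10 GiB limit, about a 9.5x overrun, which is why the job is rejected up front rather than failing mid-run.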


Now this apparently was added recently:

http://mail-archives.apache.org/mod_mbox/hadoop-common-commits/201103.mbox/%3c20110304042718.5854e2388...@eris.apache.org%3E


It looks like the solution would be to set mapreduce.reduce.input.limit to -1:


<property>
  <name>mapreduce.reduce.input.limit</name>
  <value>-1</value>
  <description>The limit on the input size of the reduce. If the estimated
  input size of the reduce is greater than this value, job is failed. A
  value of -1 means that there is no limit set.</description>
</property>
I did that (in mapred-site.xml), but it did not affect the behavior; the problem persists.
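One thing worth checking: this property may be read from the JobTracker's own configuration rather than from the client-side job conf, in which case editing mapred-site.xml on a client or submission node would have no effect. A hedged sketch of what to try, assuming a Hadoop 1.x-style layout (paths and jar names below are assumptions, adjust for your install):

```shell
# 1. Add the property to mapred-site.xml on the JobTracker host itself:
#      <name>mapreduce.reduce.input.limit</name> / <value>-1</value>
#    (assumed location: $HADOOP_HOME/conf/mapred-site.xml)

# 2. Restart the MapReduce daemons so the JobTracker rereads its config:
$HADOOP_HOME/bin/stop-mapred.sh
$HADOOP_HOME/bin/start-mapred.sh

# 3. Optionally also pass the setting per-job via the generic -D option
#    (harmless if ignored, effective if the job conf is consulted):
hadoop jar hadoop-examples.jar terasort \
  -Dmapreduce.reduce.input.limit=-1 \
  /terasort-input /terasort-output
```

If the restart is what fixes it, that confirms the limit is enforced from the daemon-side configuration.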
Any hints appreciated.
thx!







