Dieter De Witte created HADOOP-10042:
----------------------------------------
Summary: Heap space error during copy from maptask to reduce task
Key: HADOOP-10042
URL: https://issues.apache.org/jira/browse/HADOOP-10042
Project: Hadoop Common
Issue Type: Bug
Components: conf
Affects Versions: 1.2.1
Environment: Ubuntu cluster
Reporter: Dieter De Witte
Fix For: 1.2.1
http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
I've described the problem on Stack Overflow as well; that post contains a link to
a related JIRA discussion:
http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
My errors are exactly the same: an out-of-memory error when
mapred.job.shuffle.input.buffer.percent = 0.7. The job does work when I lower
it to 0.2. Does this mean the original JIRA was never resolved?
Does anybody have an idea whether this is a MapReduce issue or a
misconfiguration on my part?
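For reference, this is roughly how the workaround looks on our side. It is only a minimal sketch against the old mapred API in 1.2.1: the class name, job name and input/output paths are placeholders, and the mapper/reducer are left at the identity defaults; only the property name and the 0.7/0.2 values come from the actual failing job.

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ShuffleBufferRepro {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(ShuffleBufferRepro.class);
    conf.setJobName("shuffle-buffer-repro");  // placeholder name

    // Fraction of the reducer heap used to buffer map output during the
    // copy/shuffle phase. With the default 0.70 the copiers run out of
    // heap; with 0.2 the same job completes on our cluster.
    conf.set("mapred.job.shuffle.input.buffer.percent", "0.2");

    // Placeholder paths; mapper/reducer default to the identity classes.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}
{code}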