Hey,

Try adding 

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx800M -server</value>
  </property>
 
to your hadoop-site.xml, with the right JVM heap size for your tasks. You will
have to copy the file to all mapred nodes and restart the cluster.
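
If you do not want to change the cluster-wide default, I believe you can also
set the same property per job from your driver code, something like this (rough
sketch with the old mapred API; MyJob is just a placeholder for your job class):

  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  JobConf conf = new JobConf(MyJob.class);
  // give the child task JVMs an 800 MB heap for this job only
  conf.set("mapred.child.java.opts", "-Xmx800M");
  JobClient.runJob(conf);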

Best
Bhupesh



On 4/29/09 2:03 PM, "Jasmine (Xuanjing) Huang" <xjhu...@cs.umass.edu> wrote:

> Hi, there,
> 
> What's the local heap size of Hadoop? I have tried to load a local cache
> file composed of 500,000 short phrases, but the task failed. The output of
> Hadoop looks like this (com.aliasi.dict.ExactDictionaryChunker is a
> third-party jar, and the records are organized as a trie structure):
> 
> java.lang.OutOfMemoryError: Java heap space
>         at java.util.HashMap.addEntry(HashMap.java:753)
>         at java.util.HashMap.put(HashMap.java:385)
>         at com.aliasi.dict.ExactDictionaryChunker$TrieNode.getOrCreateDaughter(ExactDictionaryChunker.java:476)
>         at com.aliasi.dict.ExactDictionaryChunker$TrieNode.add(ExactDictionaryChunker.java:484)
> 
> When I reduce the total record number to 30,000, my mapreduce job succeeds.
> So I have a question: what is the local heap size of Hadoop's Java Virtual
> Machine, and how can I increase it?
> 
> Best,
> Jasmine 
> 
