Hi,
I want to load a serialized HashMap object in Hadoop. The file containing the
stored object is 200 MB. I can read that object efficiently in plain Java by
setting -Xmx to 1000M; however, in Hadoop I have never been able to load it
into memory. The code is very simple (it just reads the object through an
ObjectInputStream), and no map/reduce logic is implemented yet.
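The loading code is essentially just the following (the path, the map's generic
types, and the FileInputStream are placeholders standing in for wherever the
file actually comes from):

  import java.io.FileInputStream;
  import java.io.ObjectInputStream;
  import java.util.HashMap;

  public class LoadHashMap {
      @SuppressWarnings("unchecked")
      public static HashMap<String, String> load(String path) throws Exception {
          // Deserialize the whole HashMap with a single readObject() call;
          // the entire 200 MB object graph has to fit in this JVM's heap at once.
          ObjectInputStream in = new ObjectInputStream(new FileInputStream(path));
          try {
              return (HashMap<String, String>) in.readObject();
          } finally {
              in.close();
          }
      }
  }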
I set mapred.child.java.opts=-Xmx3000M, but I still get
"java.lang.OutOfMemoryError: Java heap space". Could anyone explain a little
bit how memory is allocated to the JVM in Hadoop? Why does Hadoop take up so
much memory? If a program requires 1 GB of memory on a single node, how much
memory does it generally require in Hadoop?
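For reference, I am setting the heap option roughly like this (the old-style
JobConf route shown here is just for illustration; putting the same property in
mapred-site.xml should be equivalent):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapred.JobConf;

  public class HeapConfig {
      public static JobConf buildConf() {
          // Ask the TaskTracker to launch each child task JVM with a 3000 MB
          // heap; this affects only the map/reduce child JVMs, not the client
          // JVM that submits the job.
          JobConf conf = new JobConf(new Configuration(), HeapConfig.class);
          conf.set("mapred.child.java.opts", "-Xmx3000M");
          return conf;
      }
  }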
Thanks.
Shi