> On Tue, Oct 12, 2010 at 4:50 AM, Shi Yu <[email protected]> wrote:
>
> > Hi,
> >
> > I want to load a serialized HashMap object in Hadoop. The file of the
> > stored object is 200M. I can read that object efficiently in plain
> > Java by setting -Xmx to 1000M. However, in Hadoop I can never load it
> > into memory. The code is very simple (it just reads the
> > ObjectInputStream) and there is no map/reduce implemented yet. I set
> > mapred.child.java.opts=-Xmx3000M and still get
> > "java.lang.OutOfMemoryError: Java heap space". Could anyone explain a
> > little how memory is allocated to the JVM in Hadoop? Why does Hadoop
> > take up so much memory? If a program requires 1G of memory on a single
> > node, how much memory does it (generally) require in Hadoop?
>
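(I assume the loading code is essentially the following; the file name and
the map's key/value types are guesses on my part, not from your mail:)

    import java.io.FileInputStream;
    import java.io.ObjectInputStream;
    import java.util.HashMap;

    public class LoadMap {
        public static void main(String[] args) throws Exception {
            // Deserializing a 200M file can need several times that much
            // heap: ObjectInputStream keeps a handle table for the whole
            // object graph while reading, and each HashMap entry adds
            // per-object overhead on top of the raw data.
            ObjectInputStream in =
                new ObjectInputStream(new FileInputStream("map.ser"));
            HashMap<String, String> map =
                (HashMap<String, String>) in.readObject();
            in.close();
            System.out.println("loaded " + map.size() + " entries");
        }
    }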

The JVM reserves swap space in advance, at the time the process is
launched. If your swap is too low (or you have no swap configured at all),
you will hit this.
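A quick way to check is to launch a bare JVM on the node with the same
flags Hadoop passes to its child tasks, e.g. java -Xmx3000m HeapProbe,
using something like this (the class name is just an example):

    public class HeapProbe {
        public static void main(String[] args) {
            // If the launch-time reservation fails, the JVM dies before
            // main() even runs ("Could not reserve enough space for
            // object heap"), which points at swap or address-space
            // limits rather than at the HashMap itself.
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap   = "
                + rt.maxMemory() / (1024 * 1024) + "M");
            System.out.println("total heap = "
                + rt.totalMemory() / (1024 * 1024) + "M");
        }
    }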

Or you are on a 32-bit machine, in which case a 3G heap is not possible in
the JVM.
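You can confirm which case applies from inside a task JVM (the first
property below is Sun/Oracle-specific and may be absent on other JVMs):

    public class ArchCheck {
        public static void main(String[] args) {
            // "32" or "64" on Sun/Oracle JVMs
            System.out.println("data model = "
                + System.getProperty("sun.arch.data.model"));
            // e.g. "i386" vs. "amd64"
            System.out.println("os.arch    = "
                + System.getProperty("os.arch"));
        }
    }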

-Srivas.



> >
> > Thanks.
> >
> > Shi
