Thank you, that worked!

Juan

On Mon, Apr 2, 2012 at 12:55 PM, Harsh J <ha...@cloudera.com> wrote:

> For 1.0, the right property is "mapred.reduce.child.java.opts". The
> "mapreduce.*" style would apply to MR in 2.0 and above.
>
> On Mon, Apr 2, 2012 at 3:00 PM, Juan Pino <juancitomiguel...@gmail.com>
> wrote:
> > Hello,
> >
> > I have a job that requires a bit more memory than the default for the
> > reducer (not for the mapper).
> > So for this I have this property in my configuration file:
> >
> > mapreduce.reduce.java.opts=-Xmx4000m
> >
> > When I run the job, I can see its configuration in the web interface and I
> > see that indeed I have mapreduce.reduce.java.opts set to -Xmx4000m
> > but I also have mapred.child.java.opts set to -Xmx200m and when I ps -ef
> > the java process, it is using -Xmx200m.
> >
> > So to make my job work I had to set mapred.child.java.opts=-Xmx4000m in my
> > configuration file.
> > However I don't need that much memory for the mapper.
> > How can I set more memory only for the reducer? Is the only solution to set
> > mapred.child.java.opts to -Xmx4000m, mapreduce.reduce.java.opts to -Xmx4000m,
> > and mapreduce.map.java.opts to -Xmx200m?
> >
> > I am using Hadoop 1.0.1.
> >
> > Thank you very much,
> >
> > Juan
>
>
>
> --
> Harsh J
>
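
For the archives, here is a minimal sketch of how the fix looks when set from a
job driver instead of a configuration file, using the old
org.apache.hadoop.mapred API that ships with Hadoop 1.0.x. The class name
ReduceHeapExample is made up and -Xmx4000m is just the value from this thread;
only the mapred.reduce.child.java.opts property itself is what Harsh
recommended above.

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ReduceHeapExample {
  public static void main(String[] args) throws Exception {
    // Driver for a Hadoop 1.0.x job using the old "mapred" API.
    JobConf conf = new JobConf(ReduceHeapExample.class);

    // Give only the reduce-task JVMs the larger heap; map tasks keep the
    // default inherited from mapred.child.java.opts (-Xmx200m out of the box).
    conf.set("mapred.reduce.child.java.opts", "-Xmx4000m");

    // ... configure mapper/reducer classes, input and output paths here ...

    JobClient.runJob(conf);
  }
}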
