Steve,

Perhaps raise your reducer slowstart config to wait until about 90% of
a job's mappers have completed, so that the reducers start later
(almost when all of the map tasks are done).

The prop name is mapred.reduce.slowstart.completed.maps, and the default is 0.05 (5%).
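
For example, here is a minimal driver sketch against the 0.20 "new"
mapreduce API (the class name, job name, and identity map/reduce wiring
are placeholders, not anything taken from your job):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SlowstartExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Fraction of map tasks that must finish before reducers are scheduled.
        conf.set("mapred.reduce.slowstart.completed.maps", "0.90");
        Job job = new Job(conf, "slowstart-example");
        job.setJarByClass(SlowstartExample.class);
        // Default (identity) mapper and reducer; just wire up the I/O paths.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

If your driver goes through ToolRunner/GenericOptionsParser, you can also
pass -D mapred.reduce.slowstart.completed.maps=0.90 on the command line
without recompiling.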

On Thu, Nov 10, 2011 at 8:36 AM, Steve Lewis <lordjoe2...@gmail.com> wrote:
> Hadoop can set the maximum number of mappers and reducers running on a
> node, but under 0.20.2 I do not see a way to keep the system from
> running mappers and reducers together with the combined total exceeding
> the individual limits.
> I find that when my mappers are about 50% done, the system kicks off
> reducers. I have raised the max memory in mapred.child.java.opts because
> I have been hitting GC limits, and the values work well when I am
> running 6 mappers OR 6 reducers, but when my mappers are halfway done I
> see 6 mappers AND 6 reducers running, and this strains the total memory
> on the node.
> How can I keep the total number of tasks on a node under control without
> limiting the maximum mappers and reducers to half the total I want?
> --
> Steven M. Lewis PhD
> 4221 105th Ave NE
> Kirkland, WA 98033
> 206-384-1340 (cell)
> Skype lordjoe_com



-- 
Harsh J
