1) Yes, option 2 is enough.
2) The configuration variable "mapred.child.ulimit" can be used to control
the maximum virtual memory of the child (map/reduce) processes.

** The value of mapred.child.ulimit (given in KB) must be greater than the
-Xmx value in mapred.child.java.opts, since it bounds the whole child
process (Java heap plus JVM overhead), not just the heap.
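
As a minimal sketch, the two could be set together in mapred-site.xml; the
1 GB heap / 2 GB ulimit values below are illustrative assumptions, not
recommendations:

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>    <!-- max Java heap per child JVM: 1 GB -->
  </property>
  <property>
    <name>mapred.child.ulimit</name>
    <value>2097152</value>      <!-- max virtual memory per child, in KB: 2 GB -->
  </property>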

On Thu, Feb 16, 2012 at 12:38 AM, Mark question <[email protected]> wrote:
> Thanks for the reply Srinivas, so option 2 will be enough. However, when I
> tried setting it to 512 MB, I see through the system monitor that the map
> task is given 275 MB of real memory!
> Is it normal in Hadoop for the memory used to differ from the upper bound
> given by the property mapred.child.java.opts?
>
> Mark
>
> On Wed, Feb 15, 2012 at 4:00 PM, Srinivas Surasani <[email protected]> wrote:
>
>> Hey Mark,
>>
>> Yes, you can limit the memory for each task with the
>> "mapred.child.java.opts" property. Mark it as final if no developer
>> should be able to change it.
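>>
>> As a minimal sketch, marking the property final in mapred-site.xml looks
>> like this (the -Xmx200m value is just an illustrative assumption):
>>
>>   <property>
>>     <name>mapred.child.java.opts</name>
>>     <value>-Xmx200m</value>
>>     <final>true</final>   <!-- jobs can no longer override this property -->
>>   </property>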
>>
>> A little intro to "mapred.task.default.maxvmem":
>>
>> This property has to be set both on the JobTracker, for making
>> scheduling decisions, and on the TaskTracker nodes, for the sake of
>> memory management. If a job doesn't specify its virtual memory
>> requirement (i.e., it leaves mapred.task.maxvmem at its default of
>> -1), tasks are assured a memory limit equal to this property's value.
>> This property itself defaults to -1. Its value should in general be
>> less than the cluster-wide configuration mapred.task.limit.maxvmem;
>> if it is not, or if it is not set, the TaskTracker's memory management
>> will be disabled and the scheduler's memory-based scheduling decisions
>> may be affected.
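>>
>> As a minimal sketch, assuming illustrative values (a 1 GB default under
>> a 2 GB cluster-wide cap, both in bytes), the pair could be configured as:
>>
>>   <property>
>>     <name>mapred.task.default.maxvmem</name>
>>     <value>1073741824</value>   <!-- default VM limit per task: 1 GB -->
>>   </property>
>>   <property>
>>     <name>mapred.task.limit.maxvmem</name>
>>     <value>2147483648</value>   <!-- cluster-wide upper limit: 2 GB -->
>>   </property>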
>>
>> On Wed, Feb 15, 2012 at 5:57 PM, Mark question <[email protected]>
>> wrote:
>> > Hi,
>> >
>> >  My question is what's the difference between the following two settings:
>> >
>> > 1. mapred.task.default.maxvmem
>> > 2. mapred.child.java.opts
>> >
>> > The first one is used by the TT to monitor the memory usage of tasks,
>> > while the second one is the maximum heap space assigned for each task.
>> > I want to limit each task to use up to, say, 100 MB of memory. Can I
>> > use only #2?
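>> >
>> > For concreteness, here is a minimal sketch of the only thing I was
>> > planning to set (the 100 MB value is just my target, not a
>> > recommendation):
>> >
>> >   <property>
>> >     <name>mapred.child.java.opts</name>
>> >     <value>-Xmx100m</value>   <!-- cap each task's Java heap at 100 MB -->
>> >   </property>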
>> >
>> > Thank you,
>> > Mark
>>
>>
>>
>> --
>> -- Srinivas
>> [email protected]
>>



-- 
-- Srinivas
[email protected]
