[ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12615679#action_12615679 ]

Hemanth Yamijala commented on HADOOP-3581:
------------------------------------------

Some more details on the configuration items:

mapred.tasktracker.tasks.maxmemory: We can default this to a value like 
Long.MAX_VALUE, or maybe even -1L, to disable this feature. Memory monitoring 
would be done only if the configured value differs from this default.
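
For illustration, a minimal sketch of that check (the property name is the one 
proposed above; the enclosing class and the DISABLED constant are hypothetical, 
and the value is assumed to be in bytes):

    import org.apache.hadoop.conf.Configuration;

    public class TaskMemoryManager {
      // Sentinel meaning "feature disabled"; -1L as suggested above.
      static final long DISABLED = -1L;

      private final long maxMemoryForTasks;

      TaskMemoryManager(Configuration conf) {
        // Defaults to DISABLED, so monitoring runs only when the cluster
        // admin explicitly sets mapred.tasktracker.tasks.maxmemory.
        maxMemoryForTasks =
            conf.getLong("mapred.tasktracker.tasks.maxmemory", DISABLED);
      }

      boolean monitoringEnabled() {
        return maxMemoryForTasks != DISABLED;
      }
    }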

Regarding mapred.map.task.maxmemory, one question is whether we need separate 
items for map and reduce tasks, or whether a single item, such as 
mapred.task.maxmemory, would suffice to define the maximum memory that any task 
(map or reduce) in the job may take. If it is typical for one type of task 
(say reduce) to have significantly different memory requirements than the 
other, then two items may be required. Comments?
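
To make the two options concrete, here is a hypothetical job-submission snippet 
(the property names are just the proposals above, not existing configuration 
keys, and values are assumed to be in bytes):

    import org.apache.hadoop.mapred.JobConf;

    public class MemoryLimitExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf();

        // Option 1: one limit that applies to every task (map or reduce).
        conf.setLong("mapred.task.maxmemory", 512L * 1024 * 1024);         // 512 MB

        // Option 2: separate limits, if reduces typically need much more
        // memory than maps.
        conf.setLong("mapred.map.task.maxmemory", 256L * 1024 * 1024);     // 256 MB
        conf.setLong("mapred.reduce.task.maxmemory", 1024L * 1024 * 1024); // 1 GB
      }
    }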



> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: patch_3581_0.1.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive, 
> maybe due to some inadvertent bugs in the user code, or the amount of data 
> processed. When this happens, the user tasks start to interfere with the 
> proper execution of other processes on the node, including other Hadoop 
> daemons like the DataNode and TaskTracker. Thus, the node would become 
> unusable for any Hadoop tasks. There should be a way to prevent such tasks 
> from bringing down the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
