[ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli updated HADOOP-3581:
--------------------------------------------

    Attachment: patch_3581_5.2.txt

Attaching modified patch.

 - Incorporated above comments.
 - Added mapred.tasktracker.processtreeimpl, which defaults to null. Pid 
files are written only when the TaskMemoryManager is enabled, i.e. when both 
mapred.tasktracker.processtreeimpl and mapred.tasktracker.maxmemory are set 
(a configuration sketch follows this list). The TT refers only to the 
abstract ProcessTree class.
 - Added isZombie, isEmpty and getProcessTree to the abstract class 
ProcessTree (sketched after this list). getProcessTree replaces initialize 
(and reconstruct) and returns the ProcessTree with its latest state.
 - Did a bit of refactoring of ProcfsBasedProcessTree so that all inherited 
methods are kept together in one place.
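
A minimal sketch of the enablement check described above, assuming the 
standard org.apache.hadoop.conf.Configuration API. The class name, method 
name, and the -1 "unset" sentinel are hypothetical; only the two property 
names come from this comment:

    import org.apache.hadoop.conf.Configuration;

    /** Hypothetical helper, not part of the patch. */
    public class TaskMemoryManagerGate {

      static final String PROCESS_TREE_IMPL =
          "mapred.tasktracker.processtreeimpl";
      static final String MAX_MEMORY = "mapred.tasktracker.maxmemory";

      /**
       * The TaskMemoryManager (and hence pid-file writing) is enabled
       * only when both properties are set.
       */
      static boolean isTaskMemoryManagerEnabled(Configuration conf) {
        String impl = conf.get(PROCESS_TREE_IMPL);      // defaults to null
        long maxMemory = conf.getLong(MAX_MEMORY, -1L); // -1 means unset
        return impl != null && maxMemory > 0;
      }
    }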
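
A hedged sketch of the resulting ProcessTree abstraction; the javadoc below 
paraphrases this comment, and the exact signatures may differ in the patch:

    /** Abstract view of a task's process tree; the TT codes only against this. */
    public abstract class ProcessTree {

      /** Whether the root process has become a zombie (exact semantics per the patch). */
      public abstract boolean isZombie();

      /** Whether the tree currently tracks no processes. */
      public abstract boolean isEmpty();

      /**
       * Refreshes the tree to its latest state and returns it, replacing
       * the earlier initialize/reconstruct pair.
       */
      public abstract ProcessTree getProcessTree();
    }

ProcfsBasedProcessTree would then presumably be one concrete implementation, 
named via mapred.tasktracker.processtreeimpl.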

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: patch_3581_0.1.txt, patch_3581_3.3.txt, 
> patch_3581_4.3.txt, patch_3581_4.4.txt, patch_3581_5.0.txt, patch_3581_5.2.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive, 
> perhaps due to inadvertent bugs in the user code or the sheer amount of 
> data being processed. When this happens, the user tasks start to interfere with the 
> proper execution of other processes on the node, including other Hadoop 
> daemons like the DataNode and TaskTracker. Thus, the node would become 
> unusable for any Hadoop tasks. There should be a way to prevent such tasks 
> from bringing down the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
