[
https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12614252#action_12614252
]
eric baldeschwieler commented on HADOOP-3581:
---------------------------------------------
A couple of comments:
As I understand this, the job specifies its RAM requirements as a % of
RAM on a TT? That doesn't fly. A user should specify the MAX RAM in
GB or MB that the tasks will use. %s are not the right model, i.e.
my tasks will use no more than 1.5GB.
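A minimal sketch of that absolute-limit model from the job side, in
Java. The mapred.child.java.opts property does exist and caps the child
task JVM heap; the mapred.task.max.memory.mb key is hypothetical, shown
only to illustrate how a job might declare the limit to the framework:

import org.apache.hadoop.mapred.JobConf;

// Sketch: a job declares an absolute per-task memory cap instead of a
// percentage of the TaskTracker's RAM.
public class AbsoluteMemoryLimitExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();

        // Cap the child task JVM heap at 1.5GB using the existing
        // child-opts property.
        conf.set("mapred.child.java.opts", "-Xmx1536m");

        // Hypothetical key a scheduler/TaskTracker could read to enforce
        // the declared limit; the name is illustrative only.
        conf.setLong("mapred.task.max.memory.mb", 1536);
    }
}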
I don't think a per-job limit on the % of RAM for map / reduce works.
Better to just specify the biggest MAP or REDUCE task the cluster can
support.
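To make that concrete, here is a hedged sketch of a TaskTracker-style
admission check against a single cluster-wide ceiling. Both property
names and the class itself are hypothetical, not from any Hadoop
release:

import org.apache.hadoop.conf.Configuration;

// Illustrative only: admit a job's tasks if its declared per-task
// memory fits under one cluster-wide ceiling, rather than juggling
// per-job map/reduce percentages.
public class TaskMemoryAdmission {

    // Hypothetical cluster-wide ceiling: the biggest task the cluster
    // can support.
    static final String CLUSTER_MAX_TASK_MEMORY_MB =
        "mapred.cluster.task.max.memory.mb";

    // Hypothetical per-job declaration of its largest task's memory need.
    static final String JOB_TASK_MEMORY_MB = "mapred.task.max.memory.mb";

    // Returns true if the job's declared task size fits under the
    // cluster ceiling (defaults are placeholders).
    static boolean admits(Configuration clusterConf, Configuration jobConf) {
        long ceilingMb = clusterConf.getLong(CLUSTER_MAX_TASK_MEMORY_MB, 2048);
        long requestedMb = jobConf.getLong(JOB_TASK_MEMORY_MB, 512);
        return requestedMb <= ceilingMb;
    }
}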
E14
> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
> Key: HADOOP-3581
> URL: https://issues.apache.org/jira/browse/HADOOP-3581
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: Hemanth Yamijala
> Assignee: Vinod Kumar Vavilapalli
> Attachments: patch_3581_0.1.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive,
> maybe due to some inadvertent bugs in the user code, or the amount of data
> processed. When this happens, the user tasks start to interfere with the
> proper execution of other processes on the node, including other Hadoop
> daemons like the DataNode and TaskTracker. Thus, the node would become
> unusable for any Hadoop tasks. There should be a way to prevent such tasks
> from bringing down the node.