[
https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12609310#action_12609310
]
Allen Wittenauer commented on HADOOP-3581:
------------------------------------------
What prevents the task tracker from overflowing memory itself? Considering
the memory leaks we've already seen in the name node, I don't trust the task
trackers to be leak-free either.
One of the advantages we have with HOD presently is that, because the limit
is set before the task tracker is launched, the task tracker itself is
bounded. This makes the *entire* Hadoop task chain limited, not just
individual portions.
Whatever system is designed needs to mimic this functionality.
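To illustrate the HOD-style bounding described above: a minimal sketch (not HOD's actual launch script) of how a resource limit set in the launching shell is inherited by the task tracker and every task JVM it forks, so the whole process chain is capped. The 2 GB figure is an arbitrary example value.

```shell
# Hypothetical sketch: cap virtual memory before launching the daemon.
# Child processes inherit the limit, so the task tracker and all task
# JVMs it spawns are bounded together.
(
  ulimit -v 2097152          # 2 GB virtual-memory cap, in KB (example value)
  ulimit -v                  # child shells/processes see the same limit
  # exec bin/hadoop tasktracker   # daemon would start here, already bounded
)
```

A soft limit set this way cannot be raised back by the bounded process beyond the hard limit, which is what makes the bound trustworthy even if the daemon itself leaks.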
> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
> Key: HADOOP-3581
> URL: https://issues.apache.org/jira/browse/HADOOP-3581
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: Hemanth Yamijala
> Assignee: Vinod Kumar Vavilapalli
>
> Sometimes user Map/Reduce applications can get extremely memory intensive,
> maybe due to some inadvertent bugs in the user code, or the amount of data
> processed. When this happens, the user tasks start to interfere with the
> proper execution of other processes on the node, including other Hadoop
> daemons like the DataNode and TaskTracker. Thus, the node would become
> unusable for any Hadoop tasks. There should be a way to prevent such tasks
> from bringing down the node.