[
https://issues.apache.org/jira/browse/HADOOP-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623286#action_12623286
]
Devaraj Das commented on HADOOP-3759:
-------------------------------------
I am okay with the patch. The one thing I do want to point out is that the
ResourceStatus could be made extensible, so that any component in the TT that
wants to advertise a resource as key/value info can do so (as opposed to
hardcoding only the memory/disk-space resources). But this could be for later.
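To make the suggestion concrete, here is a minimal sketch of what an extensible ResourceStatus could look like: a key/value map that any TT component can publish into, with memory and disk becoming just two entries rather than hardcoded fields. The class and method names here are purely illustrative, not the actual Hadoop API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an extensible ResourceStatus: components register
// arbitrary resource key/value pairs instead of relying on hardcoded
// memory/disk-space fields.
public class ExtensibleResourceStatus {
    private final Map<String, String> resources = new LinkedHashMap<>();

    // Any TT component can advertise a resource under its own key.
    public void advertise(String key, String value) {
        resources.put(key, value);
    }

    public String get(String key) {
        return resources.get(key);
    }
}
```

Under this scheme, the current resources would simply be advertised as, say, `advertise("memory.total.mb", "4096")` and `advertise("disk.free.mb", "51200")` (illustrative key names).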
The other thing is the way we handle -Xmx in this setup. Assume a case where
the user hasn't specified any memory requirement for his job. The memory that a
task would get is proportional to the TT's total memory divided by the number
of slots. Let's say for this cluster instance, it works out to 1G. Now if his
-Xmx, which is an absolute number, is above this, say 1.5G, would it work? Note
that the task JVM might work even with 1G; the user just happened to specify
1.5G.
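The arithmetic behind the question above can be sketched as follows. This is not the patch's code, just an illustration of the scenario under the stated assumption that the default per-task allotment is total TT memory split evenly across slots; the helper names are hypothetical.

```java
// Sketch of the -Xmx vs. default-allotment mismatch described above.
public class XmxCheck {
    // Hypothetical helper: default memory a task gets, in MB, when the
    // user specifies no requirement (TT memory / number of slots).
    static long defaultTaskMemoryMB(long ttMemoryMB, int slots) {
        return ttMemoryMB / slots;
    }

    // Does the user's absolute -Xmx exceed the default allotment?
    static boolean xmxExceedsAllotment(long xmxMB, long ttMemoryMB, int slots) {
        return xmxMB > defaultTaskMemoryMB(ttMemoryMB, slots);
    }

    public static void main(String[] args) {
        // 4 GB TaskTracker with 4 slots -> 1 GB per task by default.
        long allotMB = defaultTaskMemoryMB(4096, 4);
        System.out.println(allotMB);                              // 1024
        // A -Xmx of 1.5 GB is above the 1 GB allotment, even though
        // the JVM might well have run fine within 1 GB.
        System.out.println(xmxExceedsAllotment(1536, 4096, 4));   // true
    }
}
```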
> Provide ability to run memory intensive jobs without affecting other running
> tasks on the nodes
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-3759
> URL: https://issues.apache.org/jira/browse/HADOOP-3759
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: Hemanth Yamijala
> Assignee: Hemanth Yamijala
> Fix For: 0.19.0
>
> Attachments: HADOOP-3759.patch, HADOOP-3759.patch, HADOOP-3759.patch,
> HADOOP-3759.patch, HADOOP-3759.patch
>
>
> In HADOOP-3581, we are discussing how to prevent memory intensive tasks from
> affecting Hadoop daemons and other tasks running on a node. A related
> requirement is that users be provided an ability to run jobs which are memory
> intensive. The system must provide enough knobs to allow such jobs to be run
> while still maintaining the requirements of HADOOP-3581.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.