[
https://issues.apache.org/jira/browse/HADOOP-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623414#action_12623414
]
Owen O'Malley commented on HADOOP-3759:
---------------------------------------
I'd suggest that:
* The ResourceStatus class should be static and package-private instead of
non-static and private.
* Add a package-private API to get the ResourceStatus, but remove the other
public API methods.
* TaskTrackerStatus should not create a new ResourceStatus in readFields.
* JobInProgress should cache the value rather than looking it up in the
config and parsing it on every call. You've already added
getMaxVirtualMemoryForTask, which is all you'll need to access it (see the
sketch after this list).
* The task tracker should also compute the value once and reuse it rather
than recalculate it each time getDefaultMemoryPerTask is called.
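To make these points concrete, here is a rough sketch of the pattern I have
in mind. The nested class layout, field names, and the config key are
illustrative assumptions only, not what the patch actually does:
{code:java}
// Sketch only: class layout, field names, and the config key below are
// illustrative assumptions, not the actual patch.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;

class TaskTrackerStatus implements Writable {

  // static and package-private rather than non-static and private
  static class ResourceStatus implements Writable {
    private long freeVirtualMemory;

    public void write(DataOutput out) throws IOException {
      out.writeLong(freeVirtualMemory);
    }

    public void readFields(DataInput in) throws IOException {
      freeVirtualMemory = in.readLong();
    }
  }

  private final ResourceStatus resourceStatus = new ResourceStatus();

  // package-private accessor instead of public getters/setters per field
  ResourceStatus getResourceStatus() {
    return resourceStatus;
  }

  public void write(DataOutput out) throws IOException {
    resourceStatus.write(out);
  }

  public void readFields(DataInput in) throws IOException {
    // reuse the existing object; don't allocate a new ResourceStatus here
    resourceStatus.readFields(in);
  }
}

class JobInProgress {
  // parsed from the job configuration once in the constructor and cached,
  // rather than being looked up and parsed on every call
  private final long maxVirtualMemoryForTask;

  JobInProgress(JobConf conf) {
    // "mapred.task.maxvmem" and the -1 default are placeholders
    this.maxVirtualMemoryForTask = conf.getLong("mapred.task.maxvmem", -1L);
  }

  long getMaxVirtualMemoryForTask() {
    return maxVirtualMemoryForTask;
  }
}
{code}
The same idea applies to the task tracker's getDefaultMemoryPerTask: compute
the value once, store it in a field, and return the field.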
> Provide ability to run memory intensive jobs without affecting other running
> tasks on the nodes
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-3759
> URL: https://issues.apache.org/jira/browse/HADOOP-3759
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: Hemanth Yamijala
> Assignee: Hemanth Yamijala
> Fix For: 0.19.0
>
> Attachments: HADOOP-3759.patch, HADOOP-3759.patch, HADOOP-3759.patch,
> HADOOP-3759.patch, HADOOP-3759.patch
>
>
> In HADOOP-3581, we are discussing how to prevent memory-intensive tasks from
> affecting Hadoop daemons and other tasks running on a node. A related
> requirement is that users be given the ability to run memory-intensive jobs.
> The system must provide enough knobs to allow such jobs to be run while
> still maintaining the requirements of HADOOP-3581.