[
https://issues.apache.org/jira/browse/HADOOP-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623679#action_12623679
]
Hemanth Yamijala commented on HADOOP-3759:
------------------------------------------
New patch that deprecates getUlimitMemoryCommand and updates the documentation
accordingly. The behavior that Devaraj mentioned about when the ulimit is set
is also implemented. Put another way, if an administrator has specified a
value for the tasktracker's memory limit, we give it higher priority than the
ulimit setting. This seems to be the right thing to do. In the next release,
we can remove the ulimit configuration completely.
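To make that precedence concrete, here is a minimal sketch of the selection
logic. The mapred.tasktracker.memory.limit key and the class/method names are
placeholders for illustration, not what the patch actually uses;
mapred.child.ulimit is the existing ulimit key.

    import org.apache.hadoop.conf.Configuration;

    // Illustrative sketch only; not the code in the patch.
    class UlimitPrecedenceSketch {
      // Returns the ulimit command for a child task, or null when the
      // admin-configured tasktracker memory limit takes precedence.
      static String[] getUlimitCommand(Configuration conf) {
        // Hypothetical key standing in for the tasktracker memory limit.
        long adminLimit = conf.getLong("mapred.tasktracker.memory.limit", -1);
        if (adminLimit > 0) {
          // The administrator set an explicit limit, so the deprecated
          // ulimit setting is ignored.
          return null;
        }
        // Old behavior: mapred.child.ulimit gives the limit in KB.
        int ulimitKB = conf.getInt("mapred.child.ulimit", -1);
        if (ulimitKB <= 0) {
          return null;
        }
        return new String[] { "ulimit", "-v", String.valueOf(ulimitKB) };
      }
    }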
Also, I ran test-patch and got a -1 for the number of javac warnings, since
the deprecation warning now shows up. This is expected, right?
Comments from others on these changes?
> Provide ability to run memory intensive jobs without affecting other running
> tasks on the nodes
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-3759
> URL: https://issues.apache.org/jira/browse/HADOOP-3759
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: Hemanth Yamijala
> Assignee: Hemanth Yamijala
> Fix For: 0.19.0
>
> Attachments: HADOOP-3759.patch, HADOOP-3759.patch, HADOOP-3759.patch,
> HADOOP-3759.patch, HADOOP-3759.patch, HADOOP-3759.patch, HADOOP-3759.patch
>
>
> In HADOOP-3581, we are discussing how to prevent memory-intensive tasks from
> affecting Hadoop daemons and other tasks running on a node. A related
> requirement is that users be given the ability to run memory-intensive jobs.
> The system must provide enough knobs to allow such jobs to run while still
> maintaining the requirements of HADOOP-3581.
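As a concrete illustration of the kind of knob being asked for, a job-side
override might look roughly like the sketch below; the mapred.task.maxvmem
key and its semantics are hypothetical placeholders, not the final interface.

    import org.apache.hadoop.mapred.JobConf;

    public class MaxVmemExample {
      public static void main(String[] args) {
        JobConf job = new JobConf();
        // Hypothetical per-job knob: request 4 GB of virtual memory per
        // task; the cluster would cap this at whatever per-task ceiling
        // the administrator has configured (per HADOOP-3581).
        job.setLong("mapred.task.maxvmem", 4L * 1024 * 1024 * 1024);
      }
    }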