[
https://issues.apache.org/jira/browse/MAPREDUCE-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon updated MAPREDUCE-3205:
-----------------------------------
Attachment: mr-3205.txt
Fix the failing unit test: it matched against error message text that needed to
be updated.
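
A hypothetical illustration of the kind of assertion involved, not the actual test from the patch (the class name, method name, and diagnostic wording here are assumptions): the test string-matches the container-kill diagnostic, so rewording the limit message from vmem to pmem breaks it until the expected substring is updated.

    import org.junit.Assert;
    import org.junit.Test;

    public class TestLimitMessageWording {
      @Test
      public void diagnosticMentionsPhysicalMemoryLimit() {
        // Stand-in diagnostic string; the real test obtains this from the container it ran.
        String diagnostics = "Container killed: running beyond physical memory limits.";
        // Formerly the expected substring referred to virtual memory limits.
        Assert.assertTrue("unexpected diagnostic: " + diagnostics,
            diagnostics.contains("physical memory limits"));
      }
    }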
> MR2 memory limits should be pmem, not vmem
> ------------------------------------------
>
> Key: MAPREDUCE-3205
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3205
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Components: mrv2, nodemanager
> Affects Versions: 0.23.0
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Attachments: mr-3205.txt, mr-3205.txt, mr-3205.txt, mr-3205.txt, mr-3205.txt
>
>
> Currently, the memory resources requested for a container limit the amount of
> virtual memory used by the container. On my test clusters, at least, Java
> processes take up nearly twice as much vmem as pmem: a Java process running
> with -Xmx500m uses 935m of vmem but only about 560m of pmem.
> This will force admins to either under-utilize available physical memory, or
> oversubscribe it by configuring the available resources on a TT to be larger
> than the true amount of physical RAM.
> Instead, I would propose that the resource limit apply to pmem, and that the
> admin be able to configure a "vmem overcommit ratio" which sets the vmem limit
> as a function of the pmem limit.
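
To make the proposal concrete, here is a minimal sketch of the arithmetic, assuming an illustrative overcommit ratio value and reusing the figures quoted above; it is not the NodeManager implementation, and all names are invented for the example:

    public final class VmemOvercommitSketch {
      public static void main(String[] args) {
        // The container's requested memory bounds physical memory directly.
        long pmemLimitBytes = 1024L * 1024 * 1024;   // e.g. a 1g container request
        // Admin-configured vmem overcommit ratio (illustrative value).
        float vmemOvercommitRatio = 2.0f;
        // Derived virtual memory bound: vmem limit = pmem limit * ratio.
        long vmemLimitBytes = (long) (pmemLimitBytes * vmemOvercommitRatio);

        // Readings from the description: a -Xmx500m JVM at ~560m pmem and ~935m vmem.
        long pmemUsedBytes = 560L * 1024 * 1024;
        long vmemUsedBytes = 935L * 1024 * 1024;

        // Both checks pass: the JVM's vmem overhead fits within the ratio-derived
        // bound without the admin having to inflate the requested memory.
        System.out.println("pmem ok: " + (pmemUsedBytes <= pmemLimitBytes)
            + ", vmem ok: " + (vmemUsedBytes <= vmemLimitBytes));
      }
    }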
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira