He Tianyi created HDFS-9725:
-------------------------------
Summary: Make capacity of centralized cache dynamic
Key: HDFS-9725
URL: https://issues.apache.org/jira/browse/HDFS-9725
Project: Hadoop HDFS
Issue Type: Wish
Components: caching, datanode, namenode
Affects Versions: 2.6.0
Reporter: He Tianyi
Assignee: He Tianyi
Priority: Minor
Currently, for centralized cache management, the DataNode uses {{mlock}} to pin blocks in
memory, with the maximum number of locked bytes limited by
{{dnConf.maxLockedMemory}}.
In a typical deployment, each machine runs both a DataNode and a NodeManager. In this
case, a statically configured cache capacity either risks OOM or
hurts memory utilization.
That is, if one specifies a large caching capacity (permitted by {{ulimit}}
as a prerequisite), the DataNode may lock so much memory that none remains for new
container processes launched by the NodeManager. On the other hand, specifying a
small value may leave memory underutilized.
A simple idea is: perhaps it is better to make cache capacity dynamic.
Adjusting its capacity corresponding to current (or future, ideally) memory
usage to avoid problems above.
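A rough sketch of such a policy (the class name, headroom/floor/ceiling parameters are hypothetical, not existing HDFS code): periodically recompute the mlock budget from the machine's currently free memory, always reserving a fixed headroom so NodeManager containers can still launch, and clamping the result between a floor and the configured hard cap.

```java
// Hypothetical sketch of a dynamic cache-capacity policy. All names and
// parameters here are illustrative; they do not exist in HDFS today.
public class DynamicCacheCapacity {
    private final long headroomBytes;  // memory reserved for container launches
    private final long floorBytes;     // never shrink the cache below this
    private final long ceilingBytes;   // hard cap (e.g. ulimit / configured max)

    public DynamicCacheCapacity(long headroomBytes, long floorBytes,
                                long ceilingBytes) {
        this.headroomBytes = headroomBytes;
        this.floorBytes = floorBytes;
        this.ceilingBytes = ceilingBytes;
    }

    /** Recompute the mlock budget from the currently free memory. */
    public long capacityFor(long freeMemoryBytes) {
        long target = freeMemoryBytes - headroomBytes;
        return Math.max(floorBytes, Math.min(ceilingBytes, target));
    }

    public static void main(String[] args) {
        // 8 GiB headroom, 1 GiB floor, 16 GiB ceiling.
        DynamicCacheCapacity policy =
            new DynamicCacheCapacity(8L << 30, 1L << 30, 16L << 30);
        // Plenty of free memory (32 GiB): target is clamped to the ceiling.
        System.out.println(policy.capacityFor(32L << 30));
        // Under memory pressure (6 GiB free): shrink toward the floor so
        // new containers can still launch.
        System.out.println(policy.capacityFor(6L << 30));
    }
}
```

The DataNode would then uncache blocks whenever the recomputed budget falls below the bytes currently locked, rather than failing new cache directives outright.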
Any suggestions or comments?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)