Limiting memory usage on namenode
---------------------------------
Key: HADOOP-2340
URL: https://issues.apache.org/jira/browse/HADOOP-2340
Project: Hadoop
Issue Type: Improvement
Components: dfs
Reporter: dhruba borthakur
When memory usage is too high, the namenode could refuse creation of new files.
This would still cause applications to fail, but it would keep the filesystem itself
from crashing in a way that is hard to recover from while folks remove the
excess of (presumably small) files.
http://java.sun.com/javase/6/docs/api/java/lang/management/MemoryPoolMXBean.html#UsageThreshold
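As a rough sketch of how the Namenode could consult that threshold before admitting new files: the class and method names below (NamenodeMemoryGuard, isLowOnMemory) are illustrative only, not existing Hadoop APIs, and picking a fraction of the tenured pool's maximum is just one possible policy.
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

// Hypothetical guard the Namenode create path could check before adding
// new INodes; refuses work once the tenured heap pool crosses a threshold.
public class NamenodeMemoryGuard {
  private final MemoryPoolMXBean tenuredPool;

  public NamenodeMemoryGuard(double thresholdFraction) {
    MemoryPoolMXBean found = null;
    // Find a heap pool that supports usage thresholds; with the usual
    // collectors this ends up being the old generation ("Tenured Gen",
    // "PS Old Gen"), where long-lived namespace objects live.
    for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
      if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
        found = pool;
      }
    }
    tenuredPool = found;
    if (tenuredPool != null) {
      long max = tenuredPool.getUsage().getMax();
      if (max > 0) {
        tenuredPool.setUsageThreshold((long) (max * thresholdFraction));
      }
    }
  }

  // True once the configured usage threshold has been crossed at least once
  // since the last reset; the create path could then throw an IOException
  // instead of allocating new namespace objects.
  public boolean isLowOnMemory() {
    return tenuredPool != null && tenuredPool.isUsageThresholdExceeded();
  }
}
{code}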
Resources other than memory that could be constrained are CPU, network and disk. We
do not think we need to monitor those resources. The monitoring of
critical resources, and the policy of what action to take, can sit outside the
Namenode process itself.
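A minimal sketch of such an external monitor, assuming the Namenode JVM is started with remote JMX enabled; the host, port and any follow-up action here are assumptions, not existing Hadoop behaviour.
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// External process that polls the Namenode's heap usage over remote JMX.
// The service URL is hypothetical; the Namenode would have to be launched
// with the com.sun.management.jmxremote options for this to connect.
public class NamenodeHeapMonitor {
  public static void main(String[] args) throws Exception {
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://namenode-host:8004/jmxrmi");
    JMXConnector connector = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection conn = connector.getMBeanServerConnection();
      MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
          conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
      MemoryUsage heap = memory.getHeapMemoryUsage();
      double usedFraction = (double) heap.getUsed() / heap.getMax();
      System.out.printf("Namenode heap usage: %.0f%%%n", usedFraction * 100);
      // An external policy could act on this value, e.g. alert operators or
      // tell the Namenode (through whatever admin hook is chosen) to start
      // refusing new file creations.
    } finally {
      connector.close();
    }
  }
}
{code}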
There are two causes of memory pressure on the Namenode. One is the
creation of a large number of files. This shrinks the free memory pool and the
GC has to work even harder to recycle memory. The other is a burst
of RPCs arriving at the Namenode (especially block reports). Such a burst causes
free memory to drop dramatically within a couple of seconds and makes the GC work
harder. And we know that when the GC runs hard, the server threads in the JVM
starve for CPU, causing timeouts on clients.