[
https://issues.apache.org/jira/browse/HDFS-4630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612097#comment-13612097
]
Suresh Srinivas commented on HDFS-4630:
---------------------------------------
Jiras are for reporting bugs, not a forum for asking questions. Please
discuss how to size the namenode and datanode processes on the user
mailing list.
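
For rough sizing context: a common Hadoop rule of thumb is that each file,
directory, and block costs the NameNode on the order of 150 bytes of heap,
and each block replica similarly costs a DataNode an entry in its ReplicaMap.
Below is a minimal back-of-envelope sketch of that arithmetic; the file
count, cluster size, and per-replica byte cost are illustrative assumptions,
not measured figures.

{code:java}
// Back-of-envelope heap estimate for a small-files workload.
// All inputs below are illustrative assumptions, not measurements.
public class HeapEstimate {
    public static void main(String[] args) {
        long files = 50_000_000L;          // "tens of millions" of small files (assumed)
        long blocksPerFile = 1;            // 10KB-1MB files each fit in a single block
        long replication = 3;              // HDFS default replication factor
        long datanodes = 20;               // assumed cluster size

        long bytesPerNameNodeObject = 150; // common Hadoop rule of thumb per file/dir/block
        long bytesPerReplicaEntry = 200;   // assumed ReplicaMap entry cost, not measured

        // The NameNode tracks one inode plus one block object per file.
        long nameNodeHeap = files * (1 + blocksPerFile) * bytesPerNameNodeObject;

        // Each DataNode holds its share of all replicas in its ReplicaMap.
        long replicasPerDataNode = files * blocksPerFile * replication / datanodes;
        long dataNodeHeap = replicasPerDataNode * bytesPerReplicaEntry;

        System.out.printf("NameNode heap            ~ %.1f GB%n", nameNodeHeap / 1e9);
        System.out.printf("DataNode ReplicaMap heap ~ %.1f GB%n", dataNodeHeap / 1e9);
    }
}
{code}

Under these assumptions the NameNode alone needs roughly 15 GB of heap just
for namespace objects, which is why consolidating small files (see the
sketch after the issue description below) is the usual fix rather than
swapping in an on-disk cache.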
> Datanode is going OOM due to small files in hdfs
> ------------------------------------------------
>
> Key: HDFS-4630
> URL: https://issues.apache.org/jira/browse/HDFS-4630
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, namenode
> Affects Versions: 2.0.0-alpha
> Environment: Ubuntu, Java 1.6
> Reporter: Ankush Bhatiya
> Priority: Blocker
>
> Hi,
> We have very small files (10KB-1MB each) in our HDFS, and the number of
> files is in the tens of millions. Because of this, both the namenode and
> the datanode go out of memory very frequently. When we analysed the heap
> dump of the datanode, most of the memory was used by ReplicaMap.
> Can we use EhCache or something similar so that all of this data is not
> kept in memory?
> Thanks
> Ankush
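
The standard remedy for this class of problem is to pack the small files
into a larger container so that tens of millions of inodes and blocks
collapse into a few. Below is a minimal sketch using Hadoop's SequenceFile
API (filename as key, file bytes as value); the argument layout and the
decision to buffer each small file fully in memory are assumptions for
illustration, not a production tool.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Packs every small file in a directory into one SequenceFile,
// replacing N inodes + N blocks with a single multi-block file.
public class SmallFilePacker {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path(args[0]); // directory of small files (assumed layout)
        Path packed = new Path(args[1]);   // output SequenceFile

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(packed),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (!status.isFile()) {
                    continue;
                }
                // Safe to buffer fully: the files are 10KB-1MB by assumption.
                byte[] buf = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(buf);
                }
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(buf));
            }
        }
    }
}
{code}

After packing, each original file is retrievable by key via
SequenceFile.Reader, and both the NameNode namespace and every DataNode's
ReplicaMap shrink by roughly the number of files consolidated. Hadoop
Archives (HAR) or HBase are common alternatives when random access by
name matters more.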