[ https://issues.apache.org/jira/browse/HDFS-4630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612114#comment-13612114 ]

Suresh Srinivas commented on HDFS-4630:
---------------------------------------

bq. This is not a question, it's an issue: the datanode is going OOM because it 
cannot hold pointers to all of its blocks.
The datanode JVM needs to be sized correctly to hold the block replicas it stores. 
A datanode's memory consumption is proportional to the number of block replicas it 
holds. Hence my earlier point: please look into how to size the datanode process 
for your replica count.
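
As a rough sizing sketch (the per-replica byte figure is an assumption for 
illustration, not a number from this issue): each replica costs on the order of a 
few hundred bytes of datanode heap in ReplicaMap and related structures, so a 
datanode holding 10 million replicas needs very roughly

    10,000,000 replicas x ~300 bytes  =  ~3 GB of heap for block metadata alone,

plus normal JVM overhead. The datanode heap can then be raised via 
conf/hadoop-env.sh, for example:

    # Illustrative value only: give the DataNode JVM a 4 GB max heap;
    # size it according to the replica count on your datanodes.
    export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"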

                
> Datanode is going OOM due to small files in hdfs
> ------------------------------------------------
>
>                 Key: HDFS-4630
>                 URL: https://issues.apache.org/jira/browse/HDFS-4630
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>    Affects Versions: 2.0.0-alpha
>         Environment: Ubuntu, Java 1.6
>            Reporter: Ankush Bhatiya
>            Priority: Blocker
>
> Hi, 
> We have very small files (each 10 KB-1 MB) in our HDFS, and the number of files 
> is in the tens of millions. Because of this, both the namenode and the datanode 
> go out of memory very frequently. When we analysed a heap dump of the datanode, 
> most of the memory was used by ReplicaMap. 
> Can we use EhCache or something similar so that not all of this data is kept in memory? 
> Thanks
> Ankush

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
