[ https://issues.apache.org/jira/browse/HDFS-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthik Kambatla updated HDFS-6088:
-----------------------------------

    Target Version/s: 2.6.0  (was: 2.5.0)

> Add configurable maximum block count for datanode
> -------------------------------------------------
>
>                 Key: HDFS-6088
>                 URL: https://issues.apache.org/jira/browse/HDFS-6088
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>
> Currently, datanode resources are protected by the free-space check and the 
> balancer, but a datanode can still run out of memory simply by storing too 
> many blocks. When blocks are small, the datanode appears to have plenty of 
> space for more of them.
> I propose adding a configurable maximum block count to the datanode. Since 
> datanodes can have different heap configurations, it makes sense to enforce 
> this at the datanode level rather than in the namenode.
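
As a rough illustration of the proposed datanode-level setting, such a limit might be expressed in hdfs-site.xml along these lines. The property name and default below are hypothetical; the ticket proposes the feature but does not fix a configuration key:

```xml
<!-- Hypothetical property name and value; HDFS-6088 only proposes the
     feature, and no configuration key is specified in the ticket. -->
<property>
  <name>dfs.datanode.max.block.count</name>
  <value>500000</value>
  <description>Maximum number of block replicas this datanode will accept,
    regardless of remaining disk space. Intended to protect datanode heap
    usage when many small blocks are stored.</description>
</property>
```

Because each datanode reads its own hdfs-site.xml, operators could tune the limit per node to match that node's heap size, which is the rationale for enforcing it locally rather than in the namenode.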



--
This message was sent by Atlassian JIRA
(v6.2#6252)
