[
https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
jinglong.liujl updated HDFS-1268:
---------------------------------
Attachment: patch.diff
Attaching a patch.
> Extract blockInvalidateLimit as a separate configuration
> ---------------------------------------------------------
>
> Key: HDFS-1268
> URL: https://issues.apache.org/jira/browse/HDFS-1268
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Affects Versions: 0.22.0
> Reporter: jinglong.liujl
> Attachments: patch.diff
>
>
> If many blocks pile up in recentInvalidateSets, only
> Math.max(blockInvalidateLimit, 20*(int)(heartbeatInterval/1000)) invalid
> blocks can be carried in a single heartbeat (by default, 100). Under high
> write stress, block invalidation cannot keep up with the rate of writes.
> We extract blockInvalidateLimit into a separate configuration parameter so
> that users can tune it for their own clusters.
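> A minimal sketch of the intended change (the config key name
> "dfs.block.invalidate.limit" and the helper method below are assumptions
> for illustration, not taken from patch.diff):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> // Sketch: compute the per-heartbeat invalidation limit, letting a
> // config key override the hard-coded heartbeat-derived floor.
> // The key name "dfs.block.invalidate.limit" is an assumption.
> static int getBlockInvalidateLimit(Configuration conf,
>                                    long heartbeatIntervalMs,
>                                    int hardcodedLimit) {
>   int heartbeatFloor = 20 * (int) (heartbeatIntervalMs / 1000);
>   return conf.getInt("dfs.block.invalidate.limit",
>                      Math.max(hardcodedLimit, heartbeatFloor));
> }
> {code}
> With this in place, the default behavior is unchanged when the key is
> unset, while operators under heavy write load can raise the limit.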
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.