[
https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12883848#action_12883848
]
jinglong.liujl commented on HDFS-1268:
--------------------------------------
Do you mean "BLOCK_INVALIDATE_CHUNK" ?
Currently, invalidBlocklimit is computed by max (BLOCK_INVALIDATE_CHUNK, 20 *
heartbeatInterval), If I want to modified this parameter, there're two choice.
1. set BLOCK_INVALIDATE_CHUNK a huge one, and re-compile code
2. increase heartbeat interval, but it can not carry more blocks totally.
what I want is made invalidBlocklimit can be configed by user.
If our cluster meet this "corner case", (in fact, it's not corner case, we use
hbase in this cluster, this case can be seen very often.), why not config this
parameter and restart cluster to prevent this issue?
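For illustration, here is a minimal sketch of the current computation and of
the proposed override, assuming the names above; the config key
"dfs.block.invalidate.limit" is only an assumed name for the new parameter:
{code}
import org.apache.hadoop.conf.Configuration;

public class InvalidateLimitSketch {
  // Default chunk size, 100 as mentioned in the issue description.
  private static final int BLOCK_INVALIDATE_CHUNK = 100;

  // Current behaviour: the limit is derived from the heartbeat interval
  // (in milliseconds) and a compile-time constant.
  static int currentLimit(long heartbeatIntervalMs) {
    return Math.max(BLOCK_INVALIDATE_CHUNK,
        20 * (int) (heartbeatIntervalMs / 1000));
  }

  // Proposed behaviour: let operators override the limit via a config key
  // ("dfs.block.invalidate.limit" is an assumed name here), falling back
  // to the old computed value.
  static int proposedLimit(Configuration conf, long heartbeatIntervalMs) {
    return conf.getInt("dfs.block.invalidate.limit",
        currentLimit(heartbeatIntervalMs));
  }
}
{code}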
> Extract blockInvalidateLimit as a separate configuration
> --------------------------------------------------------
>
> Key: HDFS-1268
> URL: https://issues.apache.org/jira/browse/HDFS-1268
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Affects Versions: 0.22.0
> Reporter: jinglong.liujl
> Attachments: patch.diff
>
>
> If many blocks pile up in recentInvalidateSets, only
> Math.max(blockInvalidateLimit, 20*(int)(heartbeatInterval/1000)) invalid
> blocks can be carried in a heartbeat (100 by default). Under high write
> stress, the removal of invalidated blocks cannot keep up with the speed of
> writing.
> We extract blockInvalidateLimit into a separate config parameter so that
> users can choose the right value for their cluster.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.