[ https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12883277#action_12883277 ]

Konstantin Shvachko commented on HDFS-1268:
-------------------------------------------

I was actually in favor of introducing the parameter, see 
[here|https://issues.apache.org/jira/browse/HADOOP-774?focusedCommentId=12455413&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12455413]
So it is mostly about a clear motivation, and making sure that the solution will 
actually work for you.
So you are talking about a corner case where a DN is almost full and needs to 
remove blocks faster in order to free space for subsequent writes, right?
How does this parameter help on a running cluster? A configuration change takes 
effect only when you restart the name-node. Do you plan to restart the cluster 
when you see data-nodes getting close to full? 

> Extract blockInvalidateLimit as a separate configuration
> --------------------------------------------------------
>
>                 Key: HDFS-1268
>                 URL: https://issues.apache.org/jira/browse/HDFS-1268
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: jinglong.liujl
>         Attachments: patch.diff
>
>
>       If there are many files piled up in recentInvalidateSets, only 
> Math.max(blockInvalidateLimit, 
> 20*(int)(heartbeatInterval/1000)) invalid blocks can be carried in a 
> heartbeat (by default, that is 100). Under high write stress, the removal of 
> invalidated blocks cannot keep up with the rate of writing. 
>     We extract blockInvalidateLimit into a separate config parameter so that 
> users can choose the right value for their cluster. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
