Vinayakumar B commented on HDFS-10987:

bq. To be precise, the number of blocks doesn't have to be huge. It will yield 
if the number is greater than the configured per-iteration-limit.
Yes, that's correct. But before this patch, the check against the per-iteration 
limit was done only after checking all of a node's blocks, so yielding happened 
only once the current node's block list was completely scanned.
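The change above can be sketched as follows. This is only an illustrative model of the yielding behavior, not the actual DecommissionManager code: the class, field, and limit names are made up, and a plain ReentrantLock stands in for the namesystem lock. The point is that the counter is checked per block, so the lock is released mid-node once the configured limit is hit, instead of only between nodes.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only -- names and the limit are hypothetical,
// and ReentrantLock stands in for the namesystem write lock.
public class YieldSketch {
    static final int BLOCKS_PER_LOCK = 4; // stand-in for the per-iteration limit
    static int yields = 0;

    static void processBlocks(int totalBlocks) {
        ReentrantLock namesystemLock = new ReentrantLock();
        namesystemLock.lock();
        try {
            int processedSinceYield = 0;
            for (int i = 0; i < totalBlocks; i++) {
                // ... examine block i under the lock ...
                if (++processedSinceYield >= BLOCKS_PER_LOCK) {
                    // Yield mid-node instead of waiting for the
                    // node's full block list to be scanned.
                    namesystemLock.unlock();
                    yields++;
                    namesystemLock.lock();
                    processedSinceYield = 0;
                }
            }
        } finally {
            namesystemLock.unlock();
        }
    }

    public static void main(String[] args) {
        processBlocks(10);
        // 10 blocks with a limit of 4: the lock is yielded twice
        if (yields != 2) throw new AssertionError("expected 2 yields, got " + yields);
        System.out.println("yields=" + yields);
    }
}
```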

bq. When the sleep is interrupted, it should probably not ignore. It looks like 
it can simply return.
Yes. Along with that, IMO we should also add 'namesystem.isRunning()' to the 
while-loop condition in 'check()' so execution ends quickly on shutdown.
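A rough shape of the proposed loop, with both suggestions applied, is below. Again this is a hypothetical standalone model, not the real Monitor code: isRunning() here is a static flag rather than the actual namesystem call, and the iteration cap exists only so the sketch terminates. It shows the loop re-checking the running flag each iteration and returning (after restoring interrupt status) when the sleep is interrupted, instead of swallowing the exception.

```java
// Illustrative sketch only -- isRunning() and the loop body are
// stand-ins for the real Monitor/namesystem APIs.
public class MonitorSketch {
    static volatile boolean running = true;
    static volatile int iterations = 0;

    static boolean isRunning() { return running; }

    static void check() {
        // Re-check isRunning() each pass so shutdown ends the loop fast.
        while (isRunning() && iterations < 100) {
            iterations++;
            // ... do one unit of decommission checking ...
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                // Don't ignore the interrupt: restore status and return.
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(MonitorSketch::check);
        t.start();
        Thread.sleep(10);
        t.interrupt(); // interrupting the sleep makes check() return
        t.join(1000);
        if (t.isAlive()) throw new AssertionError("check() did not exit");
        System.out.println("exited after " + iterations + " iterations");
    }
}
```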

> Make Decommission less expensive when lot of blocks present.
> ------------------------------------------------------------
>                 Key: HDFS-10987
>                 URL: https://issues.apache.org/jira/browse/HDFS-10987
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>         Attachments: HDFS-10987.patch
> When a user wants to decommission a node that has 50M+ blocks, it can hold 
> the namesystem lock for a long time; we've seen it take 36+ seconds. 
> During that time the Namenode is unavailable, and since decommission runs 
> continuously until all the blocks are replicated, the Namenode stays 
> unavailable.

This message was sent by Atlassian JIRA
