[ 
https://issues.apache.org/jira/browse/HDFS-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17552722#comment-17552722
 ] 

Hiroyuki Adachi edited comment on HDFS-16613 at 6/10/22 2:28 PM:
-----------------------------------------------------------------

[~caozhiqiang], thank you for explaining in detail.

I think the data flow you described is correct, and your approach to improving 
performance is right. My concern was the reconstruction load on a large 
cluster where blocksToProcess is much larger than maxTransfers. But I found I 
had misunderstood: the blocks held by the busy node are replicated, not 
reconstructed.

So I think there is no problem using 
dfs.namenode.replication.max-streams-hard-limit for this purpose. It is 
basically intended for the highest-priority replication, but the patch will 
not affect that: since computeReconstructionWorkForBlocks() processes the 
higher-priority replication queues first, the decommissioning node will not be 
filled up by low-redundancy EC block replication tasks.
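The priority-first scheduling described above can be sketched roughly as 
follows. This is a simplified illustration, not the actual Hadoop code; the 
class and method names are made up for the example, and only the convention 
that lower queue indexes mean higher priority mirrors HDFS.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch (not real HDFS code) of draining priority-ordered
 * replication queues under a per-node stream cap: higher-priority work
 * consumes the cap first, so low-redundancy EC replication only gets
 * whatever headroom remains. All identifiers here are illustrative.
 */
public class PriorityReplicationSketch {
    // Lower index = higher priority.
    static final List<List<String>> queues = new ArrayList<>();

    static List<String> schedule(int maxStreamsHardLimit) {
        List<String> scheduled = new ArrayList<>();
        for (List<String> queue : queues) {       // highest priority first
            for (String block : queue) {
                if (scheduled.size() >= maxStreamsHardLimit) {
                    return scheduled;             // cap reached: lower-priority
                }                                 // work waits for a later pass
                scheduled.add(block);
            }
        }
        return scheduled;
    }

    public static void main(String[] args) {
        List<String> urgent = new ArrayList<>();
        urgent.add("blk_urgent_1");
        urgent.add("blk_urgent_2");
        List<String> lowRedundancyEc = new ArrayList<>();
        lowRedundancyEc.add("blk_ec_1");
        lowRedundancyEc.add("blk_ec_2");
        queues.add(urgent);
        queues.add(lowRedundancyEc);

        // With a cap of 3, both urgent blocks are scheduled before any
        // EC block, and only one EC block fits under the cap.
        System.out.println(schedule(3));
    }
}
```

This is why raising the hard limit for a decommissioning node does not starve 
the highest-priority queue: that queue is always drained before EC tasks see 
the cap.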


was (Author: hadachi):
[~caozhiqiang], thank you for explaining in detail.

I think the data flow you described is correct, and your approach to improving 
performance is right. My concern was the reconstruction load on a large 
cluster where blocksToProcess is much larger than maxTransfers. But I found I 
had misunderstood: the blocks held by the busy node are replicated, not 
reconstructed. So I think there is no problem using 
dfs.namenode.replication.max-streams-hard-limit for this purpose.

> EC: Improve performance of decommissioning dn with many ec blocks
> -----------------------------------------------------------------
>
>                 Key: HDFS-16613
>                 URL: https://issues.apache.org/jira/browse/HDFS-16613
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: ec, erasure-coding, namenode
>    Affects Versions: 3.4.0
>            Reporter: caozhiqiang
>            Assignee: caozhiqiang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2022-06-07-11-46-42-389.png, 
> image-2022-06-07-17-42-16-075.png, image-2022-06-07-17-45-45-316.png, 
> image-2022-06-07-17-51-04-876.png, image-2022-06-07-17-55-40-203.png, 
> image-2022-06-08-11-38-29-664.png, image-2022-06-08-11-41-11-127.png
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In an HDFS cluster with many EC blocks, decommissioning a DataNode is very 
> slow. The reason is that, unlike replicated blocks, which can be copied from 
> any DataNode holding a replica, an EC block has to be replicated from the 
> decommissioning DataNode itself.
> The configurations dfs.namenode.replication.max-streams and 
> dfs.namenode.replication.max-streams-hard-limit limit the replication speed, 
> but increasing them puts the whole cluster's network at risk. So a new 
> configuration should be added to limit the decommissioning DataNode 
> separately, distinguished from the cluster-wide max-streams limit.
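
The split the description proposes can be sketched as a per-node limit 
selection. This is a hypothetical illustration, not the actual patch: the 
class name, method name, and the dedicated decommission cap are all made up 
for the example (the discussion above settled on reusing 
dfs.namenode.replication.max-streams-hard-limit instead).

```java
/**
 * Minimal sketch of the idea in the issue description: a decommissioning
 * DataNode gets its own, separately tunable transfer cap instead of the
 * cluster-wide max-streams value. All identifiers here are illustrative,
 * not real HDFS configuration keys or code.
 */
public class DecommissionLimitSketch {
    // Stands in for the cluster-wide dfs.namenode.replication.max-streams.
    static final int MAX_STREAMS = 2;
    // Hypothetical dedicated cap for decommissioning nodes only.
    static final int DECOMMISSION_MAX_STREAMS = 8;

    static int maxTransfersFor(boolean decommissioning) {
        // A decommissioning node may replicate faster without raising the
        // limit for every other node in the cluster.
        return decommissioning ? DECOMMISSION_MAX_STREAMS : MAX_STREAMS;
    }

    public static void main(String[] args) {
        System.out.println("normal node cap: " + maxTransfersFor(false));
        System.out.println("decommissioning node cap: " + maxTransfersFor(true));
    }
}
```

The point of the split is that the aggressive cap applies only to the node 
being drained, so the cluster-wide network risk of raising max-streams 
globally is avoided.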



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
