[ 
https://issues.apache.org/jira/browse/HDFS-17218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17776528#comment-17776528
 ] 

ASF GitHub Bot commented on HDFS-17218:
---------------------------------------

haiyang1987 commented on PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#issuecomment-1767833407

   > Back to this PR.
   > 
   > > IMO, adding a timeout mechanism may not add much pressure on NameNode. However, it seems that the implementation of that solution is more complex than the current patch and requires more comprehensive design and consideration. The good aspect is that the timeout mechanism can completely solve the problem of excess replica leakage; after all, the situation where datanodes fail to successfully delete replicas according to commands may not be limited to the scenario described in this JIRA.
   > 
   > I totally support the solution @zhangshuyan0 mentioned here. https://issues.apache.org/jira/browse/HDFS-17218?focusedCommentId=17774766&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17774766
   
   Thanks @Hexiaoqiao for your comment.
   Indeed, the timeout mechanism can completely solve the problem of excess replica leakage; however, its implementation cost may be relatively high. On a large cluster, operations such as balancing and replication reduction occur frequently, so the ExcessRedundancyMap holds a large number of entries.
   During each timeout scan, all DataNodes and their corresponding blocks would be fully traversed, which might increase the time the NameNode write lock is held. Of course, with careful design we could keep the write-lock hold time as short as possible; a rough sketch of such a bounded scan follows.
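   To make the trade-off concrete, here is a minimal, illustrative sketch of a bounded timeout scan. Every name, type, and the 30-minute timeout below are invented for this example and are not Hadoop's real BlockManager/ExcessRedundancyMap API; a real patch would also need to resume traversal where it left off instead of restarting each pass.

```java
// Illustrative sketch only: every name here is invented for this example
// and is NOT Hadoop's real BlockManager/ExcessRedundancyMap API.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ExcessTimeoutScanSketch {
  static final class ExcessEntry {
    final long blockId;
    final long addedTimeMs; // when the replica was marked excess
    ExcessEntry(long blockId, long addedTimeMs) {
      this.blockId = blockId;
      this.addedTimeMs = addedTimeMs;
    }
  }

  // datanode UUID -> excess replicas recorded for that DN
  private final Map<String, Set<ExcessEntry>> excessMap = new HashMap<>();
  // stand-in for the NameNode's namesystem write lock
  private final ReentrantReadWriteLock nsLock = new ReentrantReadWriteLock();
  private final long timeoutMs = TimeUnit.MINUTES.toMillis(30); // assumed knob
  private final int maxEntriesPerScan = 1000; // bound work per lock hold

  /** One periodic pass: re-queue invalidation for entries older than the
   *  timeout, giving the lock back after a bounded number of entries. */
  void scanOnce() {
    long now = System.currentTimeMillis();
    int processed = 0;
    nsLock.writeLock().lock();
    try {
      for (Map.Entry<String, Set<ExcessEntry>> dn : excessMap.entrySet()) {
        Iterator<ExcessEntry> it = dn.getValue().iterator();
        while (it.hasNext()) {
          ExcessEntry e = it.next();
          if (now - e.addedTimeMs > timeoutMs) {
            it.remove(); // leaked: the DN never confirmed the deletion
            reissueInvalidate(dn.getKey(), e.blockId);
          }
          if (++processed >= maxEntriesPerScan) {
            return; // yield the lock; a real impl would resume here next scan
          }
        }
      }
    } finally {
      nsLock.writeLock().unlock();
    }
  }

  private void reissueInvalidate(String dnUuid, long blockId) {
    // A real implementation would re-queue a DNA_INVALIDATE command here.
  }
}
```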
   
   The current PR solves the ExcessRedundancyMap leakage problem on a case-by-case basis, and its implementation cost is relatively low.
   If similar ExcessRedundancyMap leaks appear again in the future, I think we should probably handle them case by case as well: find the root cause of each leak and fix it.
   
   Of course, if we decide to adopt the timeout mechanism solution, I will submit a new PR. I look forward to your feedback. Thanks.
   




> NameNode should remove its excess blocks from the ExcessRedundancyMap When a 
> DN registers
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-17218
>                 URL: https://issues.apache.org/jira/browse/HDFS-17218
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2023-10-12-15-52-52-336.png
>
>
> Currently we found that a DN will lose all of its pending DNA_INVALIDATE blocks if it restarts.
> *Root cause*
> The DN deletes blocks asynchronously, so it may have many pending deletion blocks in memory.
> When the DN restarts, these cached blocks may be lost. This causes some blocks in the NameNode's excess map to be leaked, which results in many blocks having more replicas than expected.
> *Solution*
> The NameNode should remove a DN's excess blocks from the ExcessRedundancyMap when that DN registers.
> This approach ensures that when the DN's full block report is processed, 'processExtraRedundancy' can be performed according to the actual state of the blocks.
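For illustration, a minimal sketch of the proposed register-time cleanup. The names below are invented and are not the actual HDFS-17218 patch or Hadoop's real BlockManager/DatanodeManager API:

```java
// Illustrative sketch only: invented names, not the actual HDFS-17218 patch
// or Hadoop's real BlockManager/DatanodeManager API.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class RegisterCleanupSketch {
  // datanode UUID -> block IDs the NameNode recorded as excess on that DN
  private final Map<String, Set<Long>> excessRedundancyMap = new HashMap<>();

  void addExcess(String dnUuid, long blockId) {
    excessRedundancyMap.computeIfAbsent(dnUuid, k -> new HashSet<>())
        .add(blockId);
  }

  // Called from the registration path (under the write lock in a real NN).
  // Dropping the stale entries here means the DN's first full block report
  // is evaluated against the replicas that actually exist, so any block that
  // is still over-replicated gets re-added by processExtraRedundancy.
  void onDatanodeRegistered(String dnUuid) {
    Set<Long> stale = excessRedundancyMap.remove(dnUuid);
    if (stale != null) {
      System.out.println("Cleared " + stale.size()
          + " stale excess entries for re-registered DN " + dnUuid);
    }
  }
}
```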



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
