[ https://issues.apache.org/jira/browse/HDFS-16316?focusedWorklogId=728013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-728013 ]
ASF GitHub Bot logged work on HDFS-16316:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 16/Feb/22 02:50
Start Date: 16/Feb/22 02:50
Worklog Time Spent: 10m

Work Description: jianghuazhu commented on pull request #3861:
URL: https://github.com/apache/hadoop/pull/3861#issuecomment-1041040826

Thanks for the suggestion, @jojochuang. I updated the unit tests again and ran some additional checks. When I remove the fix, the newly added unit test fails, which is expected, and the other unit tests are unaffected.
Here is an example of a test run with the fix removed:
![image](https://user-images.githubusercontent.com/6416939/154185727-620eacac-5b4e-4b49-b6f2-1e612017cc35.png)
Here is an example of a normal test run:
![image](https://user-images.githubusercontent.com/6416939/154186788-f53338e6-2a40-46b1-95c9-59282fa7616b.png)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

Worklog Id: (was: 728013)
Time Spent: 3h 50m (was: 3h 40m)

> Improve DirectoryScanner: add regular file check related block
> --------------------------------------------------------------
>
>                 Key: HDFS-16316
>                 URL: https://issues.apache.org/jira/browse/HDFS-16316
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.9.2
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Something unusual happened in our production environment.
> The DataNode is configured with 11 disks (${dfs.datanode.data.dir}). The used capacity computed for 10 of the disks is normal, but the value computed for the remaining disk is much larger, which is very strange.
> This is the live view on the NameNode:
> !screenshot-1.png!
> This is the live view on the DataNode:
> !screenshot-2.png!
> We can compare with the view on Linux:
> !screenshot-3.png!
> There is a big gap here regarding '/mnt/dfs/11/data'. This situation should not be allowed to happen.
> I found that there are some abnormal block files: wrongly named blk_xxxx.meta files in some subdir directories, which cause the computed space to be abnormal.
> Here are some of the abnormal block files:
> !screenshot-4.png!
> Such files should not be treated as normal blocks. They should be actively identified and filtered out, which is good for cluster stability.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
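The filtering described in the issue amounts to accepting only regular files whose names match the expected block-file naming scheme (blk_&lt;id&gt; for block data, blk_&lt;id&gt;_&lt;genstamp&gt;.meta for metadata). Here is a minimal, self-contained sketch of such a name check; the class name and regex patterns are illustrative assumptions based on the common HDFS naming convention, not the actual DirectoryScanner code:

```java
import java.util.regex.Pattern;

// Illustrative name check for DataNode block files. The patterns are
// assumptions based on the usual HDFS naming scheme, not Hadoop internals.
public class BlockFileNameCheck {
    // Block data files look like: blk_1073741825 (the id may be negative)
    private static final Pattern BLOCK_FILE =
        Pattern.compile("^blk_-?\\d+$");
    // Metadata files look like: blk_1073741825_1001.meta (id + generation stamp)
    private static final Pattern META_FILE =
        Pattern.compile("^blk_-?\\d+_\\d+\\.meta$");

    /** Returns true if the name matches a well-formed block or meta file. */
    public static boolean isValidBlockFileName(String name) {
        return BLOCK_FILE.matcher(name).matches()
            || META_FILE.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidBlockFileName("blk_1073741825"));           // true
        System.out.println(isValidBlockFileName("blk_1073741825_1001.meta")); // true
        // A malformed meta name like those in screenshot-4 is rejected:
        System.out.println(isValidBlockFileName("blk_1073741825.meta"));      // false
    }
}
```

In a real scan one would additionally verify `Files.isRegularFile(path)` (java.nio) before counting the file toward used capacity, so that directories or special files named like blocks are also skipped.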