[ 
https://issues.apache.org/jira/browse/HDFS-16316?focusedWorklogId=703854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703854
 ]

ASF GitHub Bot logged work on HDFS-16316:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Jan/22 10:22
            Start Date: 05/Jan/22 10:22
    Worklog Time Spent: 10m 
      Work Description: jianghuazhu opened a new pull request #3861:
URL: https://github.com/apache/hadoop/pull/3861


   
   
   ### Description of PR
   When blk_xxxx or blk_xxxx.meta is not a regular file, it has adverse 
effects on the cluster, such as errors in the used-space calculation and 
possible failures when reading data.
   Blocks of this type should not be treated as normal block files.
   Details:
   HDFS-16316
   
   
   ### How was this patch tested?
   Verified that a block file is only accepted when it is a real regular file.
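
   The core of the fix is to reject block and meta files that do not resolve 
to regular files. A minimal, self-contained sketch of that kind of check 
(isValidBlockFile is a hypothetical helper for illustration, not the actual 
patch code):

```java
import java.io.File;
import java.nio.file.Files;

public class RegularFileCheck {

    // Hypothetical helper: only accept block/meta files that are real
    // regular files (not directories, devices, or symlinks that resolve
    // to non-regular files).
    static boolean isValidBlockFile(File f) {
        // Files.isRegularFile follows symlinks by default, so a symlink
        // pointing at a regular file still passes, while a directory or
        // broken link does not.
        return Files.isRegularFile(f.toPath());
    }

    public static void main(String[] args) throws Exception {
        File regular = File.createTempFile("blk_", ".meta");
        regular.deleteOnExit();
        File directory = Files.createTempDirectory("blk_").toFile();
        directory.deleteOnExit();

        System.out.println(isValidBlockFile(regular));   // true
        System.out.println(isValidBlockFile(directory)); // false
    }
}
```

   A scanner applying such a filter would simply skip (and optionally log) 
any entry that fails the check instead of adding it to the block map and the 
used-space total.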
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

            Worklog Id:     (was: 703854)
    Remaining Estimate: 0h
            Time Spent: 10m

> Improve DirectoryScanner: add regular file check related block
> --------------------------------------------------------------
>
>                 Key: HDFS-16316
>                 URL: https://issues.apache.org/jira/browse/HDFS-16316
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.9.2
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Something unusual happened in our production environment.
> The DataNode is configured with 11 disks (${dfs.datanode.data.dir}). The 
> used capacity computed for 10 of the disks is correct, but the value 
> computed for the remaining disk is much larger, which is very strange.
> Live view on the NameNode:
>  !screenshot-1.png! 
> Live view on the DataNode:
>  !screenshot-2.png! 
> The view on Linux:
>  !screenshot-3.png! 
> There is a big gap here regarding '/mnt/dfs/11/data'. This situation should 
> not be allowed to happen.
> I found some abnormal block files: there are invalid blk_xxxx.meta files in 
> some subdir directories, which corrupt the space calculation.
> Here are some abnormal block files:
>  !screenshot-4.png! 
> Such files should not be used as normal blocks. They should be actively 
> identified and filtered out, which is good for cluster stability.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
