[ 
https://issues.apache.org/jira/browse/HDFS-16180?focusedWorklogId=639768&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-639768
 ]

ASF GitHub Bot logged work on HDFS-16180:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Aug/21 09:47
            Start Date: 19/Aug/21 09:47
    Worklog Time Spent: 10m 
      Work Description: Neilxzn opened a new pull request #3315:
URL: https://github.com/apache/hadoop/pull/3315


   
   ### Description of PR
   FsVolumeImpl.nextBlock should consider that the block meta file has been 
deleted
   https://issues.apache.org/jira/browse/HDFS-16180
   
   ### How was this patch tested?
   No new tests.
   
   ### For code changes:
   FsVolumeImpl.nextBlock handles FileNotFoundException.
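
   A minimal, self-contained sketch of the catch-and-skip idea described above. It is not the actual Hadoop code: the names `BlockIteratorSketch`, `blockQueue`, and `lookupMetaFile` (including the made-up `_1001.meta` suffix) are hypothetical stand-ins for the real FsVolumeImpl$BlockIteratorImpl machinery.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Queue;

// Illustrative sketch only: skip blocks whose meta file has vanished instead
// of surfacing an I/O error to the caller (e.g. the VolumeScanner).
class BlockIteratorSketch {
  private final Queue<File> blockQueue; // hypothetical queue of block files to scan

  BlockIteratorSketch(Queue<File> blockQueue) {
    this.blockQueue = blockQueue;
  }

  /** Returns the next block whose meta file still exists, or null when exhausted. */
  File nextBlock() {
    while (!blockQueue.isEmpty()) {
      File blockFile = blockQueue.poll();
      try {
        lookupMetaFile(blockFile); // stand-in for the meta-file lookup
        return blockFile;
      } catch (FileNotFoundException e) {
        // The block (and its meta file) was deleted after it was queued:
        // skip it and keep iterating rather than propagating an error.
      }
    }
    return null;
  }

  /** Hypothetical lookup; the "_1001.meta" genstamp suffix is made up so the sketch compiles. */
  private File lookupMetaFile(File blockFile) throws FileNotFoundException {
    File meta = new File(blockFile.getParent(), blockFile.getName() + "_1001.meta");
    if (!meta.exists()) {
      throw new FileNotFoundException("Meta file not found, blockFile=" + blockFile);
    }
    return meta;
  }
}
```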
   
   




Issue Time Tracking
-------------------

            Worklog Id:     (was: 639768)
    Remaining Estimate: 0h
            Time Spent: 10m

> FsVolumeImpl.nextBlock should consider that the block meta file has been 
> deleted.
> ---------------------------------------------------------------------------------
>
>                 Key: HDFS-16180
>                 URL: https://issues.apache.org/jira/browse/HDFS-16180
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.3.0, 3.4.0
>            Reporter: Max  Xie
>            Priority: Minor
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In my cluster, we found that when the VolumeScanner runs, the DataNode 
> sometimes throws the error logs below:
> ```
>  
> 2021-08-19 08:00:11,549 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1020175758-nnip-1597745872895 blk_1142977964_69237147 URI file:/disk1/dfs/data/current/BP-1020175758-nnip-1597745872895/current/finalized/subdir0/subdir21/blk_1142977964
> 2021-08-19 08:00:48,368 ERROR org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl: nextBlock(DS-060c8e4c-1ef6-49f5-91ef-91957356891a, BP-1020175758-nnip-1597745872895): I/O error
> java.io.IOException: Meta file not found, blockFile=/disk1/dfs/data/current/BP-1020175758-nnip-1597745872895/current/finalized/subdir0/subdir21/blk_1142977964
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetUtil.findMetaFile(FsDatasetUtil.java:101)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl.nextBlock(FsVolumeImpl.java:809)
> at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:528)
> at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:628)
> 2021-08-19 08:00:48,368 WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/disk1/dfs/data, DS-060c8e4c-1ef6-49f5-91ef-91957356891a): nextBlock error on org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl@7febc6b4
> ```
> When the VolumeScanner scanned block blk_1142977964, the block had already 
> been deleted by the DataNode, so the scanner could not find its meta file 
> and threw the errors above.
>  
> Maybe we should handle FileNotFoundException in nextBlock to reduce these 
> error logs and the number of nextBlock retries.
>  
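
To make the race concrete, here is a small, purely illustrative sketch (it does not use the real FsDatasetUtil API; the class and method names are hypothetical) of a meta-file lookup that fails when the DataNode deletes a block between the time the scanner queued it and the time the scanner looks for its matching `blk_<id>_<genstamp>.meta` file:

```java
import java.io.File;
import java.io.FileNotFoundException;

// Illustrative only: mimics a "find the meta file for this block" lookup.
// The real lookup lives in FsDatasetUtil.findMetaFile; names here are hypothetical.
final class MetaLookupSketch {
  static File findMetaFile(File blockFile) throws FileNotFoundException {
    File dir = blockFile.getParentFile();
    String prefix = blockFile.getName() + "_"; // e.g. blk_1142977964_
    File[] matches = dir == null ? null
        : dir.listFiles((d, name) -> name.startsWith(prefix) && name.endsWith(".meta"));
    if (matches == null || matches.length == 0) {
      // If the block (and its meta file) was deleted after the scanner queued it,
      // we end up here -- the case HDFS-16180 proposes nextBlock should treat as
      // "skip this block" rather than an I/O error.
      throw new FileNotFoundException("Meta file not found, blockFile=" + blockFile);
    }
    return matches[0];
  }
}
```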


