[ https://issues.apache.org/jira/browse/HDFS-16318?focusedWorklogId=680719&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-680719 ]

ASF GitHub Bot logged work on HDFS-16318:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 12/Nov/21 08:43
            Start Date: 12/Nov/21 08:43
    Worklog Time Spent: 10m 
      Work Description: GuoPhilipse opened a new pull request #3649:
URL: https://github.com/apache/hadoop/pull/3649


   We may hit a `Could not obtain the last block locations` exception, but when a job is reading more than one file, the following warnings and stack trace cannot tell us which block or datanode is the problem.
   
   `2021-11-12 14:01:59,633 WARN [main] org.apache.hadoop.hdfs.DFSClient: Last block locations not available. Datanodes might not have reported blocks completely. Will retry for 3 times`
   `2021-11-12 14:02:03,724 WARN [main] org.apache.hadoop.hdfs.DFSClient: Last block locations not available. Datanodes might not have reported blocks completely. Will retry for 2 times`
   `2021-11-12 14:02:07,726 WARN [main] org.apache.hadoop.hdfs.DFSClient: Last block locations not available. Datanodes might not have reported blocks completely. Will retry for 1 times`
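
   For context, the retry count in these warnings comes from the client-side setting `dfs.client.retry.times.get-last-block-length` (paired with `dfs.client.retry.interval-ms.get-last-block-length`). A hedged sketch, not part of this PR, of raising it programmatically, assuming hadoop-common on the classpath; the defaults shown (3 retries, 4000 ms apart) should be verified against your Hadoop release:

   ```java
   import org.apache.hadoop.conf.Configuration;

   public class RaiseLastBlockRetries {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Retry fetching the last block length more times before giving up.
       conf.setInt("dfs.client.retry.times.get-last-block-length", 6);
       // Milliseconds to wait between retries; 4000 is the assumed default.
       conf.setInt("dfs.client.retry.interval-ms.get-last-block-length", 4000);
       // Pass this conf to FileSystem.get(...) so the DFSClient picks it up.
       System.out.println(conf.get("dfs.client.retry.times.get-last-block-length"));
     }
   }
   ```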
   
   
   `Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedConstructorAccessor19.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:251)
        ... 11 more`
   `Caused by: java.io.IOException: Could not obtain the last block locations.
        at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:291)
        at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:264)
        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1535)
        at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:312)
        at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
        at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:261)
        at org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:463)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
        at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
        at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:66)
        ... 15 more`
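
   To make the failure actionable when a job reads many files, the exception message can carry the file path and last-block details. Below is a minimal, self-contained sketch of that idea, not the actual patch in this PR: the `BlockInfo` class and `lastBlockLocationError` helper are hypothetical stand-ins for the real `LocatedBlock`/`DFSInputStream` types.

   ```java
   import java.io.IOException;

   public class LastBlockErrorSketch {

     /** Hypothetical stand-in for org.apache.hadoop.hdfs.protocol.LocatedBlock. */
     static final class BlockInfo {
       final String blockName;   // e.g. "blk_1073741825_1001"
       final String[] datanodes; // datanode host:port list, may be empty

       BlockInfo(String blockName, String... datanodes) {
         this.blockName = blockName;
         this.datanodes = datanodes;
       }
     }

     /** Builds an exception that names the file and its last block. */
     static IOException lastBlockLocationError(String src, BlockInfo lastBlock) {
       StringBuilder msg = new StringBuilder(
           "Could not obtain the last block locations for file " + src);
       if (lastBlock != null) {
         msg.append(", last block: ").append(lastBlock.blockName)
            .append(", reported datanodes: ")
            .append(String.join(",", lastBlock.datanodes));
       }
       return new IOException(msg.toString());
     }

     public static void main(String[] args) {
       IOException e = lastBlockLocationError(
           "/warehouse/tbl/part-00042",
           new BlockInfo("blk_1073741825_1001", "dn1:9866", "dn2:9866"));
       // Prints the enriched message: the failing file and block are obvious
       // even when the job reads hundreds of inputs.
       System.out.println(e.getMessage());
     }
   }
   ```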



Issue Time Tracking
-------------------

            Worklog Id:     (was: 680719)
    Remaining Estimate: 0h
            Time Spent: 10m

> Add exception blockinfo
> -----------------------
>
>                 Key: HDFS-16318
>                 URL: https://issues.apache.org/jira/browse/HDFS-16318
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>    Affects Versions: 3.3.1
>            Reporter: guo
>            Priority: Minor
>          Time Spent: 10m
>  Remaining Estimate: 0h
>



