[
https://issues.apache.org/jira/browse/HDFS-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18040013#comment-18040013
]
ASF GitHub Bot commented on HDFS-16318:
---------------------------------------
github-actions[bot] closed pull request #3649: HDFS-16318. Add exception
blockinfo
URL: https://github.com/apache/hadoop/pull/3649
> Add exception blockinfo
> -----------------------
>
> Key: HDFS-16318
> URL: https://issues.apache.org/jira/browse/HDFS-16318
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Affects Versions: 3.3.1
> Reporter: guophilipse
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> We may hit a `Could not obtain the last block locations` exception, but when
> we are reading more than one file, the exception below cannot tell us which
> block or datanode is the problem. We can add more info to the log to help us
> find it (a sketch of such logging follows the quoted description).
> `2021-11-12 14:01:59,633 WARN [main] org.apache.hadoop.hdfs.DFSClient: Last
> block locations not available. Datanodes might not have reported blocks
> completely. Will retry for 3 times`
> `2021-11-12 14:02:03,724 WARN [main] org.apache.hadoop.hdfs.DFSClient: Last
> block locations not available. Datanodes might not have reported blocks
> completely. Will retry for 2 times`
> `2021-11-12 14:02:07,726 WARN [main] org.apache.hadoop.hdfs.DFSClient: Last
> block locations not available. Datanodes might not have reported blocks
> completely. Will retry for 1 times`
> `Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedConstructorAccessor19.newInstance(Unknown Source)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:251)
> ... 11 more`
> `Caused by: java.io.IOException: Could not obtain the last block locations.
> at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:291)
> at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:264)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1535)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
> at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:312)
> at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
> at
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:261)
> at
> org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:463)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
> at org.apache.hadoop.mapred.LineRecordReader.&lt;init&gt;(LineRecordReader.java:109)
> at
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
> at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.&lt;init&gt;(CombineHiveRecordReader.java:66)
> ... 15 more`
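
For readers hitting the same message: the gist of the proposal is to include the
file, block, and datanode details in the retry warning so the problem block can
be identified when many files are being read. The sketch below only illustrates
that logging pattern; the names used here (RetryWarningSketch, LastBlockInfo,
openInfoWithRetry) are hypothetical and are not the DFSClient internals or the
actual change in pull request #3649.

    import java.io.IOException;

    public class RetryWarningSketch {

      // Hypothetical holder for the details we want surfaced in the warning;
      // not an actual HDFS class.
      static class LastBlockInfo {
        final String filePath;
        final String blockId;
        final String datanodes;

        LastBlockInfo(String filePath, String blockId, String datanodes) {
          this.filePath = filePath;
          this.blockId = blockId;
          this.datanodes = datanodes;
        }
      }

      // Sketch of a retry loop whose warning names the file, block, and
      // datanodes, instead of only "Last block locations not available".
      static void openInfoWithRetry(LastBlockInfo info, int retries) throws IOException {
        for (int remaining = retries; remaining > 0; remaining--) {
          if (lastBlockComplete(info)) {
            return;
          }
          System.err.printf(
              "WARN DFSClient: Last block locations not available for file %s"
                  + " (block %s, datanodes %s). Datanodes might not have reported"
                  + " blocks completely. Will retry for %d times%n",
              info.filePath, info.blockId, info.datanodes, remaining);
        }
        throw new IOException("Could not obtain the last block locations for "
            + info.filePath + " (block " + info.blockId + ")");
      }

      // Stand-in for the real check against the NameNode; always fails here so
      // the enriched warnings and the final exception are printed.
      static boolean lastBlockComplete(LastBlockInfo info) {
        return false;
      }

      public static void main(String[] args) {
        try {
          openInfoWithRetry(
              new LastBlockInfo("/warehouse/tbl/part-00000",
                  "blk_1073741825_1001", "dn1:9866,dn2:9866"), 3);
        } catch (IOException e) {
          System.err.println(e.getMessage());
        }
      }
    }

Run as a standalone class, it prints three enriched warnings and then the final
IOException message, mirroring the log excerpt quoted above.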