[ https://issues.apache.org/jira/browse/HDFS-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17470365#comment-17470365 ]
范成波 commented on HDFS-16215:
----------------------------
Hello, I also encountered this problem, and I reproduced it in a test
environment. The problem is that the HDFS cluster restarted while files were
being written, so the lease on the file was never released; by default the
lease is only recovered after the 1 hour hard limit expires. If anything I
said is wrong, please correct me so we can discuss it.
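
If waiting out the hard lease limit is not acceptable, recovery can also be
forced. Below is a minimal sketch (not from this ticket; the path, polling
interval, and retry bound are illustrative) that uses
DistributedFileSystem#recoverLease to ask the NameNode to recover the last
under-construction block and close the file:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ForceLeaseRecovery {
    public static void main(String[] args) throws Exception {
        // Illustrative path: the file left OPENFORWRITE after the restart.
        Path stuckFile = new Path("/tmp/hosts7");
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Assumes fs.defaultFS points at an HDFS cluster.
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Ask the NameNode to start lease/block recovery now rather than
            // waiting for the ~1 hour hard-lease expiry. Returns true if the
            // file is already closed when the call completes.
            boolean closed = dfs.recoverLease(stuckFile);
            // Recovery is asynchronous, so poll until the file is finalized
            // (bounded here at ~1 minute, purely for illustration).
            for (int i = 0; !closed && i < 12; i++) {
                Thread.sleep(5000L);
                closed = dfs.isFileClosed(stuckFile);
            }
            System.out.println(stuckFile + " closed: " + closed);
        }
    }
}
{code}

Once the last block is finalized the NameNode knows its length, so a second
client can open the file without hitting CannotObtainBlockLengthException.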
> File read fails with CannotObtainBlockLengthException after Namenode is restarted
> ---------------------------------------------------------------------------------
>
> Key: HDFS-16215
> URL: https://issues.apache.org/jira/browse/HDFS-16215
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.2.2, 3.3.1
> Reporter: Srinivasu Majeti
> Priority: Minor
>
> When a file is being written by a first client (fsck shows OPENFORWRITE) and
> an HDFS outage happens and the cluster is brought back up, the first client
> is disconnected. When a new client then tries to open the file, we see
> "Cannot obtain block length for" as shown below.
> {code:java}
> /tmp/hosts7 134217728 bytes, replicated: replication=3, 1 block(s), OPENFORWRITE: OK
> 0. BP-1958960150-172.25.40.87-1628677864204:blk_1073745252_4430 len=134217728 Live_repl=3 [DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK]]
> Under Construction Block:
> 1. BP-1958960150-172.25.40.87-1628677864204:blk_1073745253_4431 len=0 Expected_repl=3 [DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK]]
> [root@c1265-node2 ~]# hdfs dfs -get /tmp/hosts7
> get: Cannot obtain block length for LocatedBlock{BP-1958960150-172.25.40.87-1628677864204:blk_1073745253_4431; getBlockSize()=0; corrupt=false; offset=134217728; locs=[DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK]]}
> {code}
> *Exception trace from the logs:*
> {code:java}
> Exception in thread "main" org.apache.hadoop.hdfs.CannotObtainBlockLengthException: Cannot obtain block length for LocatedBlock{BP-1958960150-172.25.40.87-1628677864204:blk_1073742720_1896; getBlockSize()=0; corrupt=false; offset=134217728; locs=[DatanodeInfoWithStorage[172.25.33.140:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.87:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK], DatanodeInfoWithStorage[172.25.36.17:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK]]}
>     at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:363)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
>     at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:201)
>     at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:185)
>     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1006)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:312)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:324)
>     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
> {code}
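
For completeness, the same recovery can be triggered from the shell with the
HDFS debug command (the path is the one from the fsck output above; the retry
count is illustrative):

{code}
hdfs debug recoverLease -path /tmp/hosts7 -retries 5
{code}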