[
https://issues.apache.org/jira/browse/HDFS-14690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16897279#comment-16897279
]
Wei-Chiu Chuang commented on HDFS-14690:
----------------------------------------
This is not a bug; it is the expected behavior when the NameNode is slow. You
can work around the problem with a few configuration parameters, e.g. the
client config dfs.client.block.write.locateFollowingBlock.retries, which
defaults to 5. Raising it to 10 or 15 should greatly alleviate the problem.
Additionally, you should also check the NameNode to understand why it was slow
-- GC pauses? Heavy I/O? Long-running RPCs?
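
As a sketch, the suggested workaround would go in the client-side
hdfs-site.xml along these lines (the property name and default of 5 are from
the comment above; the value 15 is one of the suggested settings, and where
exactly the file lives depends on your deployment):

```xml
<!-- hdfs-site.xml (client side): raise the number of times the DFS client
     retries asking the NameNode to locate the following block before
     giving up with "Unable to close file because the last block does not
     have enough number of replicas." -->
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <value>15</value>
</property>
```

Note this only masks NameNode slowness by retrying longer; the underlying
cause should still be investigated.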
> java.io.IOException: Unable to close file because the last block does not
> have enough number of replicas.
> ---------------------------------------------------------------------------------------------------------
>
> Key: HDFS-14690
> URL: https://issues.apache.org/jira/browse/HDFS-14690
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: huihui
> Priority: Blocker
>
> java.io.IOException: Unable to close file because the last block does not
> have enough number of replicas.
> at
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2298)
> at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2267)
> at
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
> at org.apache.avro.file.DataFileWriter.close(DataFileWriter.java:434)
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)