[
https://issues.apache.org/jira/browse/HDFS-4530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13588985#comment-13588985
]
Colin Patrick McCabe commented on HDFS-4530:
--------------------------------------------
As far as I know, the only reason closing a file ever returns a bad error
status (or in Java, throws an exception) is because you have a file open for
write and the data in the page cache cannot be written to disk. Since
{{BlockReaderLocal}} only reads, there is no chance of this happening.
However, in order to be absolutely sure, we probably should use
{{IOUtils.cleanup}} or something like that to log the exception and move on.
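For example, {{close()}} could be restructured along these lines. This is just a sketch, assuming the existing fields of {{BlockReaderLocal}} and Hadoop's {{IOUtils.cleanup(Log, Closeable...)}}; {{LOG}} stands for whatever commons-logging {{Log}} the class already has:
{code}
public synchronized void close() throws IOException {
  // Log and swallow any exception from the underlying streams so that the
  // direct buffers below are always returned to the pool.
  IOUtils.cleanup(LOG, dataIn, checksumIn);
  if (slowReadBuff != null) {
    bufferPool.returnBuffer(slowReadBuff);
    slowReadBuff = null;
  }
  if (checksumBuff != null) {
    bufferPool.returnBuffer(checksumBuff);
    checksumBuff = null;
  }
  startOffset = -1;
  checksum = null;
}
{code}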
> return buffer into direct bufferPool in BlockReaderLocal as possible
> --------------------------------------------------------------------
>
> Key: HDFS-4530
> URL: https://issues.apache.org/jira/browse/HDFS-4530
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 3.0.0
> Reporter: Liang Xie
> Assignee: Liang Xie
> Attachments: HDFS-4530.txt
>
>
> {code}
> public synchronized void close() throws IOException {
>   dataIn.close();
>   if (checksumIn != null) {
>     checksumIn.close();
>   }
>   if (slowReadBuff != null) {
>     bufferPool.returnBuffer(slowReadBuff);
>     slowReadBuff = null;
>   }
>   if (checksumBuff != null) {
>     bufferPool.returnBuffer(checksumBuff);
>     checksumBuff = null;
>   }
>   startOffset = -1;
>   checksum = null;
> }
> {code}
> If an IOException occurs in dataIn.close(), then slowReadBuff and
> checksumBuff can never be returned to the buffer pool. Let's make a
> trivial change to reduce this risk.
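A minimal sketch of the kind of change proposed in the description (the attached HDFS-4530.txt may differ in detail): wrap the stream closes in try/finally so the buffers are always returned, even when closing a stream throws:
{code}
public synchronized void close() throws IOException {
  try {
    dataIn.close();
    if (checksumIn != null) {
      checksumIn.close();
    }
  } finally {
    // Return the direct buffers to the pool even if a close() above threw.
    if (slowReadBuff != null) {
      bufferPool.returnBuffer(slowReadBuff);
      slowReadBuff = null;
    }
    if (checksumBuff != null) {
      bufferPool.returnBuffer(checksumBuff);
      checksumBuff = null;
    }
    startOffset = -1;
    checksum = null;
  }
}
{code}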