[ https://issues.apache.org/jira/browse/HDFS-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wei-Chiu Chuang updated HDFS-11229:
-----------------------------------
Release Note: The fix for HDFS-11056 reads the meta file to load the last
partial chunk checksum when a block is converted from finalized/temporary to
rbw. However, it did not close the file explicitly, which may cause the
number of open files to reach the system limit. This jira fixes it by closing
the file explicitly after the meta file is read. (was: The fix for HDFS-11056
reads the meta file to load the last partial chunk checksum when a block is
converted from finalized/temporary to rbw. However, it did not close the file
explicitly, which may cause the number of open files to reach the system
limit. This jira fixes this by closing the file explicitly after the meta
file is read.)
> HDFS-11056 failed to close meta file
> ------------------------------------
>
> Key: HDFS-11229
> URL: https://issues.apache.org/jira/browse/HDFS-11229
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.4, 3.0.0-alpha2
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Blocker
> Attachments: HDFS-11229.001.patch, HDFS-11229.branch-2.patch
>
>
> The following code fails to close the file after reading it.
> {code:title=FsVolumeImpl#loadLastPartialChunkChecksum}
> RandomAccessFile raf = new RandomAccessFile(metaFile, "r");
> raf.seek(offsetInChecksum);
> raf.read(lastChecksum, 0, checksumSize);
> return lastChecksum;
> {code}
> This must be fixed because every append operation uses this piece of code.
> Without an explicit close, the number of open files can reach the system
> limit before the RandomAccessFile objects are garbage collected.
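> The fix itself ships only as an attached patch, so the following is a
> hedged sketch of the kind of change the release note describes: closing the
> file deterministically via try-with-resources. The metaFile,
> offsetInChecksum, checksumSize, and lastChecksum names come from the
> snippet above; the standalone class and method signature are assumptions
> added for illustration, not the attached patch.
> {code:title=Sketch of an explicit-close fix (assumed shape)}
> import java.io.File;
> import java.io.IOException;
> import java.io.RandomAccessFile;
>
> class LoadChecksumSketch {
>   static byte[] loadLastPartialChunkChecksum(
>       File metaFile, long offsetInChecksum, int checksumSize)
>       throws IOException {
>     byte[] lastChecksum = new byte[checksumSize];
>     // try-with-resources closes the RandomAccessFile even if seek() or
>     // read() throws, so no descriptor is left waiting for the GC.
>     // read() may return fewer bytes than requested; the original snippet
>     // ignores the return value, and that behavior is kept here for fidelity.
>     try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
>       raf.seek(offsetInChecksum);
>       raf.read(lastChecksum, 0, checksumSize);
>     }
>     return lastChecksum;
>   }
> }
> {code}
> Compared with an explicit try/finally block, try-with-resources keeps the
> close path correct even when seek() throws before the read completes.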