[
https://issues.apache.org/jira/browse/HDFS-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736787#comment-15736787
]
Hudson commented on HDFS-11229:
-------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10979 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/10979/])
HDFS-11229. HDFS-11056 failed to close meta file. Contributed by Wei-Chiu Chuang. (weichiu: rev
2a28e8cf0469a373a99011f0fa540474e60528c8)
* (edit)
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
> HDFS-11056 failed to close meta file
> ------------------------------------
>
> Key: HDFS-11229
> URL: https://issues.apache.org/jira/browse/HDFS-11229
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.4, 3.0.0-alpha2
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Blocker
> Attachments: HDFS-11229.001.patch, HDFS-11229.branch-2.patch
>
>
> The following code fails to close the meta file after reading from it.
> {code:title=FsVolumeImpl#loadLastPartialChunkChecksum}
> RandomAccessFile raf = new RandomAccessFile(metaFile, "r");
> raf.seek(offsetInChecksum);
> raf.read(lastChecksum, 0, checksumSize);
> return lastChecksum;
> {code}
> This must be fixed because every append operation runs this code.
> Without an explicit close, the number of open file descriptors can reach the
> system limit before the leaked RandomAccessFile objects are garbage collected.