[ https://issues.apache.org/jira/browse/HDFS-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277056#comment-14277056 ]
Hudson commented on HDFS-7570:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk #2024 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2024/])
HDFS-7570. SecondaryNameNode need twice memory when calling
reloadFromImageFile. Contributed by zhaoyunjiong. (cnauroth: rev
85aec75ce53445e1abf840076d2e10f1e3c6d69b)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> DataXceiver could leak FileDescriptor
> -------------------------------------
>
> Key: HDFS-7570
> URL: https://issues.apache.org/jira/browse/HDFS-7570
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Juan Yu
>
> DataXceiver doesn't always close its input stream. This can leak file descriptors
> and, over time, cause the process to exceed its FD limit.
> {code}
> finally {
>   if (LOG.isDebugEnabled()) {
>     LOG.debug(datanode.getDisplayName() + ":Number of active connections is: "
>         + datanode.getXceiverCount());
>   }
>   updateCurrentThreadName("Cleaning up");
>   if (peer != null) {
>     dataXceiverServer.closePeer(peer);
>     IOUtils.closeStream(in);
>   }
> }
> {code}
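> The leak is in the guard: IOUtils.closeStream(in) only runs when peer is non-null,
> so any path that reaches the finally block with a null peer leaves the stream (and
> its FD) open. A minimal sketch of one way to plug it, assuming the same surrounding
> cleanup code (this is an illustration, not necessarily the committed patch): close
> the stream unconditionally, outside the null-check. IOUtils.closeStream() already
> tolerates a null argument and swallows IOException, so no extra guards are needed.
> {code}
> finally {
>   if (LOG.isDebugEnabled()) {
>     LOG.debug(datanode.getDisplayName() + ":Number of active connections is: "
>         + datanode.getXceiverCount());
>   }
>   updateCurrentThreadName("Cleaning up");
>   // Close the stream first, unconditionally, so the FD is released
>   // even when peer is null; closeStream() is null-safe and quiet.
>   IOUtils.closeStream(in);
>   if (peer != null) {
>     dataXceiverServer.closePeer(peer);
>   }
> }
> {code}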