[ https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiao Chen resolved HADOOP-13264.
--------------------------------
Resolution: Duplicate
I'm closing this as a duplicate of HDFS-10549, since [~linyiqun] is working on it there
and the change is in HDFS.
Thanks [~sebyonthenet] and all for the work here; let's follow up on HDFS-10549.
> Hadoop HDFS - DFSOutputStream close method fails to clean up resources when
> no HDFS datanodes are accessible
> ----------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-13264
> URL: https://issues.apache.org/jira/browse/HADOOP-13264
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.2
> Reporter: Seb Mo
>
> Using:
> hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java
> The close method fails when the client cannot connect to any datanodes. When
> the same DistributedFileSystem is reused in the same JVM and none of the
> datanodes can be accessed, this causes a memory leak because the
> DFSClient#filesBeingWritten map is never cleared afterwards.
> See the test program provided by [~sebyonthenet] in the comments below.
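This is not the reporter's attached test program (see the JIRA comments for that), but a rough sketch of the scenario described above, assuming a placeholder NameNode URI (hdfs://namenode:8020) whose DataNodes are all unreachable: the same FileSystem instance is reused, each create/write/close attempt fails, and per this report the failed stream is never removed from DFSClient#filesBeingWritten, so the map keeps growing.

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsCloseLeakRepro {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Reuse one FileSystem (and hence one DFSClient) for every attempt, as in the report.
    // "hdfs://namenode:8020" is a placeholder NameNode URI.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

    for (int i = 0; i < 1000; i++) {
      FSDataOutputStream out = null;
      try {
        // create() only talks to the NameNode, so it can succeed even with
        // every DataNode down; written data is buffered on the client side.
        out = fs.create(new Path("/tmp/leak-test-" + i));
        out.write(1);
      } catch (IOException e) {
        System.err.println("write failed: " + e);
      } finally {
        if (out != null) {
          try {
            // close() tries to flush to a DataNode pipeline and throws when
            // none are reachable. Per this report, the failed stream is then
            // not removed from DFSClient#filesBeingWritten, so the map grows
            // on every iteration and the client leaks memory.
            out.close();
          } catch (IOException e) {
            System.err.println("close failed: " + e);
          }
        }
      }
    }
  }
}
{code}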