[ https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seb Mo updated HADOOP-13264:
----------------------------
    Comment: was deleted

(was: Thanks [~kihwal]. 

The DFSOutputStream#close() -> closeImpl() -> flushInternal() -> checkClosed() 
call still throws lastException.get(), so back up the stack in 
DFSOutputStream#close, dfsClient.endFileLease(fileId) still does not get 
called because of the exception thrown in checkClosed.

Just to make sure, I've synced the 2.7 branch and built the latest 2.7.3 on my 
box; re-running my test still shows the problem, and filesBeingWritten still 
keeps a reference to the stream that was not closed. )
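
To illustrate the flow described in that comment, here is a minimal, self-contained sketch (not the actual DFSOutputStream source; the class and names below are toy stand-ins for the ones mentioned above) showing how an exception from the flush path skips the map cleanup and leaves an entry behind:

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the close path: the tracking entry is only removed after a
// flush that can throw, mirroring how dfsClient.endFileLease(fileId) is
// skipped when checkClosed() rethrows lastException.get().
public class LeakyCloseSketch {

  // Stands in for DFSClient#filesBeingWritten.
  static final Map<Long, Closeable> filesBeingWritten = new ConcurrentHashMap<>();

  static class LeakyStream implements Closeable {
    final long fileId;

    LeakyStream(long fileId) {
      this.fileId = fileId;
      filesBeingWritten.put(fileId, this);
    }

    void flushInternal() throws IOException {
      // Models the flush failing because no datanode is reachable.
      throw new IOException("no datanodes reachable");
    }

    @Override
    public void close() throws IOException {
      flushInternal();                   // throws, so the next line never runs
      filesBeingWritten.remove(fileId);  // stands in for endFileLease(fileId)
    }
  }

  public static void main(String[] args) {
    LeakyStream s = new LeakyStream(1L);
    try {
      s.close();
    } catch (IOException expected) {
      // Swallowed by the caller, as in the scenario above.
    }
    // Prints 1: the entry was never removed, which is the leak.
    System.out.println("entries still tracked: " + filesBeingWritten.size());
  }
}
{code}

Wrapping the flush in try/finally so that the removal (endFileLease) runs unconditionally would be one way to avoid leaving the entry behind when close() fails.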

> Hadoop HDFS - DFSOutputStream close method fails to clean up resources in 
> case no hdfs datanodes are accessible 
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-13264
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13264
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.7.2
>            Reporter: Seb Mo
>
> Using:
> hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java
> The close method fails when the client cannot connect to any datanode. When 
> the same DistributedFileSystem is re-used in the same JVM and none of the 
> datanodes can be reached, this causes a memory leak: the 
> DFSClient#filesBeingWritten map is never cleared afterwards.
> See test program provided by [~sebyonthenet] in comments below.
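
Not the reporter's test program, but a sketch of the kind of reproducer described (the URI and paths are hypothetical, and it assumes the namenode is reachable while every datanode is down, so each close() fails during pipeline setup):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseLeakRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // hypothetical
    // Re-use one FileSystem (hence one DFSClient) for every attempt.
    FileSystem fs = FileSystem.get(conf);

    for (int i = 0; i < 1000; i++) {
      Path p = new Path("/tmp/close-leak-" + i);
      try {
        FSDataOutputStream out = fs.create(p);
        out.write(1);  // buffer a byte so close() has something to flush
        out.close();   // fails here when no datanode can be reached
      } catch (IOException e) {
        // Each failed close leaves an entry in DFSClient#filesBeingWritten,
        // which can be observed in a heap dump of this still-open client.
      }
    }
  }
}
{code}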


