[ https://issues.apache.org/jira/browse/HADOOP-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13461271#comment-13461271 ]
Hudson commented on HADOOP-7256:
--------------------------------
Integrated in Hadoop-Mapreduce-trunk-Commit #2778 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2778/])
HADOOP-7256. Resource leak during failure scenario of closing of resources.
Contributed by Ramkrishna S. Vasudevan. (harsh) (Revision 1388893)
Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1388893
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java
> Resource leak during failure scenario of closing of resources.
> ---------------------------------------------------------------
>
> Key: HADOOP-7256
> URL: https://issues.apache.org/jira/browse/HADOOP-7256
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.20.2
> Reporter: ramkrishna.s.vasudevan
> Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-7256.patch, HADOOP-7256-patch-1.patch,
> HADOOP-7256-patch-2.patch
>
> Original Estimate: 8h
> Remaining Estimate: 8h
>
> Problem Statement:
> ===============
> There is a chance of a resource leak when streams do not get closed.
> Take the case where, after copying data, we try to close the input and output
> streams, followed by closing the socket.
> If an exception occurs while closing the input stream (for example, a runtime
> exception), the subsequent operations of closing the output stream and the
> socket may not happen, and those resources can leak.
> Scenario
> =======
> During long runs of MapReduce jobs, the copyFromLocalFile() API gets called.
> Here we found some exceptions happening. As a result, we saw the lsof count
> rising, pointing to a file-descriptor leak.
> Solution:
> =======
> When closing any resource, catch RuntimeException as well, rather than
> catching IOException alone.
> Additionally, there are places where we try to close a resource in the catch
> block. If that close fails, we simply throw and exit the current flow.
> To avoid this, we can carry out the close operation in the finally block.
> Probable reasons for getting RuntimeExceptions:
> =====================================
> We may get a RuntimeException from customised Hadoop streams, for example from
> FSDataOutputStream.close(), so it is better to handle RuntimeExceptions as well.
>
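For illustration, here is a minimal Java sketch of the failure mode described above and of the defensive close pattern the issue proposes. It is not the committed patch (the actual change landed in org.apache.hadoop.io.IOUtils, per the file list above); the class and method names used here (CloseSketch, fragileClose, closeQuietly, copyAndClose) are hypothetical.

    import java.io.Closeable;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class CloseSketch {

      // Fragile pattern described in the issue: if in.close() throws a
      // RuntimeException, out.close() and socket.close() are never reached,
      // so those descriptors leak.
      static void fragileClose(InputStream in, OutputStream out, Socket socket)
          throws IOException {
        in.close();
        out.close();
        socket.close();
      }

      // Illustrative quiet-close helper in the spirit of the fix: swallow both
      // IOException and RuntimeException so one failing close cannot prevent
      // the remaining closes. Only a sketch, not the IOUtils implementation.
      static void closeQuietly(Closeable c) {
        if (c == null) {
          return;
        }
        try {
          c.close();
        } catch (IOException | RuntimeException e) {
          // Log and continue; never let a failed close abort the cleanup path.
        }
      }

      // Cleanup carried out in a finally block so it runs even when the copy
      // itself fails, and every resource gets its own close attempt.
      static void copyAndClose(InputStream in, OutputStream out, Socket socket)
          throws IOException {
        try {
          byte[] buf = new byte[4096];
          int n;
          while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
          }
        } finally {
          closeQuietly(in);
          closeQuietly(out);
          closeQuietly(socket); // java.net.Socket implements Closeable
        }
      }
    }

The point of the pattern is that each resource gets an independent close attempt inside a finally block, so neither an exception from the copy nor from an earlier close can skip the remaining closes.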
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira