[
https://issues.apache.org/jira/browse/HADOOP-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035447#comment-13035447
]
Hadoop QA commented on HADOOP-7256:
-----------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12479607/HADOOP-7256-patch-1.patch
against trunk revision 1104426.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9)
warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
+1 core tests. The patch passed core unit tests.
+1 system test framework. The patch passed system test framework compile.
Test results:
https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/468//testReport/
Findbugs warnings:
https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/468//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output:
https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/468//console
This message is automatically generated.
> Resource leak during failure scenario of closing of resources.
> ---------------------------------------------------------------
>
> Key: HADOOP-7256
> URL: https://issues.apache.org/jira/browse/HADOOP-7256
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.20.2, 0.21.0
> Reporter: ramkrishna.s.vasudevan
> Priority: Minor
> Fix For: 0.23.0
>
> Attachments: HADOOP-7256-patch-1.patch
>
> Original Estimate: 8h
> Remaining Estimate: 8h
>
> Problem Statement:
> ===============
> There is a chance of a resource leak where streams are not closed.
> Take the case where, after copying data, we try to close the input and
> output streams followed by the socket.
> If an exception occurs while closing the input stream (e.g. a runtime
> exception), the subsequent closing of the output stream and the socket may
> never happen, leaking those resources.
> Scenario
> =======
> During long runs of map reduce jobs, the copyFromLocalFile() API is called
> repeatedly.
> We observed exceptions here, and as a result the lsof count kept rising,
> indicating a file-descriptor leak.
> Solution:
> =======
> When closing any resource, catch RuntimeException in addition to
> IOException, rather than catching IOException alone.
> Additionally, there are places where we close a resource inside a catch
> block.
> If that close fails, the exception propagates and we exit the current flow.
> To avoid this, the close operation can be carried out in the finally
> block instead.
> Probable reasons for getting RuntimeExceptions:
> =====================================
> Customised Hadoop streams such as FSDataOutputStream may throw a runtime
> exception from close(), so it is better to handle RuntimeException as well.
>
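The close-handling described in the Solution section above can be sketched as a small helper. This is an illustrative sketch, not the attached patch: the class and method names (CloseQuietlySketch, closeQuietly) are hypothetical, chosen only to show the idea of attempting every close and catching RuntimeException alongside IOException.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed fix: close each resource
// independently, catching RuntimeException as well as IOException, so a
// failing close cannot prevent the remaining resources from being closed.
public class CloseQuietlySketch {

    /** Attempts to close every resource; never throws. */
    public static void closeQuietly(Closeable... resources) {
        for (Closeable r : resources) {
            if (r == null) {
                continue;
            }
            try {
                r.close();
            } catch (IOException | RuntimeException e) {
                // Log and continue instead of aborting the close sequence.
                System.err.println("Exception while closing resource: " + e);
            }
        }
    }

    public static void main(String[] args) {
        List<String> closed = new ArrayList<>();
        // The first close throws a RuntimeException, mimicking a stream
        // like FSDataOutputStream failing at close time.
        Closeable in = () -> { throw new RuntimeException("close failed"); };
        Closeable out = () -> closed.add("out");
        Closeable socket = () -> closed.add("socket");
        // Despite the failure on 'in', the other two are still closed.
        closeQuietly(in, out, socket);
        System.out.println(closed);  // prints [out, socket]
    }
}
```

Callers would invoke such a helper from a finally block so the closes run on both the success and failure paths, which is the second half of the proposed solution.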
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira