[ 
https://issues.apache.org/jira/browse/HDFS-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13041098#comment-13041098
 ] 

ramkrishna.s.vasudevan commented on HDFS-1990:
----------------------------------------------

In BlockReceiver.java, in the close() method:

{noformat}
try {
  checksumOut.flush();
  if (datanode.syncOnClose && (cout instanceof FileOutputStream)) {
    ((FileOutputStream) cout).getChannel().force(true);
  }
  checksumOut.close();
} catch (IOException e) {
  ioe = e;
}
{noformat}
Here the stream is closed inside the try block, so if flush() or force() throws, close() is never reached and the stream may leak. The same pattern appears in

{noformat}
try {
  if (out != null) {
    out.flush();
    if (datanode.syncOnClose && (out instanceof FileOutputStream)) {
      ((FileOutputStream) out).getChannel().force(true);
    }
    out.close();
  }
} catch (IOException e) {
  ioe = e;
}
{noformat}
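A leak-safe variant would move close() into a finally block so it runs even when flush() or force() fails. The sketch below illustrates the idea in a self-contained form; SafeClose and closeStream are hypothetical names for this illustration, and the syncOnClose flag stands in for datanode.syncOnClose from the snippets above (this is not the actual HDFS patch):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SafeClose {
  // Flush (and optionally force) the stream, then close it.
  // close() sits in a finally block, so it is attempted even if
  // flush()/force() throws -- the fix for the leak described above.
  static IOException closeStream(OutputStream out, boolean syncOnClose) {
    IOException ioe = null;
    try {
      out.flush();
      if (syncOnClose && (out instanceof FileOutputStream)) {
        ((FileOutputStream) out).getChannel().force(true);
      }
    } catch (IOException e) {
      ioe = e;
    } finally {
      try {
        out.close(); // runs regardless of earlier failures
      } catch (IOException e) {
        if (ioe == null) {
          ioe = e; // keep the first exception, as the original code does
        }
      }
    }
    return ioe;
  }

  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("safeclose", ".tmp");
    f.deleteOnExit();
    FileOutputStream fos = new FileOutputStream(f);
    fos.write("checksum".getBytes());
    IOException err = closeStream(fos, true);
    System.out.println(err == null ? "closed cleanly" : "error: " + err);
  }
}
```

On Java 7+ the same guarantee can come from try-with-resources, but a finally block matches the explicit ioe-capture style of the code quoted above.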

> Resource leaks in HDFS
> ----------------------
>
>                 Key: HDFS-1990
>                 URL: https://issues.apache.org/jira/browse/HDFS-1990
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node, name-node
>    Affects Versions: 0.23.0
>            Reporter: ramkrishna.s.vasudevan
>            Priority: Minor
>             Fix For: 0.23.0
>
>
> Possible resource leakage in HDFS.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
