[
https://issues.apache.org/jira/browse/HADOOP-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12616726#action_12616726
]
Raghu Angadi commented on HADOOP-3592:
--------------------------------------
See HADOOP-2926 for more discussion about close()/closeStream() etc.
> org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the
> destination can't be opened
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-3592
> URL: https://issues.apache.org/jira/browse/HADOOP-3592
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.19.0
> Reporter: Steve Loughran
> Assignee: Bill de hOra
> Priority: Minor
> Fix For: 0.19.0
>
> Attachments: HADOOP-3592.patch, HADOOP-3592.patch
>
>
> FileUtil.copy() relies on IOUtils.copyBytes() to close the incoming streams,
> which it does. Normally.
> But if dstFS.create() throws any kind of IOException, then the InputStream
> "in", created on the line above, is never closed, and is therefore leaked.
> InputStream in = srcFS.open(src);
> OutputStream out = dstFS.create(dst, overwrite);
> IOUtils.copyBytes(in, out, conf, true);
> A try/catch wrapper around the open operations could close the streams if
> an exception is thrown at that point in the copy process; a rough sketch
> follows below.
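A minimal sketch of that wrapper, in the spirit of the description above. This is not the attached HADOOP-3592.patch; the variable names mirror the snippet quoted above, and the use of IOUtils.closeStream() (discussed in HADOOP-2926) for cleanup is an assumption for illustration:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.io.IOUtils;

    // Inside FileUtil.copy(): open both streams under try/catch so a failed
    // dstFS.create() (or a failed copy) cannot leak the already-open "in".
    InputStream in = null;
    OutputStream out = null;
    try {
      in = srcFS.open(src);
      out = dstFS.create(dst, overwrite);
      // On the normal path, copyBytes(..., true) closes both streams itself.
      IOUtils.copyBytes(in, out, conf, true);
    } catch (IOException e) {
      // Best-effort cleanup of whichever streams were opened before the
      // failure, then rethrow the original exception.
      IOUtils.closeStream(out);
      IOUtils.closeStream(in);
      throw e;
    }

Since IOUtils.closeStream() ignores nulls and swallows close() failures, the catch block is safe even when create() failed before "out" was assigned or copyBytes() already closed the streams before throwing.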