[https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293307#comment-14293307]
Hadoop QA commented on HADOOP-10309:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12625989/HADOOP-10309.patch
against trunk revision 6f9fe76.
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output:
https://builds.apache.org/job/PreCommit-HADOOP-Build/5505//console
This message is automatically generated.
> S3 block filesystem should more aggressively delete temporary files
> -------------------------------------------------------------------
>
> Key: HADOOP-10309
> URL: https://issues.apache.org/jira/browse/HADOOP-10309
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Affects Versions: 2.6.0
> Reporter: Joe Kelley
> Priority: Minor
> Attachments: HADOOP-10309.patch
>
>
> The S3 FileSystem reading implementation downloads block files into a
> configurable temporary directory. deleteOnExit() is called on these files, so
> they are deleted when the JVM exits.
> However, JVM reuse can keep a single JVM alive for a very long time, so
> these temporary files accumulate indefinitely and, in the worst case, fill
> up the local directory.
> After a block file has been read, there is no reason to keep it around. It
> should be deleted.
> Writing to the S3 FileSystem already behaves this way: after a temporary
> block file is written and uploaded to S3, it is deleted immediately rather
> than waiting for the JVM to exit.
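The fix the description asks for can be sketched as a stream wrapper that deletes the local block file as soon as the stream over it is closed, instead of relying on File.deleteOnExit(). This is an illustrative sketch only; the class and method names below are hypothetical and are not Hadoop's actual fs/s3 internals.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlockFileCleanup {

    // Hypothetical wrapper: reads from a downloaded temp block file and
    // deletes it on close(), so cleanup does not wait for JVM shutdown.
    static class SelfDeletingBlockStream extends FilterInputStream {
        private final Path blockFile;

        SelfDeletingBlockStream(Path blockFile) throws IOException {
            super(Files.newInputStream(blockFile));
            this.blockFile = blockFile;
        }

        @Override
        public void close() throws IOException {
            try {
                super.close();
            } finally {
                // Delete immediately after the block has been consumed;
                // deleteOnExit() would leave the file behind in a reused JVM.
                Files.deleteIfExists(blockFile);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a downloaded S3 block in the configurable temp directory.
        Path tmp = Files.createTempFile("s3-block-", ".tmp");
        Files.write(tmp, "block data".getBytes());

        try (InputStream in = new SelfDeletingBlockStream(tmp)) {
            in.readAllBytes(); // consume the block
        }

        // The temp file is gone as soon as the stream is closed.
        System.out.println("exists after close: " + Files.exists(tmp));
    }
}
```

The try-with-resources pattern above mirrors the write path's existing behavior noted in the description: cleanup happens at the end of the operation, not at JVM exit.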
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)