[ https://issues.apache.org/jira/browse/HADOOP-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278448#comment-14278448 ]

Thomas Demoor commented on HADOOP-11463:
----------------------------------------

[~ste...@apache.org] shutDown terminates immediately: in-progress transfers are 
abandoned. For regular uploads, S3 simply discards the partial object. For 
multi-part uploads, however, parts that were completely uploaded are stored 
(and paid for) indefinitely, and the multi-part upload remains "in progress". 
The purging functionality alleviates this: if fs.s3a.multipart.purge == true, 
the constructor of S3AFileSystem aborts all in-progress multi-part uploads 
older than fs.s3a.multipart.purge.age seconds (the age threshold protects 
"active" multi-part uploads).
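
For reference, the purge amounts to roughly the following (a minimal sketch: 
the config key names come from the paragraph above, but the surrounding class 
and the shared TransferManager argument are assumptions, not the actual 
S3AFileSystem source):

    import java.util.Date;
    import org.apache.hadoop.conf.Configuration;
    import com.amazonaws.services.s3.transfer.TransferManager;

    class MultipartPurgeSketch {
      // Hypothetical helper; in S3AFileSystem this logic runs in the constructor.
      void purgeOldUploads(Configuration conf, TransferManager transfers,
          String bucket) {
        if (conf.getBoolean("fs.s3a.multipart.purge", false)) {
          long ageSec = conf.getLong("fs.s3a.multipart.purge.age", 86400);
          // Abort every multipart upload in the bucket initiated before the
          // cut-off; uploads younger than the purge age are left untouched.
          Date cutoff = new Date(System.currentTimeMillis() - ageSec * 1000L);
          transfers.abortMultipartUploads(bucket, cutoff);
        }
      }
    }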

[~tedyu] Yeah, that was to be expected. See point 2 in the description above. 

> Replace method-local TransferManager object with S3AFileSystem#transfers
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-11463
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11463
>             Project: Hadoop Common
>          Issue Type: Task
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>         Attachments: hadoop-11463-001.patch
>
>
> This is continuation of HADOOP-11446.
> The following changes are made according to Thomas Demoor's comments:
> 1. Replace method-local TransferManager object with S3AFileSystem#transfers
> 2. Do not shut down the TransferManager after purging existing multipart 
> uploads - otherwise the current transfer is unable to proceed
> 3. Shut down the TransferManager instance in the close() method of S3AFileSystem
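
Taken together, points 1-3 above amount to roughly the following lifecycle (a 
hedged sketch: the field name "transfers" matches the issue title, everything 
else is assumed for illustration):

    import java.io.Closeable;
    import java.io.IOException;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.transfer.TransferManager;

    class TransferManagerLifecycleSketch implements Closeable {
      // 1. one TransferManager shared by all transfer methods,
      //    instead of a new method-local instance per operation
      private final TransferManager transfers;

      TransferManagerLifecycleSketch(AmazonS3 s3) {
        transfers = new TransferManager(s3);
        // 2. any multipart purge performed here must not call
        //    transfers.shutdownNow(), or subsequent transfers through
        //    this shared instance would fail
      }

      @Override
      public void close() throws IOException {
        // 3. release the TransferManager (and its worker threads)
        //    exactly once, when the filesystem is closed
        transfers.shutdownNow();
      }
    }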


