[
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-13560:
------------------------------------
Attachment: HADOOP-13560-branch-2-004.patch
Patch 004
* Fixed the name of the fs.s3a.block.output option in core-default and the
docs (a config sketch follows this list). Thanks Rajesh!
* More attempts at managing the close() operation rigorously. No evidence
that this is the cause of the problem Rajesh saw, though.
* Rearranged the layout of code in S3ADataBlocks so that associated classes
are adjacent.
* Retry on multipart commit, with sleep statements between retries (sketched
below).
* Gauges of active block uploads wired up (sketched below).
* More debug statements.
* New progress log for logging progress at debug level in s3a (sketched
below). Why? Because logging an event every 8 KB gets too chatty when
debugging many-MB uploads.
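
A minimal sketch of switching the renamed option on programmatically; the
default value shown is an assumption, not taken from core-default.xml:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class EnableBlockOutput {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // the option name the patch standardizes on in core-default and the docs;
    // when true, S3A writes go through S3ABlockOutputStream
    conf.setBoolean("fs.s3a.block.output", true);
    System.out.println("fs.s3a.block.output = "
        + conf.getBoolean("fs.s3a.block.output", false));
  }
}
{code}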
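
A sketch of the retry-with-sleep around the multipart commit, against the
AWS SDK v1 API; the retry count, sleep interval, and the method name
commitWithRetry() are illustrative assumptions, not the patch's actual values:

{code:java}
import java.io.IOException;
import java.io.InterruptedIOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MultipartCommitter {
  private static final Logger LOG =
      LoggerFactory.getLogger(MultipartCommitter.class);
  private static final int MAX_RETRIES = 3;        // assumed limit
  private static final long RETRY_SLEEP_MS = 500;  // assumed base interval

  private final AmazonS3 s3Client;

  MultipartCommitter(AmazonS3 s3Client) {
    this.s3Client = s3Client;
  }

  CompleteMultipartUploadResult commitWithRetry(
      CompleteMultipartUploadRequest request) throws IOException {
    AmazonClientException lastException = null;
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
      try {
        return s3Client.completeMultipartUpload(request);
      } catch (AmazonClientException e) {
        lastException = e;
        LOG.debug("Multipart commit attempt {} failed", attempt, e);
        try {
          // sleep between retries so transient S3 failures can clear
          Thread.sleep(RETRY_SLEEP_MS * attempt);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new InterruptedIOException("Interrupted retrying commit");
        }
      }
    }
    throw new IOException("Multipart commit failed after " + MAX_RETRIES
        + " attempts", lastException);
  }
}
{code}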
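
A sketch of wiring up such a gauge through the Hadoop metrics2 API; the
gauge name and the incr/decr call sites are assumptions:

{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

class BlockUploadGauges {
  private final MetricsRegistry registry = new MetricsRegistry("S3AFileSystem");

  // current number of block uploads in flight; incremented when an upload
  // starts, decremented when it completes or fails
  private final MutableGaugeLong activeBlockUploads =
      registry.newGauge("stream_write_block_uploads_active",
          "Number of block uploads currently in progress", 0L);

  void blockUploadStarted() {
    activeBlockUploads.incr();
  }

  void blockUploadCompleted() {
    activeBlockUploads.decr();
  }
}
{code}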
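
A sketch of the dedicated progress logger; the logger name and the 8 MB
reporting interval are assumptions, chosen to illustrate the "quieter than
one event per 8 KB" goal:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class UploadProgressLogging {
  // separate logger so progress chatter can be enabled (or silenced)
  // independently of the main S3A debug logging
  private static final Logger PROGRESS =
      LoggerFactory.getLogger("org.apache.hadoop.fs.s3a.S3AFileSystem.Progress");

  private static final long LOG_INTERVAL_BYTES = 8L * 1024 * 1024; // 8 MB

  private long bytesTransferred;
  private long lastLoggedBytes;

  /** Called for every transfer event (typically every 8 KB). */
  void progressChanged(long bytes) {
    bytesTransferred += bytes;
    // report at intervals rather than on every event
    if (PROGRESS.isDebugEnabled()
        && bytesTransferred - lastLoggedBytes >= LOG_INTERVAL_BYTES) {
      PROGRESS.debug("Transferred {} bytes", bytesTransferred);
      lastLoggedBytes = bytesTransferred;
    }
  }
}
{code}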
> S3ABlockOutputStream to support huge (many GB) file writes
> ----------------------------------------------------------
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch,
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch,
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for the
> very large commit operations of committers that use rename.
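
A minimal sketch of the rename-and-verify test described above, assuming a
JUnit 4 S3A integration-test harness (AbstractS3ATestBase with getFileSystem()
and path() helpers); createHugeFile() and getObjectMetadata() are hypothetical
helpers, and the content type is just one example header to check:

{code:java}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import com.amazonaws.services.s3.model.ObjectMetadata;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class ITestS3AHugeFileRename extends AbstractS3ATestBase {

  @Test
  public void testRenameHugeFile() throws Exception {
    FileSystem fs = getFileSystem();        // harness helper (assumed)
    Path src = path("src/huge.bin");
    Path dest = path("dest/huge.bin");
    long len = createHugeFile(fs, src);     // assumed: writes a many-GB file
    assertTrue("rename returned false", fs.rename(src, dest));
    // 1. the copy really happened: full length present at the destination
    assertEquals("wrong length after rename",
        len, fs.getFileStatus(dest).getLen());
    // 2. metadata made it over the multipart copy
    ObjectMetadata md = getObjectMetadata(dest);  // assumed helper
    assertEquals("content type lost in copy",
        "application/octet-stream", md.getContentType());
  }
}
{code}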