[
https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691825#comment-17691825
]
Mukund Thakur commented on HADOOP-18637:
----------------------------------------
As discussed offline, the following changes will be required:
* Introduce a new config to disable multipart uploads everywhere and
support uploading a large file as a single PUT instead.
* Raise an error in the public S3AFS.createMultipartUploader when the
config disables multipart uploads.
* Raise an error in the staging committer when the config disables
multipart uploads.
* Raise an error in the magic committer when the config disables
multipart uploads.
* Raise an error in the write operations helper when the config disables
multipart uploads.
* Add a hasCapability(isMultiPartAllowed, path) probe that uses the config.
* If multipart upload is disabled, we can only upload via disk buffering
(DiskBlocks); add a check for this. A rough sketch of the gating follows.
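
To make the gating concrete, here is a minimal Java sketch of how one
guard class could back all of the checks above. The config key
fs.s3a.multipart.uploads.enabled, the capability string, and the
MultipartUploadGate class/method names are illustrative assumptions, not
the final patch; only fs.s3a.fast.upload.buffer is an existing S3A option.

{code:java}
// Minimal sketch only: config key, capability string, and class/method
// names are assumptions for illustration, not the final patch.
import org.apache.hadoop.conf.Configuration;

public class MultipartUploadGate {

  // Hypothetical config key; final name to be decided in the patch.
  public static final String MULTIPART_UPLOADS_ENABLED =
      "fs.s3a.multipart.uploads.enabled";
  public static final boolean DEFAULT_MULTIPART_UPLOADS_ENABLED = true;

  // Hypothetical capability string for the hasCapability() probe.
  public static final String CAPABILITY_MULTIPART_ALLOWED =
      "fs.s3a.capability.multipart.uploads.enabled";

  private final boolean multipartEnabled;

  public MultipartUploadGate(Configuration conf) {
    multipartEnabled = conf.getBoolean(
        MULTIPART_UPLOADS_ENABLED, DEFAULT_MULTIPART_UPLOADS_ENABLED);

    // When multipart is disabled, only disk buffering can hold an
    // arbitrarily large single PUT; reject array/bytebuffer buffering.
    String buffer = conf.get("fs.s3a.fast.upload.buffer", "disk");
    if (!multipartEnabled && !"disk".equals(buffer)) {
      throw new IllegalArgumentException(
          "Multipart uploads are disabled; fs.s3a.fast.upload.buffer"
          + " must be \"disk\", but was \"" + buffer + "\"");
    }
  }

  /**
   * Guard to invoke from createMultipartUploader(), the staging and
   * magic committers, and the write operations helper before starting
   * any multipart operation.
   */
  public void checkMultipartAllowed() {
    if (!multipartEnabled) {
      throw new UnsupportedOperationException(
          "Multipart uploads are disabled via " + MULTIPART_UPLOADS_ENABLED);
    }
  }

  /** Capability probe reflecting the config. */
  public boolean hasCapability(String capability) {
    return CAPABILITY_MULTIPART_ALLOWED.equals(capability)
        && multipartEnabled;
  }
}
{code}

Centralizing the check in one place would keep the error message and the
capability probe consistent across createMultipartUploader, both
committers, and the write operations helper.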
> S3A to support upload of files greater than 2 GB using DiskBlocks
> -----------------------------------------------------------------
>
> Key: HADOOP-18637
> URL: https://issues.apache.org/jira/browse/HADOOP-18637
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Harshit Gupta
> Assignee: Harshit Gupta
> Priority: Major
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB.
> Currently, the maximum upload size of a single block is ~2 GB.
> cc: [~mthakur] [[email protected]] [~mehakmeet]