[
https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691517#comment-17691517
]
Harshit Gupta commented on HADOOP-18637:
----------------------------------------
{{S3ABlockOutputStream}} creates three different types of {{DataBlock}} depending
on {{fs.s3a.fast.upload.buffer}}, which defaults to disk. For the disk case we can
create an empty file of the required size and cap the in-memory buffer size at
{{Integer.MAX_VALUE}}. *For the other buffer types, should we deny uploads larger
than 2 GB, or should we add the support there as well?* For example,
{{ByteArrayBlock}} writes directly to an {{S3AByteArrayOutputStream}}, which is
again initialized with {{Integer.MAX_VALUE}}; the same goes for
{{ByteBufferBlock}}. One thing to make sure of here is that a single call can
never write more than {{Integer.MAX_VALUE}} bytes, since the calling method in
{{S3ABlockOutputStream}} has the signature
{{public synchronized void write(byte[] source, int offset, int len)}}.
*This is just for compatibility with non-AWS S3 stores.*
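To make the split concrete, here is a minimal sketch in plain JDK Java. The class
and method names ({{BlockLimitSketch}}, {{blockSizeLimit}}, {{preSizedBlockFile}})
are hypothetical stand-ins, not the real {{S3ADataBlocks}} internals; only the
buffer-type strings mirror the actual {{fs.s3a.fast.upload.buffer}} values. It
shows how the per-block limit could stay a {{long}} for the disk case while the
in-memory buffer types are rejected beyond {{Integer.MAX_VALUE}}:
{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * Illustrative sketch only -- simplified stand-ins, not the real
 * S3ADataBlocks internals. Disk-backed blocks can be pre-sized with a
 * long (file lengths are 64-bit), while memory-backed blocks (byte
 * array / ByteBuffer) are inherently capped at Integer.MAX_VALUE.
 */
final class BlockLimitSketch {

  /** Any in-memory buffer indexes by int, so this is its hard ceiling. */
  static final long MEMORY_BLOCK_LIMIT = Integer.MAX_VALUE;

  /**
   * Hypothetical helper mirroring the fs.s3a.fast.upload.buffer values
   * ("disk", "array", "bytebuffer"): returns the usable per-block limit,
   * rejecting over-sized requests for the in-memory buffer types.
   */
  static long blockSizeLimit(String bufferType, long requested)
      throws IOException {
    if ("disk".equals(bufferType)) {
      return requested;        // file-backed: long-sized limits are fine
    }
    if (requested > MEMORY_BLOCK_LIMIT) {
      throw new IOException("Requested block size " + requested
          + " exceeds in-memory buffer limit " + MEMORY_BLOCK_LIMIT);
    }
    return requested;
  }

  /** Pre-create an empty (sparse) file of the target size for a disk block. */
  static File preSizedBlockFile(File dir, long size) throws IOException {
    File block = File.createTempFile("s3ablock", ".tmp", dir);
    try (RandomAccessFile raf = new RandomAccessFile(block, "rw")) {
      raf.setLength(size);     // accepts sizes well past 2 GB
    }
    return block;
  }
}
{code}
Because file lengths and offsets are 64-bit, only the in-memory buffer types
need the explicit cap; the disk path just pre-sizes the temp file.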
> S3A to support upload of files greater than 2 GB using DiskBlocks
> -----------------------------------------------------------------
>
> Key: HADOOP-18637
> URL: https://issues.apache.org/jira/browse/HADOOP-18637
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Harshit Gupta
> Assignee: Harshit Gupta
> Priority: Major
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB.
> Currently, the max upload size of a single block is ~2 GB.
> cc: [~mthakur] [[email protected]] [~mehakmeet]