[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Olson updated HADOOP-16900:
----------------------------------
    Description: If a written file size exceeds 10,000 * 
{{fs.s3a.multipart.size}}, the S3 object is corrupted by truncation, because 
the S3 API limits a multipart upload to a maximum of 10,000 parts and there is 
an apparent bug where exceeding that limit is not treated as a fatal error, so 
the multipart upload is still allowed to be marked as completed.  (was: If a 
written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, the S3 object is 
corrupted by truncation, because the S3 API limits a multipart upload to a 
maximum of 10,000 parts and there is an apparent bug where exceeding that 
limit is not treated as a fatal error.)

> Very large files can be truncated when written through S3AFileSystem
> --------------------------------------------------------------------
>
>                 Key: HADOOP-16900
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16900
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>            Reporter: Andrew Olson
>            Priority: Major
>              Labels: s3
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, the S3 
> object is corrupted by truncation, because the S3 API limits a multipart 
> upload to a maximum of 10,000 parts and there is an apparent bug where 
> exceeding that limit is not treated as a fatal error, so the multipart 
> upload is still allowed to be marked as completed.
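
To make the threshold concrete, here is a minimal sketch (not code from the 
S3A module) of the arithmetic behind the limit; the 64 MB part size is only an 
illustrative value for {{fs.s3a.multipart.size}}, not necessarily the default 
in any given release:
{code:java}
// Hypothetical illustration of the arithmetic behind HADOOP-16900,
// not code from the S3A module.
public class S3AMultipartLimitExample {

    // Hard S3 limit on the number of parts in one multipart upload.
    private static final long MAX_PARTS = 10_000L;

    /** Largest object size (in bytes) that fits within the part-count limit. */
    static long truncationThreshold(long partSizeBytes) {
        return MAX_PARTS * partSizeBytes;
    }

    public static void main(String[] args) {
        // Illustrative value for fs.s3a.multipart.size: 64 MB.
        long partSize = 64L * 1024 * 1024;
        long threshold = truncationThreshold(partSize);
        // 10,000 parts * 64 MB = 625 GiB.
        System.out.printf("Threshold: %d bytes (~%d GiB)%n",
                threshold, threshold >> 30);
        // Per the description above, writes beyond this size end up
        // truncated instead of failing the upload.
    }
}
{code}
With that illustrative part size the product is 671,088,640,000 bytes (625 
GiB), so any single object written through S3AFileSystem beyond that size 
would be silently truncated.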



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
