[jira] [Updated] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-16900:
-------------------------------------
    Fix Version/s: 3.3.1

> Very large files can be truncated when written through S3AFileSystem
> ---------------------------------------------------------------------
>
> Key: HADOOP-16900
> URL: https://issues.apache.org/jira/browse/HADOOP-16900
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.1
> Reporter: Andrew Olson
> Assignee: Mukund Thakur
> Priority: Major
> Labels: s3
> Fix For: 3.3.1, 3.4.0
>
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt
> truncation of the S3 object will occur as the maximum number of parts in a
> multipart upload is 10,000 as
> [specified|https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html] by
> the S3 API, and there is an apparent bug where this failure is not fatal
> allowing the multipart upload operation to be marked as successfully
> completed without being fully complete.
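[Editorial note, not part of the JIRA thread or the eventual patch: a minimal Java sketch of the arithmetic behind the failure mode described above. The 10,000-part cap is the S3 API limit cited in the issue; the 64 MB part size, class name, and method names are illustrative assumptions, with the real value coming from {{fs.s3a.multipart.size}}.]

{code:java}
// Illustrative sketch only -- not code from the HADOOP-16900 fix.
// S3 multipart uploads allow at most 10,000 parts, so a fixed part size
// can cover at most 10,000 * partSize bytes; writes beyond that are the
// region where the issue reports silent truncation.
public class MultipartSizeCheck {

    // Hard cap on parts per multipart upload, imposed by the S3 API.
    private static final long MAX_PARTS = 10_000L;

    // Largest object a fixed part size can cover before the part limit is hit.
    static long maxObjectSize(long partSizeBytes) {
        return MAX_PARTS * partSizeBytes;
    }

    // True if a planned write would require more than 10,000 parts.
    static boolean exceedsPartLimit(long plannedBytes, long partSizeBytes) {
        long partsNeeded = (plannedBytes + partSizeBytes - 1) / partSizeBytes;
        return partsNeeded > MAX_PARTS;
    }

    public static void main(String[] args) {
        // Hypothetical part size of 64 MB, chosen purely for illustration;
        // the effective value is whatever fs.s3a.multipart.size is set to.
        long partSize = 64L * 1024 * 1024;

        System.out.printf("Max object size at 64 MB parts: %d bytes (%d GB)%n",
                maxObjectSize(partSize), maxObjectSize(partSize) >> 30);

        long planned = 700L * 1024 * 1024 * 1024;   // a 700 GB write
        System.out.println("700 GB write exceeds the part limit: "
                + exceedsPartLimit(planned, partSize));
    }
}
{code}

With a 64 MB part size the cap works out to roughly 625 GB; any larger write needs more than 10,000 parts, which is exactly the situation the issue describes.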
[jira] [Updated] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Olson updated HADOOP-16900:
----------------------------------
    Description:
If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as [specified|https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html] by the S3 API, and there is an apparent bug where this failure is not fatal allowing the multipart upload operation to be marked as successfully completed without being fully complete.

    (was: If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as specific by the S3 API and there is an apparent bug where this failure is not fatal, and the multipart upload is allowed to be marked as completed.)

> Very large files can be truncated when written through S3AFileSystem
> ---------------------------------------------------------------------
>
> Key: HADOOP-16900
> URL: https://issues.apache.org/jira/browse/HADOOP-16900
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.1
> Reporter: Andrew Olson
> Assignee: Mukund Thakur
> Priority: Major
> Labels: s3
> Fix For: 3.4.0
>
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt
> truncation of the S3 object will occur as the maximum number of parts in a
> multipart upload is 10,000 as
> [specified|https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html] by
> the S3 API, and there is an apparent bug where this failure is not fatal
> allowing the multipart upload operation to be marked as successfully
> completed without being fully complete.
[jira] [Updated] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16900:
------------------------------------
    Affects Version/s: 3.2.1

> Very large files can be truncated when written through S3AFileSystem
> ---------------------------------------------------------------------
>
> Key: HADOOP-16900
> URL: https://issues.apache.org/jira/browse/HADOOP-16900
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.1
> Reporter: Andrew Olson
> Priority: Major
> Labels: s3
>
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt
> truncation of the S3 object will occur as the maximum number of parts in a
> multipart upload is 10,000 as specific by the S3 API and there is an apparent
> bug where this failure is not fatal, and the multipart upload is allowed to
> be marked as completed.
[jira] [Updated] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Olson updated HADOOP-16900:
----------------------------------
    Description:
If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as specific by the S3 API and there is an apparent bug where this failure is not fatal, and the multipart upload is allowed to be marked as completed.

    (was: If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as specific by the S3 API and there is an apparent bug where this failure is not fatal.)

> Very large files can be truncated when written through S3AFileSystem
> ---------------------------------------------------------------------
>
> Key: HADOOP-16900
> URL: https://issues.apache.org/jira/browse/HADOOP-16900
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Reporter: Andrew Olson
> Priority: Major
> Labels: s3
>
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt
> truncation of the S3 object will occur as the maximum number of parts in a
> multipart upload is 10,000 as specific by the S3 API and there is an apparent
> bug where this failure is not fatal, and the multipart upload is allowed to
> be marked as completed.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org