[ https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ewan Higgs updated HADOOP-15576:
--------------------------------
    Status: Patch Available  (was: Open)

Hi, [~ste...@apache.org], thanks for taking a look at the original patch.

001
- Extended the tests in {{AbstractSystemMultipartUploaderTest.java}} to check the existence of the files after writing (and the non-existence of the files after aborting).
- Used {{WriteOperationHelper}} in S3A. Note that I use a length of 0, since the length is not passed into complete or any other MPU call; it's not required for S3, only for stats collection.

[~ste...@apache.org], thoughts on what we want to do here for the file length?

> S3A Multipart Uploader to work with S3Guard and encryption
> -----------------------------------------------------------
>
>                 Key: HADOOP-15576
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15576
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HADOOP-15576.001.patch
>
>
> The new multipart uploader API of HDFS-13186 needs to work with S3Guard, with tests to demonstrate this:
> # Move from low-level calls of the S3A client to calls of WriteOperationHelper, adding any new methods needed there.
> # Tests, extending the tests of HDFS-13713.
> # Test execution, with -DS3Guard, -DAuth.
>
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and even if there were, it might not show that S3Guard was bypassed, because there are no checks that listFiles/listStatus shows the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, encryption settings aren't picked up. Files being uploaded this way *are not being encrypted*.
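For illustration, the shape of the new existence checks in the tests can be sketched as follows. This is a self-contained toy using java.nio.file rather than Hadoop's FileSystem and MultipartUploader APIs; the file names and the move/delete "complete"/"abort" steps are hypothetical stand-ins, not the real test code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MpuExistenceCheckDemo {
    // Toy sketch of the checks added to the MPU tests:
    // after complete() the destination must exist; after abort() it must not.
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("mpu");

        // "Upload" a part, then "complete" by moving it to the destination.
        Path staging = dir.resolve("upload.part0");
        Path dest = dir.resolve("dest.bin");
        Files.write(staging, "part-data".getBytes());
        Files.move(staging, dest);
        System.out.println("after complete, exists: " + Files.exists(dest));

        // "Abort": staged data is removed, destination is never created.
        Path staging2 = dir.resolve("upload2.part0");
        Path dest2 = dir.resolve("dest2.bin");
        Files.write(staging2, "part-data".getBytes());
        Files.deleteIfExists(staging2);
        System.out.println("after abort, exists: " + Files.exists(dest2));
    }
}
```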
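The S3Guard-bypass concern described in the issue can be modelled with a toy sketch: plain Java sets standing in for S3 and for the S3Guard metadata store (nothing here is Hadoop code). An upload completed via raw client calls updates only the object store, so a metastore-backed listing never sees the new file; routing through a WriteOperationHelper-style layer keeps both in step:

```java
import java.util.Set;
import java.util.TreeSet;

public class S3GuardBypassDemo {
    // Toy model: "objectStore" stands in for S3, "metadataStore" for S3Guard.
    static Set<String> objectStore = new TreeSet<>();
    static Set<String> metadataStore = new TreeSet<>();

    // Buggy path: completing an MPU via raw client calls updates S3 only.
    static void completeUploadBypassingMetastore(String path) {
        objectStore.add(path);
        // Missing: metadataStore.add(path) -- the metastore is bypassed.
    }

    // Fixed path: a helper layer updates both stores, keeping listings consistent.
    static void completeUploadViaHelper(String path) {
        objectStore.add(path);
        metadataStore.add(path);
    }

    public static void main(String[] args) {
        completeUploadBypassingMetastore("/bucket/a.bin");
        completeUploadViaHelper("/bucket/b.bin");
        // A listing served from the metastore only shows files whose
        // metadata was recorded, so a.bin is invisible to it.
        System.out.println("metastore listing: " + metadataStore);
        System.out.println("object store: " + objectStore);
    }
}
```

This is why the tests need an explicit check that listFiles/listStatus shows the newly committed files: a bare complete-then-read test passes even when the metastore was bypassed.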