[ https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16563967#comment-16563967 ]
Ewan Higgs commented on HADOOP-15576:
-------------------------------------

004
- Add a test for an empty uploadID.
- Make the part size configurable by the FS implementation, as S3A requires 5MB part sizes (except for the last part).
- Use an md5sum to compare the file contents, so we don't need to concatenate multiple 5MB blocks and compare them as if they were a string.
- Fix the META-INF/services entry for S3AMultipartUploader$Factory.

{quote}
yes. If it's java serialization then it needs to be looked at to make sure it defends against malicious stuff.
{quote}

Would you prefer protobuf for this? It would be an extra generation step + a new dependency.

> S3A Multipart Uploader to work with S3Guard and encryption
> -----------------------------------------------------------
>
>                 Key: HADOOP-15576
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15576
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HADOOP-15576.001.patch, HADOOP-15576.002.patch, HADOOP-15576.003.patch, HADOOP-15576.004.patch
>
>
> The new Multipart Uploader API of HDFS-13186 needs to work with S3Guard, with tests to demonstrate this:
> # move from low-level calls of the S3A client to calls of WriteOperationHelper, adding any new methods needed there.
> # Tests: the tests of HDFS-13713.
> # Test execution, with -DS3Guard, -DAuth.
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and even if there was, it might not show that S3Guard was bypassed, because there are no checks that listFiles/listStatus shows the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, encryption settings aren't picked up. Files being uploaded this way *are not being encrypted*.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
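The md5-based comparison mentioned in the 004 patch notes could be sketched roughly as below. This is a minimal illustration, not the patch's actual test code: the class name `ContentsDigest` and method `md5Of` are hypothetical, using only the standard `java.security.MessageDigest` API. Streaming the bytes through a digest avoids holding several 5MB parts in memory just to compare them.

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/**
 * Hypothetical test helper: digest a stream so large multipart uploads
 * can be compared by checksum rather than by concatenating 5MB parts
 * into strings and comparing those.
 */
public final class ContentsDigest {

  private ContentsDigest() {
  }

  /** Read the stream to exhaustion and return its MD5 digest (16 bytes). */
  public static byte[] md5Of(InputStream in)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    byte[] buffer = new byte[8192];
    int read;
    while ((read = in.read(buffer)) != -1) {
      md5.update(buffer, 0, read);
    }
    return md5.digest();
  }
}
```

A test would then assert equality with `MessageDigest.isEqual(md5Of(expected), md5Of(actual))`, which also sidesteps timing-dependent comparison of raw byte arrays.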