[
https://issues.apache.org/jira/browse/HADOOP-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17712378#comment-17712378
]
ASF GitHub Bot commented on HADOOP-18695:
-----------------------------------------
steveloughran commented on code in PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#discussion_r1166803520
##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java:
##########
@@ -119,6 +119,14 @@ protected Configuration createScaleConfiguration() {
DEFAULT_HUGE_PARTITION_SIZE);
assertTrue("Partition size too small: " + partitionSize,
partitionSize >= MULTIPART_MIN_SIZE);
+ removeBaseAndBucketOverrides(conf,
+ SOCKET_SEND_BUFFER,
+ SOCKET_RECV_BUFFER,
+ MIN_MULTIPART_THRESHOLD,
+ MULTIPART_SIZE,
+ USER_AGENT_PREFIX,
+ FAST_UPLOAD_BUFFER);
Review Comment:
oh that's cute. It would be hard to get right, though, especially as there are
other trouble spots related to default option loading (if anyone creates an
HdfsConfiguration, it forces hdfs-default.xml onto the classpath, which
reloads things and breaks tests).
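For readers outside the Hadoop codebase, the diff above calls a test helper that strips both the base S3A option and any per-bucket override of it, so the test sees the store's defaults. A minimal self-contained sketch of that pattern follows; the `Map`-based config and the `fs.s3a.bucket.<bucket>.` naming convention stand in for Hadoop's real `Configuration` and `removeBaseAndBucketOverrides`, which have different signatures.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the "remove base and bucket overrides" test pattern.
 * Hypothetical stand-in for the Hadoop helper; not the real API.
 */
public class OverrideRemovalSketch {

    /** Per-bucket override prefix, mirroring fs.s3a.bucket.&lt;bucket&gt;. naming. */
    static final String BUCKET_PREFIX = "fs.s3a.bucket.";

    /**
     * Remove both the base option and its per-bucket override so that
     * defaults apply for the duration of the test.
     */
    static void removeBaseAndBucketOverrides(Map<String, String> conf,
                                             String bucket,
                                             String... options) {
        for (String option : options) {
            // drop the base option, e.g. fs.s3a.multipart.size
            conf.remove(option);
            // drop the per-bucket form, e.g. fs.s3a.bucket.test.multipart.size
            String suffix = option.startsWith("fs.s3a.")
                ? option.substring("fs.s3a.".length())
                : option;
            conf.remove(BUCKET_PREFIX + bucket + "." + suffix);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.multipart.size", "64M");
        conf.put("fs.s3a.bucket.test.multipart.size", "5G");
        removeBaseAndBucketOverrides(conf, "test", "fs.s3a.multipart.size");
        System.out.println(conf.isEmpty());
    }
}
```

The point of removing both forms is that a per-bucket setting silently wins over the base one, so clearing only the base key would leave the test at the mercy of whatever the bucket-specific config says.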
> S3A: reject multipart copy requests when disabled
> -------------------------------------------------
>
> Key: HADOOP-18695
> URL: https://issues.apache.org/jira/browse/HADOOP-18695
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Labels: pull-request-available
>
> Follow-on to HADOOP-18637: support huge file uploads against stores which
> don't support multipart uploads (MPU).
> * Prevent use of the multipart API against any S3 store when disabled, using
> the logging auditor to reject the request.
> * Add tests to verify that rename of huge files still works (by setting a
> large part size).
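The rejection described above can be sketched as a simple guard: if multipart operations are disabled, a multipart copy request fails fast rather than being sent to the store. The class and method names below are hypothetical illustrations; in the actual change the S3A logging auditor performs this veto.

```java
/**
 * Illustrative guard for the HADOOP-18695 behaviour: reject a
 * multipart copy request when multipart operations are disabled.
 * Hypothetical names; not the real S3A auditor API.
 */
public class MultipartCopyGuard {

    private final boolean multipartEnabled;

    public MultipartCopyGuard(boolean multipartEnabled) {
        this.multipartEnabled = multipartEnabled;
    }

    /** Called before issuing a multipart copy; throws if disabled. */
    public void beforeMultipartCopy(String sourceKey) {
        if (!multipartEnabled) {
            throw new UnsupportedOperationException(
                "Multipart copy rejected (multipart disabled): " + sourceKey);
        }
    }

    public static void main(String[] args) {
        MultipartCopyGuard guard = new MultipartCopyGuard(false);
        try {
            guard.beforeMultipartCopy("data/huge-file.bin");
            System.out.println("allowed");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected");
        }
    }
}
```

Failing fast here matters because a copy (rename) of a huge file would otherwise silently fall back to a multipart copy that the store cannot serve; the companion tests instead raise the part size so huge-file renames go through as single-part copies.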
--
This message was sent by Atlassian Jira
(v8.20.10#820010)