[ https://issues.apache.org/jira/browse/HADOOP-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17712302#comment-17712302 ]

ASF GitHub Bot commented on HADOOP-18695:
-----------------------------------------

dannycjones commented on code in PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#discussion_r1166577565


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java:
##########
@@ -192,13 +201,7 @@ public void test_010_CreateHugeFile() throws IOException {
         true,
         uploadBlockSize,
         progress)) {
-      try {
-        streamStatistics = getOutputStreamStatistics(out);
-      } catch (ClassCastException e) {
-        LOG.info("Wrapped output stream is not block stream: {}",
-            out.getWrappedStream());
-        streamStatistics = null;
-      }
+      streamStatistics = getOutputStreamStatistics(out);

Review Comment:
   Why do we drop this? Is it that it wasn't right in the first place, so we
   should fail if we hit it?
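
   For reference, a minimal sketch of the stricter alternative the question
   hints at: fail loudly rather than swallow the ClassCastException. The
   assertion style and the S3ABlockOutputStream expectation are assumptions
   on my part; the test's own getOutputStreamStatistics(out) helper is
   unchanged:

       // Sketch only. Requires org.assertj.core.api.Assertions and
       // org.apache.hadoop.fs.s3a.S3ABlockOutputStream on the classpath;
       // `out` is the FSDataOutputStream opened in the try-with-resources above.
       Object wrapped = out.getWrappedStream();
       Assertions.assertThat(wrapped)
           .describedAs("wrapped output stream of %s", out)
           .isInstanceOf(S3ABlockOutputStream.class);
       streamStatistics = getOutputStreamStatistics(out);

   With an explicit assertion, a non-block wrapped stream fails the test with
   a descriptive message instead of silently nulling the statistics.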





> S3A: reject multipart copy requests when disabled
> -------------------------------------------------
>
>                 Key: HADOOP-18695
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18695
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>
> Follow-on to HADOOP-18637 and its support for huge file uploads to stores 
> which don't support multipart uploads (MPU):
> * prevent use of the multipart API against any S3 store when multipart is 
> disabled, using the logging auditor to reject such requests
> * tests to verify that rename of huge files still works, by setting a large 
> part size (a configuration sketch follows below)
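
A minimal sketch of how a test might wire this up. The
fs.s3a.multipart.uploads.enabled property name is my assumption based on
HADOOP-18637 (verify against org.apache.hadoop.fs.s3a.Constants); the
multipart size/threshold properties are standard S3A settings:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // assumed switch from HADOOP-18637: disable multipart entirely, so the
    // logging auditor should reject any multipart upload or copy request
    conf.setBoolean("fs.s3a.multipart.uploads.enabled", false);
    // raise part size and threshold to the 5 GB single-request ceiling so a
    // huge-file rename is forced down the single-part copy path
    conf.set("fs.s3a.multipart.size", "5g");
    conf.set("fs.s3a.multipart.threshold", "5g");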


