[ https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17708367#comment-17708367 ]

ASF GitHub Bot commented on HADOOP-18637:
-----------------------------------------

steveloughran commented on code in PR #5481:
URL: https://github.com/apache/hadoop/pull/5481#discussion_r1157069838


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:
##########
@@ -1031,13 +1031,22 @@ public static long getMultipartSizeProperty(Configuration conf,
     return partSize;
   }
 
-  public static boolean checkDiskBuffer(Configuration conf){
+  /**
+   * Check whether the configuration for S3ABlockOutputStream is
+   * consistent or not. Multipart uploads allow all kinds of fast buffers to
+   * be supported. When the option is disabled only disk buffers are allowed to
+   * be used as the file size might be bigger than the buffer size that can be
+   * allocated.
+   * @param conf
+   * @return

Review Comment:
   nit: document the conf argument and return value
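   One way the completed javadoc could read (a sketch based on the method description above, not the committed text):
   
   ```java
   /**
    * ...
    * @param conf the configuration to examine
    * @return true if the block output stream buffering settings are
    *     consistent with the multipart upload setting
    */
   ```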



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##########
@@ -1256,7 +1256,16 @@ private Constants() {
   public static final String PREFETCH_BLOCK_COUNT_KEY = "fs.s3a.prefetch.block.count";
   public static final int PREFETCH_BLOCK_DEFAULT_COUNT = 8;
 
+  /**
+   * Option to enable or disable the multipart uploads.

Review Comment:
   nit: still needs an {@value} element
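   For example, following the usual pattern in Constants.java (the constant name and value here are assumptions; the excerpt does not show them):
   
   ```java
   /**
    * Option to enable or disable multipart uploads.
    * Value: {@value}.
    */
   public static final String MULTIPART_UPLOADS_ENABLED =
       "fs.s3a.multipart.uploads.enabled";
   ```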



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFileUploadSinglePut.java:
##########
@@ -42,14 +51,23 @@ protected Configuration createScaleConfiguration() {
     configuration.setLong(MULTIPART_SIZE, 53687091200L);
     configuration.setInt(KEY_TEST_TIMEOUT, 36000);
     configuration.setInt(IO_CHUNK_BUFFER_SIZE, 655360);
-    configuration.set("fs.s3a.connection.request.timeout", "1h");
+    configuration.set(REQUEST_TIMEOUT, "1h");
+    fileSize = getTestPropertyBytes(configuration, KEY_HUGE_FILESIZE,
+        DEFAULT_HUGE_FILESIZE);
     return configuration;
   }
 
   @Test
   public void uploadFileSinglePut() throws IOException {
     LOG.info("Creating file with size : {}", fileSize);
-    ContractTestUtils.createAndVerifyFile(getFileSystem(),
-        getTestPath(), fileSize );
+    S3AFileSystem fs = getFileSystem();
+    ContractTestUtils.createAndVerifyFile(fs,
+        getTestPath(), fileSize);
+    //No more than three put requests should be made during the upload of the file
+    //First one being the creation of test/ directory marker
+    //Second being the creation of the file with tests3ascale/<file-name>
+    //Third being the creation of directory marker tests3ascale/ on the file delete
+    assertEquals(3L,

Review Comment:
   use `IOStatisticAssertions` here; it generates AssertJ assertion chains from lookups, with automatic generation of error text.
   
   ```java
   assertThatStatisticCounter(fs.getIOStatistics(), OBJECT_PUT_REQUESTS.getSymbol())
       .isEqualTo(3);
   ```
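   The static imports that relies on (assuming the assertion helper lives in org.apache.hadoop.fs.statistics and the counter enum in the s3a Statistic class):
   
   ```java
   import static org.apache.hadoop.fs.s3a.Statistic.OBJECT_PUT_REQUESTS;
   import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter;
   ```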
   



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1859,7 +1859,7 @@ private FSDataOutputStream innerCreateFile(
         .withPutOptions(putOptions)
         .withIOStatisticsAggregator(
             IOStatisticsContext.getCurrentIOStatisticsContext().getAggregator())
-            .withMultipartAllowed(getConf().getBoolean(
+            .withMultipartEnabled(getConf().getBoolean(

Review Comment:
   I think the multipart enabled flag should be made a field and stored during initialize(), so we can save on scanning the conf map every time a file is created.
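   A minimal sketch of that pattern, written outside the real class (the constant, default, and field names are illustrative, not the PR's):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   class MultipartFlagCache {
     // assumed option name and default; the excerpt does not show the actual constant
     static final String MULTIPART_UPLOADS_ENABLED = "fs.s3a.multipart.uploads.enabled";
     static final boolean DEFAULT_MULTIPART_UPLOADS_ENABLED = true;
   
     private boolean isMultipartUploadEnabled;
   
     // called once from initialize(): one conf map scan per filesystem instance
     void initialize(Configuration conf) {
       isMultipartUploadEnabled = conf.getBoolean(
           MULTIPART_UPLOADS_ENABLED, DEFAULT_MULTIPART_UPLOADS_ENABLED);
     }
   
     // innerCreateFile() would then pass this cached value to
     // .withMultipartEnabled(...) instead of re-reading the conf
     boolean isMultipartUploadEnabled() {
       return isMultipartUploadEnabled;
     }
   }
   ```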





> S3A to support upload of files greater than 2 GB using DiskBlocks
> -----------------------------------------------------------------
>
>                 Key: HADOOP-18637
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18637
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Harshit Gupta
>            Assignee: Harshit Gupta
>            Priority: Major
>              Labels: pull-request-available
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB. 
> Currently, the max upload size of a single block is ~2GB. 
> cc: [~mthakur] [[email protected]] [~mehakmeet] 



