[ https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17709503#comment-17709503 ]

ASF GitHub Bot commented on HADOOP-18637:
-----------------------------------------

mukund-thakur commented on code in PR #5481:
URL: https://github.com/apache/hadoop/pull/5481#discussion_r1160269091


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1831,6 +1832,11 @@ private FSDataOutputStream innerCreateFile(
     final PutObjectOptions putOptions =
         new PutObjectOptions(keep, null, options.getHeaders());
 
+    if(!checkDiskBuffer(getConf())){

Review Comment:
   > just add a method validateOutputStreamConfiguration() and throw exception 
in the implementation only.
   
   This is still pending. I don't really mind leaving it as it is, but I think 
my suggestion is consistent with other parts of the code and more readable.
   CC @steveloughran 
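   A minimal sketch of the shape the reviewer is suggesting: rather than a 
boolean `checkDiskBuffer()` tested with an `if` at the call site, a 
`validateOutputStreamConfiguration()` method that throws directly from the 
implementation. The class, the configuration map, and the key/message strings 
below are simplified stand-ins (not Hadoop's actual `Configuration` API or the 
PR's real code), used only to illustrate the pattern:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the review suggestion: validation that throws
// inside the method, so the call site is a single statement with no branch.
public class OutputStreamConfigCheck {

  // Stand-in configuration key and value; real S3A keys may differ.
  static final String FAST_UPLOAD_BUFFER = "fs.s3a.fast.upload.buffer";
  static final String BUFFER_DISK = "disk";

  /**
   * Validate the output stream configuration. Instead of returning a
   * boolean for the caller to test, raise an exception here when the
   * configured buffering cannot support the requested upload.
   */
  static void validateOutputStreamConfiguration(Map<String, String> conf)
      throws IOException {
    String buffer = conf.getOrDefault(FAST_UPLOAD_BUFFER, BUFFER_DISK);
    if (!BUFFER_DISK.equals(buffer)) {
      throw new IOException(
          "Unable to create output stream with the configured buffer type: "
          + buffer);
    }
  }

  public static void main(String[] args) throws IOException {
    Map<String, String> ok = new HashMap<>();
    ok.put(FAST_UPLOAD_BUFFER, BUFFER_DISK);
    // Call site stays a single line; no if/throw branching here.
    validateOutputStreamConfiguration(ok);

    Map<String, String> bad = new HashMap<>();
    bad.put(FAST_UPLOAD_BUFFER, "array");
    try {
      validateOutputStreamConfiguration(bad);
    } catch (IOException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```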





> S3A to support upload of files greater than 2 GB using DiskBlocks
> -----------------------------------------------------------------
>
>                 Key: HADOOP-18637
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18637
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Harshit Gupta
>            Assignee: Harshit Gupta
>            Priority: Major
>              Labels: pull-request-available
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB. 
> Currently, the max upload size of a single block is ~2 GB. 
> cc: [~mthakur] [[email protected]] [~mehakmeet] 
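A plausible reading of the ~2 GB single-block ceiling mentioned above is a size 
tracked as a Java `int`, which caps out at `Integer.MAX_VALUE` bytes (just 
under 2 GiB). This is an assumption for illustration, not a statement of where 
the limit lives in the S3A code:

```java
// Demonstrates why an int-typed byte count tops out at roughly 2 GB:
// Integer.MAX_VALUE is 2^31 - 1 bytes, i.e. just under 2 GiB.
public class BlockSizeLimit {
  public static void main(String[] args) {
    long maxIntBytes = Integer.MAX_VALUE;
    double gib = maxIntBytes / (1024.0 * 1024 * 1024);
    System.out.println(maxIntBytes + " bytes = " + gib + " GiB");
  }
}
```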



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
