[ https://issues.apache.org/jira/browse/HADOOP-18776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17734118#comment-17734118 ]

ASF GitHub Bot commented on HADOOP-18776:
-----------------------------------------

hadoop-yetus commented on PR #5758:
URL: https://github.com/apache/hadoop/pull/5758#issuecomment-1596958653

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  13m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5758/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) |  hadoop-tools/hadoop-aws: The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | -1 :x: |  javadoc  |   0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5758/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  1s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 34s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 110m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5758/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5758 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 9fc4d7719454 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f0e7d04b3dab4733f9994f91226797867a95063c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5758/1/testReport/ |
   | Max. process+thread count | 559 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5758/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Add OptimizedS3AMagicCommitter For Zero Rename Commits to S3 Endpoints
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-18776
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18776
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs/s3
>            Reporter: Syed Shameerur Rahman
>            Priority: Major
>              Labels: pull-request-available
>
> The goal is to add a new S3A committer named *OptimizedS3AMagicCommitter*, 
> which is another variant of the S3A magic committer but offers better 
> performance by accepting a few tradeoffs.
> The following are the differences between MagicCommitter and OptimizedMagicCommitter:
>  
> ||Operation||Magic Committer||*OptimizedS3AMagicCommitter*||
> |commitTask|1. Lists all {{.pending}} files in its attempt directory. 2. The contents are loaded into a list of single pending uploads. 3. Saved to a {{.pendingset}} file in the job attempt directory.|1. Lists all {{.pending}} files in its attempt directory. 2. The contents are loaded into a list of single pending uploads. 3. For each pending upload, the commit operation is called (complete multipart upload).|
> |commitJob|1. Loads all {{.pendingset}} files in its job attempt directory. 2. Every pending commit in the job is then committed. 3. The "SUCCESS" marker is created (if the config is enabled). 4. The "__magic" directory is cleaned up.|1. The "SUCCESS" marker is created (if the config is enabled). 2. The "__magic" directory is cleaned up.|
>  
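> To make the commitTask difference concrete, here is a minimal, hedged Java sketch of the idea: instead of persisting a {{.pendingset}} file for the job driver, the task attempt completes each pending multipart upload itself. The {{PendingUpload}} type below is a placeholder rather than an actual hadoop-aws class; the only real API used is the AWS SDK v1 {{completeMultipartUpload}} call.
> {code:java}
> // Hypothetical sketch of OptimizedS3AMagicCommitter.commitTask():
> // complete the multipart uploads inside the task attempt instead of
> // writing a .pendingset file for the job driver to process later.
> import java.util.List;
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
> import com.amazonaws.services.s3.model.PartETag;
> 
> public class OptimizedCommitTaskSketch {
> 
>   /** Placeholder for the data parsed from one .pending file. */
>   public static class PendingUpload {
>     String bucket;
>     String key;
>     String uploadId;
>     List<PartETag> parts;
>   }
> 
>   private final AmazonS3 s3;
> 
>   public OptimizedCommitTaskSketch(AmazonS3 s3) {
>     this.s3 = s3;
>   }
> 
>   /** commitTask: complete every pending upload of this task attempt. */
>   public void commitTask(List<PendingUpload> pendingUploads) {
>     for (PendingUpload p : pendingUploads) {
>       s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
>           p.bucket, p.key, p.uploadId, p.parts));
>     }
>   }
> }
> {code}
> With this, commitJob no longer needs to read any {{.pendingset}} files; it only writes the success marker and cleans up the "__magic" directory.
>  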
> *Performance Benefits :-*
>  # The primary performance boost comes from the complete-multipart-upload calls being made in a distributed fashion by the task attempts (task containers/executors) rather than by a single job driver. In the case of the MagicCommitter, the driver-side commit is O(files/threads).
>  # It also saves the S3 PUT calls needed to write the "{{.pendingset}}" files and the READ calls needed to load them in the job driver.
>  
> *TradeOffs :-*
> The tradeoffs are similar to those of FileOutputCommitter V2. Users migrating from FileOutputCommitter V2 to OptimizedS3AMagicCommitter will see no behavioral change as such:
>  # During execution, intermediate data becomes visible after the commitTask operation.
>  # On a failure, all output must be deleted and the job needs to be restarted.
>  
> *Performance Benchmark :-*
> Cluster : c4.8xlarge (EC2 instances)
> Instances : 1 (primary) + 5 (core)
> Data Size : 3 TB, partitioned (TPC-DS store_sales data)
> Engine : Apache Spark 3.3.1
> Query: the following query writes 3000+ files into the table directory (run for 3 iterations):
> {code:java}
> insert into <table> select ss_quantity from store_sales; {code}
> ||Committer||Iteration 1||Iteration 2||Iteration 3||
> |Magic|126|127|122|
> |OptimizedMagic|50|51|58|
> So on average (~125 vs ~53 across the three iterations), OptimizedMagicCommitter was *~2.3x* faster than MagicCommitter.
>  
> _*Note: Unlike the MagicCommitter, the OptimizedMagicCommitter is not suitable for cases where the user requires the guarantee that files are not visible in failure scenarios. Given the performance benefit, users may choose to use it if they don't require that guarantee or have some mechanism to clean up the data before retrying.*_
>  
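> For context, a hedged sketch of how such a committer might be enabled, following the existing S3A committer configuration pattern. Only {{fs.s3a.committer.name}} (with its current values {{directory}}, {{partitioned}}, {{magic}}) and the success-marker option exist today; the value {{optimized-magic}} is purely illustrative and would be defined by the patch.
> {code:java}
> // Illustrative only: selecting an S3A committer through Hadoop configuration.
> // "optimized-magic" is a hypothetical value for the proposed committer.
> import org.apache.hadoop.conf.Configuration;
> 
> public class CommitterConfigSketch {
>   public static Configuration s3aCommitterConf() {
>     Configuration conf = new Configuration();
>     // Existing switch used to pick the S3A committer implementation.
>     conf.set("fs.s3a.committer.name", "optimized-magic"); // hypothetical value
>     // Existing option: create the success marker when the job completes.
>     conf.setBoolean("mapreduce.fileoutputcommitter.marksuccessfuljobs", true);
>     return conf;
>   }
> }
> {code}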



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
