[ https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17736530#comment-17736530 ]
ASF GitHub Bot commented on HADOOP-18706:
-----------------------------------------
hadoop-yetus commented on PR #5771:
URL: https://github.com/apache/hadoop/pull/5771#issuecomment-1604454398
:confetti_ball: **+1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 44s | | trunk passed |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 0m 34s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 1m 16s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 27s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 28s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 0m 25s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5771/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 8 new + 5 unchanged - 0 fixed = 13 total (was 5) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 1m 6s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 21s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 27s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | 104m 3s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5771/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5771 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 1304a7468aeb 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 95138e0dedfb8934695a766e5b72360a90b44517 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5771/1/testReport/ |
| Max. process+thread count | 608 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5771/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> Improve S3ABlockOutputStream recovery
> -------------------------------------
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Chris Bevard
> Assignee: Chris Bevard
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> If an application crashes during an S3ABlockOutputStream upload, it's
> possible to complete the upload if fast.upload.buffer is set to disk by
> uploading the s3ablock file with putObject as the final part of the multipart
> upload. If the application has multiple uploads running in parallel though
> and they're on the same part number when the application fails, then there is
> no way to determine which file belongs to which object, and recovery of
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every
> partial upload would be recoverable.
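> A minimal sketch of the idea, using a purely hypothetical filename layout (the
> exact format chosen by the patch is not shown in this excerpt): if the buffer
> file name carries both the destination key and the block index, a recovery tool
> can map every leftover block back to its target object and to its position in
> the upload.
> {code:java}
> // Hypothetical naming scheme, for illustration only; the real
> // S3ABlockOutputStream format may differ. The point is that both the
> // destination key and the block index are recoverable from the local filename.
> static String bufferFileName(String s3Key, int blockIndex) {
>   // '/' cannot appear in a single local filename component, so encode it.
>   String safeKey = s3Key.replace('/', '_');
>   return String.format("s3ablock-%04d-%s.tmp", blockIndex, safeKey);
> }
> {code}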
> h3. Important disclaimer
> This change does not directly add the Syncable semantics that applications
> expect when they require {{Syncable.hsync()}} to only return after all pending
> data has been durably written to the destination path. S3 is not a filesystem
> and this change does not make it so.
> What it does do is assist anyone trying to implement some post-crash recovery
> process which
> # interrogates S3 to identify pending uploads to a specific path and get a
> list of uploaded blocks yet to be committed
> # scans the local fs.s3a.buffer dir directories to identify in-progress-write
> blocks for the same target destination, that is, those which were being
> uploaded, queued for upload, and the single "new data being written to" block
> for an output stream
> # uploads all those pending blocks
> # generates a new POST to complete a multipart upload with all the blocks in
> the correct order
> All this patch does is ensure the buffered block filenames include the final
> path and block ID, to aid in identifying which blocks need to be uploaded and
> in what order.
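> A hedged sketch of what such a recovery process could look like, written
> against the AWS SDK for Java v1, is below. The listing and upload calls are
> real SDK methods, but the bucket/key arguments, the rule used to match buffer
> files to the destination key, and the block ordering are assumptions for
> illustration, not part of the patch itself.
> {code:java}
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.AmazonS3ClientBuilder;
> import com.amazonaws.services.s3.model.*;
>
> import java.io.File;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.List;
>
> public class S3ARecoverySketch {
>
>   /** Resume and complete a pending multipart upload of bucket/key. */
>   public static void recover(String bucket, String key, File bufferDir) {
>     AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();
>
>     // Step 1: find pending multipart uploads for the target key, and the
>     // parts already committed before the crash (pagination omitted).
>     MultipartUploadListing pending = s3.listMultipartUploads(
>         new ListMultipartUploadsRequest(bucket).withPrefix(key));
>
>     for (MultipartUpload upload : pending.getMultipartUploads()) {
>       String uploadId = upload.getUploadId();
>       PartListing listed =
>           s3.listParts(new ListPartsRequest(bucket, key, uploadId));
>
>       List<PartETag> parts = new ArrayList<>();
>       int lastPart = 0;
>       for (PartSummary p : listed.getParts()) {
>         parts.add(new PartETag(p.getPartNumber(), p.getETag()));
>         lastPart = Math.max(lastPart, p.getPartNumber());
>       }
>
>       // Step 2: scan the local buffer dir for leftover blocks that belong to
>       // this destination; possible only because the filename now carries the
>       // final path and block ID (the matching rule here is hypothetical).
>       File[] blocks = bufferDir.listFiles(
>           f -> f.getName().contains(key.replace('/', '_')));
>       if (blocks == null || blocks.length == 0) {
>         continue;
>       }
>       Arrays.sort(blocks); // block-ID order is encoded in the filename
>
>       // Step 3: upload every leftover block as the next part numbers.
>       int partNumber = lastPart + 1;
>       for (File block : blocks) {
>         UploadPartResult r = s3.uploadPart(new UploadPartRequest()
>             .withBucketName(bucket)
>             .withKey(key)
>             .withUploadId(uploadId)
>             .withPartNumber(partNumber++)
>             .withFile(block));
>         parts.add(r.getPartETag());
>       }
>
>       // Step 4: complete the multipart upload with all blocks in order.
>       s3.completeMultipartUpload(
>           new CompleteMultipartUploadRequest(bucket, key, uploadId, parts));
>     }
>   }
> }
> {code}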
> h2. Warning
> Causes HADOOP-18744 - always include the relevant fix when backporting.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)