[
https://issues.apache.org/jira/browse/HADOOP-17928?focusedWorklogId=670060&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-670060
]
ASF GitHub Bot logged work on HADOOP-17928:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 26/Oct/21 13:08
Start Date: 26/Oct/21 13:08
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3585:
URL: https://github.com/apache/hadoop/pull/3585#issuecomment-951921423
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 46s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 12s | | trunk passed |
| +1 :green_heart: | compile | 21m 31s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 48s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 37s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 10s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 32s | | the patch passed |
| +1 :green_heart: | compile | 20m 44s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 20m 44s | | the patch passed |
| +1 :green_heart: | compile | 18m 47s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 47s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3585/2/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 3m 38s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 35s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 8s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 40s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 18s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 34s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 210m 57s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3585/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3585 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle markdownlint |
| uname | Linux 5a910a4d8544 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / d252d4e8d6db6e9fd57b64366d269f879aae470b |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3585/2/testReport/ |
| Max. process+thread count | 1250 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3585/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
Issue Time Tracking
-------------------
Worklog Id: (was: 670060)
Time Spent: 1h (was: 50m)
> s3a: set fs.s3a.downgrade.syncable.exceptions = true by default
> ---------------------------------------------------------------
>
> Key: HADOOP-17928
> URL: https://issues.apache.org/jira/browse/HADOOP-17928
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.1
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> HADOOP-17597 set the policy for reacting to hsync() on an S3 output stream to
> one of: fail or warn, with the default being fail.
> I propose downgrading this to warn. We've done it internally, after having it
> on fail long enough to identify which processes were either:
> * having unrealistic expectations about the output stream (fix: move off S3), or
> * using hflush() as a variant of flush(), where failing is an over-reaction.
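For illustration only (not part of the patch), here is a minimal Java sketch of what the proposed default would mean for client code; the bucket and path names are placeholders. With `fs.s3a.downgrade.syncable.exceptions` set to true, hflush()/hsync() on an S3A output stream are expected to be downgraded to a warning plus flush() rather than failing, as described in HADOOP-17597. The same setting can equally go into core-site.xml instead of being set programmatically.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3aSyncableDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The property under discussion; the proposal is to make "true" the default.
    conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);

    // Placeholder bucket/path, for illustration only.
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    try (FSDataOutputStream out =
        fs.create(new Path("s3a://example-bucket/tmp/demo.txt"))) {
      out.write("hello".getBytes(StandardCharsets.UTF_8));
      // With the downgrade enabled this is expected to warn and flush();
      // with it disabled (the current default) the call fails.
      out.hflush();
    }
  }
}
```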