[
https://issues.apache.org/jira/browse/HADOOP-17597?focusedWorklogId=587888&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587888
]
ASF GitHub Bot logged work on HADOOP-17597:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 23/Apr/21 14:57
Start Date: 23/Apr/21 14:57
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2801:
URL: https://github.com/apache/hadoop/pull/2801#issuecomment-825716498
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 13s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 42s | | trunk passed |
| +1 :green_heart: | compile | 22m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 19m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 3m 58s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 16s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 7s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 29s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 52s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 26s | | the patch passed |
| +1 :green_heart: | compile | 21m 43s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 21m 43s | | the patch passed |
| +1 :green_heart: | compile | 19m 4s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 19m 4s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/4/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 3m 56s | | root: The patch generated 0 new + 13 unchanged - 1 fixed = 13 total (was 14) |
| +1 :green_heart: | mvnsite | 2m 13s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 52s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 5s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 18m 12s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| +1 :green_heart: | unit | 2m 9s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | 208m 50s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2801 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
| uname | Linux de220dfaefc7 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 9643ec071ac6738540794f727f90713bdee65b86 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/4/testReport/ |
| Max. process+thread count | 3134 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 587888)
Time Spent: 1h 40m (was: 1.5h)
> Add option to downgrade S3A rejection of Syncable to warning
> ------------------------------------------------------------
>
> Key: HADOOP-17597
> URL: https://issues.apache.org/jira/browse/HADOOP-17597
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.3.1
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> The Hadoop Filesystem Syncable API is intended to meet the requirements laid
> out in [Stonebraker81], _Operating System Support for Database Management_:
> bq. The service required from an OS buffer manager is a selected force out
> which would push the intentions list and the commit flag to disk in the
> proper order. Such a service is not present in any buffer manager known to us.
> It's an expensive operation, so expensive that {{Syncable.hsync()}} isn't
> even called in {{DFSOutputStream.close()}}.
> Even though S3A does not manifest any data until close() is called,
> applications coming from HDFS may call Syncable methods and expect them to
> persist data with the durability guarantees offered by HDFS.
> Since the output stream hardening of HADOOP-13327, S3A throws
> UnsupportedOperationException to indicate that the synchronization semantics
> of Syncable absolutely cannot be met.
> As a result, applications which have been calling the Syncable APIs are
> finding the calls failing. In the absence of exception handling to recognise
> that the durability semantics cannot be met, the applications fail.
> If the user and the application actually expect data to be persisted, this
> is the correct behaviour: the data cannot be persisted this way.
> If, however, they were calling this on HDFS more as a {{flush()}} than the
> full and expensive DBMS-class persistence call, then this failure is
> unwelcome. The applications really need to catch the
> UnsupportedOperationException raised by S3A _or any other FS strictly
> reporting failures_, report the problem and perform some other means of safe
> data storage.
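> A minimal sketch of that defensive pattern, assuming slf4j logging and
> leaving the actual fallback storage mechanism to the caller:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public final class SyncHelper {
>   private static final Logger LOG = LoggerFactory.getLogger(SyncHelper.class);
>
>   /**
>    * Try to hsync(); if the store strictly rejects Syncable, report it and
>    * let the caller fall back to some other means of safe data storage.
>    * @return true if the data was durably synced.
>    */
>   public static boolean trySync(FSDataOutputStream out) throws IOException {
>     try {
>       out.hsync();
>       return true;
>     } catch (UnsupportedOperationException e) {
>       LOG.warn("Stream does not support Syncable: {}", e.toString());
>       return false;
>     }
>   }
> }
> {code}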
> Even better, they can use hasPathCapability() on the FS or hasCapability() on
> the stream to probe before even opening a file or trying to sync it. The
> hasCapability() probe on a stream was actually implemented in Hadoop 2.x
> precisely to allow applications to identify when a stream could not meet the
> guarantees (e.g. some of the encrypted streams, file:// before HADOOP-13...).
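> For illustration, a probe on an already-open stream might look like this;
> {{StreamCapabilities.HSYNC}} is the standard "hsync" probe string, and
> {{FSDataOutputStream}} relays the query to the wrapped stream:
> {code:java}
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.StreamCapabilities;
>
> /** Probe an open stream for Syncable support before relying on hsync(). */
> public final class SyncProbe {
>   public static boolean supportsHsync(FSDataOutputStream out) {
>     return out.hasCapability(StreamCapabilities.HSYNC);
>   }
> }
> {code}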
> Until they can correct their code, I propose adding an option for S3A to
> downgrade the failure:
> {{fs.s3a.downgrade.syncable.exceptions}}
> This will:
> * log once per process at WARN
> * downgrade the calls to no-ops
> * increment counters in the S3A statistics and IOStatistics for invocations
> of the Syncable methods. This will allow stats gathering to identify which
> applications need fixing in cloud deployments.
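> For illustration, once such an option exists it could be enabled like this
> (the bucket name is a placeholder, and the default value and exact behaviour
> are whatever the final patch ships with):
> {code:java}
> import java.net.URI;
> import java.nio.charset.StandardCharsets;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class SyncableDowngradeDemo {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Proposed option: downgrade S3A's Syncable rejection to a warning.
>     conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);
>
>     FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
>     Path path = new Path("/tmp/syncable-demo.txt");
>     try (FSDataOutputStream out = fs.create(path, true)) {
>       out.write("hello".getBytes(StandardCharsets.UTF_8));
>       // With the option enabled: logged once at WARN, then a no-op.
>       out.hsync();
>     }
>   }
> }
> {code}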
> Testing: copy the hsync tests but expect exceptions to be swallowed and stats
> to be collected.
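> A hedged sketch of such a test; the real thing would live alongside the
> existing S3A hsync tests, and the counter assertions are left as a comment
> because the statistic names are not fixed yet:
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.junit.Test;
>
> public class TestSyncableDowngrade {
>   @Test
>   public void testHsyncDowngradedToWarning() throws Exception {
>     Configuration conf = new Configuration();
>     conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);
>     FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
>     try (FSDataOutputStream out = fs.create(new Path("/test/downgrade"), true)) {
>       out.write(1);
>       out.hflush();  // must not throw when the downgrade option is enabled
>       out.hsync();   // ditto; the data is still only visible after close()
>     }
>     // A complete test would also assert that the Syncable invocation
>     // counters in the stream's IOStatistics were incremented.
>   }
> }
> {code}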
> Also: the UnsupportedOperationException text will link to this JIRA.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]