[ 
https://issues.apache.org/jira/browse/HADOOP-17597?focusedWorklogId=570826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570826
 ]

ASF GitHub Bot logged work on HADOOP-17597:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Mar/21 22:19
            Start Date: 23/Mar/21 22:19
    Worklog Time Spent: 10m 
      Work Description: hadoop-yetus commented on pull request #2801:
URL: https://github.com/apache/hadoop/pull/2801#issuecomment-805304298


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 16s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  42m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  34m 11s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   6m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 22s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  32m 16s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  32m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 36s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  28m 36s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/2/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   5m  0s |  |  root: The patch generated 0 new + 13 unchanged - 2 fixed = 13 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 49s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  1s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 288m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2801 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 6006de908d1f 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9cbca3e4d955e0ab8c6349099468e96c828f8205 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/2/testReport/ |
   | Max. process+thread count | 2191 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2801/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 570826)
    Time Spent: 50m  (was: 40m)

> Add option to downgrade S3A rejection of Syncable to warning
> ------------------------------------------------------------
>
>                 Key: HADOOP-17597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17597
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> The Hadoop Filesystem Syncable API is intended to meet the requirements laid
> out in [Stonebraker81], _Operating System Support for Database Management_:
> bq. The service required from an OS buffer manager is a selected force out
> which would push the intentions list and the commit flag to disk in the
> proper order. Such a service is not present in any buffer manager known to us.
> It's an expensive operation - so expensive that {{Syncable.hsync()}} isn't
> even called in {{DFSOutputStream.close()}}.
> Even though S3A does not manifest any data until {{close()}} is called,
> applications coming from HDFS may call Syncable methods and expect them to
> persist data with the durability guarantees offered by HDFS.
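> A minimal sketch (not from this issue or its patch) of the HDFS-style pattern
> described above: write a record, call {{hsync()}} to make it durable, and
> continue. The class, path and payload names are illustrative only.
> {code:java}
> // Hypothetical illustration: the write/hsync pattern applications bring from HDFS.
> // On HDFS, hsync() persists the data written so far; on S3A after the
> // HADOOP-13327 hardening, the same call throws UnsupportedOperationException.
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class WalStyleWriter {
>   public static void appendRecord(FileSystem fs, Path log, byte[] record)
>       throws java.io.IOException {
>     try (FSDataOutputStream out = fs.create(log, true)) {
>       out.write(record);
>       out.hsync();   // the application expects this to be durable before it continues
>     }
>   }
> }
> {code}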
> Since the output stream hardening of HADOOP-13327, S3A throws
> UnsupportedOperationException to indicate that the synchronization semantics
> of Syncable absolutely cannot be met.
> As a result, applications which have been calling the Syncable APIs are
> finding those calls failing. In the absence of exception handling to
> recognise that the durability semantics cannot be met, they fail.
> If the user and the application actually expect data to be persisted, this
> is the correct behaviour: the data cannot be persisted this way.
> If, however, they were calling this on HDFS more as a {{flush()}} than as the
> full and expensive DBMS-class persistence call, then this failure is
> unwelcome. The applications really need to catch the
> UnsupportedOperationException raised by S3A _or any other FS strictly
> reporting failures_, report the problem and perform some other means of safe
> data storage.
> Even better, they can use {{hasPathCapability()}} on the FS or
> {{hasCapability()}} on the stream to probe before even opening a file or
> trying to sync it. The {{hasCapability()}} probe on a stream was actually
> implemented in Hadoop 2.x precisely to allow applications to identify when a
> stream could not meet the guarantees (e.g. some of the encrypted streams,
> file:// before HADOOP-13...).
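> A sketch of that defensive pattern (assumed class and method names, not code
> from any patch): probe the stream's capabilities before relying on
> {{hsync()}}, and treat a missing capability as the signal to report the
> problem and use some other means of safe data storage.
> {code:java}
> // Sketch only: probe StreamCapabilities before calling Syncable methods.
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.StreamCapabilities;
>
> public class SyncableAwareWriter {
>   /** @return true if the data was durably synced, false if the store cannot do so. */
>   public static boolean writeDurably(FileSystem fs, Path path, byte[] data)
>       throws java.io.IOException {
>     try (FSDataOutputStream out = fs.create(path, true)) {
>       out.write(data);
>       if (out.hasCapability(StreamCapabilities.HSYNC)) {
>         out.hsync();               // the store really offers Syncable durability
>         return true;
>       }
>       // No hsync support (e.g. S3A): the data only becomes manifest in close().
>       // Report this and fall back to another persistence mechanism.
>       return false;
>     }
>   }
> }
> {code}
> The equivalent {{hasPathCapability()}} probe on the filesystem can be made
> before the file is even created; the capability name to probe for depends on
> what the particular store publishes.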
> Until they can correct their code, I propose adding an option for S3A to
> downgrade:
> {{fs.s3a.downgrade.syncable.exceptions}}
> This will:
> * log once per process at WARN
> * downgrade the calls to no-ops
> * increment counters in the S3A stats and IO stats of invocations of the
> Syncable methods. This will allow stats gathering to identify which
> applications need fixing in cloud deployments (a configuration sketch
> follows below).
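> If the proposal lands, enabling the downgrade might look like the following.
> This is a sketch only: the property name comes from this proposal, while the
> default value, final behaviour and the bucket URI here are assumptions.
> {code:java}
> // Sketch: opt in to the proposed downgrade behaviour on an S3A client.
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
>
> public class DowngradeSyncableExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Proposed option from this JIRA; final name/default are whatever the patch commits.
>     conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);
>     // Hypothetical bucket URI for illustration.
>     FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
>     // Streams from this FS would then log once at WARN, treat the Syncable
>     // calls as no-ops, and increment the invocation counters instead of throwing.
>   }
> }
> {code}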
> Testing: copy the hsync tests, but expect exceptions to be swallowed and
> stats to be collected.
> Also: the UnsupportedOperationException text will link to this JIRA.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
