[
https://issues.apache.org/jira/browse/HDFS-16910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17685420#comment-17685420
]
ASF GitHub Bot commented on HDFS-16910:
---------------------------------------
hadoop-yetus commented on PR #5359:
URL: https://github.com/apache/hadoop/pull/5359#issuecomment-1421261477
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 4s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 10s | | trunk passed |
| +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 7s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 25s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 28s | | the patch passed |
| +1 :green_heart: | compile | 1m 20s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 1m 20s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 52s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 22s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 24s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 34s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 301m 24s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5359/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 419m 40s | | |
| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5359/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5359 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 25d578b27c91 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 883e4f9f75601ee59d65d6c0512a69f404d0568f |
| Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5359/1/testReport/ |
| Max. process+thread count | 3215 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5359/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> Fix incorrectly initialized RandomAccessFile causing flush performance
> degradation for JN
> ---------------------------------------------------------------------------------------
>
> Key: HDFS-16910
> URL: https://issues.apache.org/jira/browse/HDFS-16910
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Haiyang Hu
> Assignee: Haiyang Hu
> Priority: Major
> Labels: pull-request-available
>
> At present, after our cluster backported HDFS-15882,
> setting shouldSyncWritesAndSkipFsync to false causes a flush performance
> degradation on the JN.
> *Root Cause*:
> When shouldSyncWritesAndSkipFsync is set to false, the RandomAccessFile is
> initialized with mode `rws`.
> Even though flushAndSync calls fc.force(false) (the intent being that only
> updates to the file's content, not its metadata, need to be written to
> storage),
> the `rws` mode requires every update to both the file's content and its
> metadata to be written synchronously, which degrades JN flush performance.
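
To make the mode difference concrete, here is a minimal standalone sketch (not the JournalNode code; the file names and payload are made up for illustration) contrasting an "rws" file, where every write also forces metadata to disk regardless of any later fc.force(false), with an "rw" file whose channel is synced on demand via fc.force(false):

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class RafModeSketch {
  public static void main(String[] args) throws IOException {
    byte[] payload = {1, 2, 3, 4};

    // "rws": every write must reach the device, content AND metadata.
    // Calling force(false) later cannot relax what the mode already enforces.
    try (RandomAccessFile rws = new RandomAccessFile("edits-rws.tmp", "rws")) {
      rws.write(payload);
    }

    // "rw" + force(false): writes land in the page cache first; the explicit
    // force(false) asks only for the file's content to be persisted.
    try (RandomAccessFile rw = new RandomAccessFile("edits-rw.tmp", "rw")) {
      rw.write(payload);
      FileChannel fc = rw.getChannel();
      fc.force(false); // content-only sync, analogous to the flushAndSync call
    }
  }
}
{code}
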
> *Fix:*
> Update the RandomAccessFile mode from `rws` to `rwd`:
> rwd: Open for reading and writing, as with "rw", and also require that every
> update to the file's content be written synchronously to the underlying
> storage device (unlike `rws`, metadata updates are not forced).
> {code:java}
> if (shouldSyncWritesAndSkipFsync) {
>   rp = new RandomAccessFile(name, "rwd");
> } else {
>   rp = new RandomAccessFile(name, "rw");
> }
> {code}
> With this change, when flushAndSync is executed:
> if shouldSyncWritesAndSkipFsync is false and the RandomAccessFile mode is
> "rw", fc.force(false) is called explicitly;
> otherwise the "rwd" mode already performs the synchronous content writes.
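
As a companion to the quoted description, the sketch below puts the flushAndSync behaviour it describes in one place. It is a simplified illustration, not the actual EditLogFileOutputStream code: the field names and the durable flag are assumptions made for this sketch.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

/** Simplified sketch of the behaviour described above, not the real JN code. */
class FlushAndSyncSketch {
  private final boolean shouldSyncWritesAndSkipFsync;
  private final RandomAccessFile rp;
  private final FileChannel fc;

  FlushAndSyncSketch(String name, boolean shouldSyncWritesAndSkipFsync)
      throws IOException {
    this.shouldSyncWritesAndSkipFsync = shouldSyncWritesAndSkipFsync;
    // The fix quoted above: "rwd" syncs file content on every write (metadata
    // is not forced); "rw" defers durability to the explicit force(false) below.
    this.rp = new RandomAccessFile(name,
        shouldSyncWritesAndSkipFsync ? "rwd" : "rw");
    this.fc = rp.getChannel();
  }

  void flushAndSync(boolean durable) throws IOException {
    // ... buffered edits would be written to rp here ...
    if (durable && !shouldSyncWritesAndSkipFsync) {
      // "rw" mode: request a content-only sync; metadata is not forced.
      fc.force(false);
    }
    // "rwd" mode: every write already reached storage, so no explicit call.
  }
}
{code}
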
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]