[
https://issues.apache.org/jira/browse/HDFS-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18060968#comment-18060968
]
ASF GitHub Bot commented on HDFS-17886:
---------------------------------------
hadoop-yetus commented on PR #8277:
URL: https://github.com/apache/hadoop/pull/8277#issuecomment-3957605844
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 59s | | trunk passed |
| +1 :green_heart: | compile | 1m 48s | | trunk passed with JDK Ubuntu-21.0.10+7-Ubuntu-124.04 |
| +1 :green_heart: | compile | 1m 48s | | trunk passed with JDK Ubuntu-17.0.18+8-Ubuntu-124.04.1 |
| +1 :green_heart: | checkstyle | 1m 50s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 55s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Ubuntu-21.0.10+7-Ubuntu-124.04 |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-17.0.18+8-Ubuntu-124.04.1 |
| +1 :green_heart: | spotbugs | 4m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 31m 9s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 21s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-21.0.10+7-Ubuntu-124.04 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-17.0.18+8-Ubuntu-124.04.1 |
| +1 :green_heart: | javac | 1m 17s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 13s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8277/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 13 unchanged - 0 fixed = 19 total (was 13) |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-21.0.10+7-Ubuntu-124.04 |
| +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-17.0.18+8-Ubuntu-124.04.1 |
| +1 :green_heart: | spotbugs | 3m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 30m 0s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 217m 29s | | hadoop-hdfs in the patch passed. |
| -1 :x: | asflicense | 0m 49s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8277/1/artifact/out/results-asflicense.txt) | The patch generated 1 ASF License warnings. |
| | | 349m 8s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.53 ServerAPI=1.53 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8277/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/8277 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux a3acc2c9fe43 5.15.0-164-generic #174-Ubuntu SMP Fri Nov 14 20:25:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 69962a69e7f4981d8aa1ccf47bdfe0795637a714 |
| Default Java | Ubuntu-17.0.18+8-Ubuntu-124.04.1 |
| Multi-JDK versions | /usr/lib/jvm/java-21-openjdk-amd64:Ubuntu-21.0.10+7-Ubuntu-124.04 /usr/lib/jvm/java-17-openjdk-amd64:Ubuntu-17.0.18+8-Ubuntu-124.04.1 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8277/1/testReport/ |
| Max. process+thread count | 3847 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8277/1/console |
| versions | git=2.43.0 maven=3.9.11 spotbugs=4.9.7 |
| Powered by | Apache Yetus 0.14.1 https://yetus.apache.org |
This message was automatically generated.
> Fix namenode storageDirectory errors when doCheckpoint updateStorageVersion fails because the doCheckpoint thread is interrupted during standby-to-active HA failover
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-17886
> URL: https://issues.apache.org/jira/browse/HDFS-17886
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: liuguanghua
> Assignee: liuguanghua
> Priority: Major
>
> When an HA failover occurs and the standby namenode transitions to active, it interrupts the doCheckpoint thread. With an extremely small probability, updateStorageVersion() in doCheckpoint then throws java.nio.channels.ClosedByInterruptException, which marks the storage directory as failed and removes it from the available list.
>
> The relevant error log is as follows:
> 2026-01-29 20:13:38,234 WARN org.apache.hadoop.hdfs.server.common.Storage: Error during write properties to the VERSION file to Storage Directory root= /data/hadoop/hdfs/namenode; location= null
> java.nio.channels.ClosedByInterruptException
>     at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
>     at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:162)
>     at java.base/sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:342)
>     at org.apache.hadoop.hdfs.server.common.Storage.writeProperties(Storage.java:1284)
>     at org.apache.hadoop.hdfs.server.common.Storage.writeProperties(Storage.java:1263)
>     at org.apache.hadoop.hdfs.server.common.Storage.writeProperties(Storage.java:1254)
>     at org.apache.hadoop.hdfs.server.namenode.NNStorage.writeAll(NNStorage.java:1169)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.updateStorageVersion(FSImage.java:1106)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1165)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:227)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.access$1300(StandbyCheckpointer.java:64)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.doWork(StandbyCheckpointer.java:480)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.access$600(StandbyCheckpointer.java:383)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread$1.run(StandbyCheckpointer.java:403)
>     at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:503)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.run(StandbyCheckpointer.java:399)
> 2026-01-29 20:13:38,238 ERROR org.apache.hadoop.hdfs.server.common.Storage: Error reported on storage directory Storage Directory root= /data/hadoop/hdfs/namenode; location= null
> 2026-01-29 20:13:38,238 WARN org.apache.hadoop.hdfs.server.common.Storage: About to remove corresponding storage: /data/hadoop/hdfs/namenode
> 2026-01-29 20:13:38,245 ERROR org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Exception in doCheckpoint
> java.io.IOException: All the storage failed while writing properties to VERSION file
>     at org.apache.hadoop.hdfs.server.namenode.NNStorage.writeAll(NNStorage.java:1175)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.updateStorageVersion(FSImage.java:1106)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1165)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:227)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.access$1300(StandbyCheckpointer.java:64)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.doWork(StandbyCheckpointer.java:480)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.access$600(StandbyCheckpointer.java:383)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread$1.run(StandbyCheckpointer.java:403)
>     at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:503)
>     at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.run(StandbyCheckpointer.java:399)
>
> Since java.nio.channels.ClosedByInterruptException does not indicate a disk error, the storage directory should not be removed from the available list (a sketch of such a guard follows below).
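
For context, here is a minimal, hypothetical Java sketch of the kind of guard the issue argues for: distinguishing an interrupt-driven ClosedByInterruptException from a genuine disk error when persisting a VERSION-style properties file. This is not the actual patch in PR #8277; the class and exception names (VersionFileWriter, InterruptedWriteException) are illustrative assumptions, not Hadoop APIs.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class VersionFileWriter {

  /** Signals that the write failed only because the thread was interrupted. */
  public static class InterruptedWriteException extends IOException {
    InterruptedWriteException(ClosedByInterruptException cause) {
      super("VERSION write interrupted; not a disk failure", cause);
    }
  }

  /**
   * Writes the given properties to the file. FileChannel operations are
   * interruptible, so if the calling thread is interrupted (for example, the
   * standby checkpointer thread during an HA failover) the channel throws
   * ClosedByInterruptException. That case is re-thrown as
   * InterruptedWriteException so callers can skip the "mark storage directory
   * failed" handling reserved for real disk errors.
   */
  public static void writeProperties(File file, Properties props) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(file, "rws");
         FileChannel channel = raf.getChannel()) {
      StringBuilder sb = new StringBuilder();
      for (String name : props.stringPropertyNames()) {
        sb.append(name).append('=').append(props.getProperty(name)).append('\n');
      }
      channel.write(ByteBuffer.wrap(sb.toString().getBytes(StandardCharsets.UTF_8)));
      channel.truncate(channel.position()); // drop any stale tail from a previous, longer file
    } catch (ClosedByInterruptException e) {
      Thread.currentThread().interrupt(); // re-assert the interrupt status for the caller
      throw new InterruptedWriteException(e);
    }
  }
}
```

A checkpoint loop built on this could catch InterruptedWriteException and abort the checkpoint cleanly, reserving the storage-failure path (which removes the directory from the available list) for other IOExceptions that do indicate disk problems.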