KevinWikant commented on PR #7179: URL: https://github.com/apache/hadoop/pull/7179#issuecomment-2561177582
## Javadoc Failure

I have fixed the last javadoc warning:

```
[ERROR] /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-7179/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:1775: error: bad HTML entity
[ERROR]  * reduction in HDFS write failures & HDFS data loss.
```

I am probably missing something here, but when I previously ran javadoc locally I did not see this error:

```
> pwd
.../hadoop-hdfs-project/hadoop-hdfs
> mvn javadoc:javadoc
...
[INFO] --- javadoc:3.0.1:javadoc (default-cli) @ hadoop-hdfs ---
[INFO] ExcludePrivateAnnotationsStandardDoclet
1 warning
[WARNING] Javadoc Warnings
[WARNING] .../hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java:386: warning - Tag @link: can't find checkPermissionWithContext(AuthorizationContext) in org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 22.182 s
[INFO] Finished at: 2024-12-21T17:05:06-05:00
[INFO] ------------------------------------------------------------------------
```

## Unit Test Failure - testDecommissionWithUCBlocksFeatureDisabledAndDefaultMonitor

I ran the following command 100 times in a loop:

```
mvn -Dtest=TestDecommission#testDecommissionWithUCBlocksFeatureDisabledAndDefaultMonitor test -o
```

I then ran the following command 50 times in a loop:

```
mvn -Dtest=TestDecommission test -o
```

I did not see any failures for `testDecommissionWithUCBlocksFeatureDisabledAndDefaultMonitor`.

I suspect the `testDecommissionWithUCBlocksFeatureDisabledAndDefaultMonitor` failure (when executed by Yetus) may be due to a timing condition which only reproduces on the Yetus test runner (such as a JVM pause, thread scheduling, or similar).
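For reproducibility, the repeated runs described above can be scripted with a small helper. This is just a sketch; `run_repeatedly` is a hypothetical function name, and the `mvn` invocation shown in the comment is the command it would wrap:

```shell
# Sketch: run any command N times, stopping at the first non-zero exit.
# This mirrors the manual "run the test 100 times in a loop" approach.
run_repeatedly() {
  local count="$1"
  shift
  local i
  for i in $(seq 1 "$count"); do
    if ! "$@"; then
      echo "failed on iteration $i" >&2
      return 1
    fi
  done
  echo "all $count iterations passed"
}

# Example (the mvn command from above):
# run_repeatedly 100 mvn -Dtest=TestDecommission#testDecommissionWithUCBlocksFeatureDisabledAndDefaultMonitor test -o
```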
Proposing that we do not block this change on this potentially flaky test:
- The `testDecommissionWithUCBlocksFeatureDisabledAndDefaultMonitor` test shows that when `dfs.namenode.decommission.track.underconstructionblocks = false`, there will be HDFS write failures & HDFS data loss in the majority of cases.
- If the test flakiness is not a test issue (e.g. a MiniDFSCluster issue), then it would only mean that sporadically there might not be HDFS write failures & HDFS data loss (which is technically a good thing).

Also, in about 1 in 5 test runs I was seeing failures in `testDecommissionWithUCBlocksFeatureEnabledAndBackoffMonitor`. I have root-caused this & made some minor changes in the latest commit.
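For anyone reproducing the disabled-feature behavior locally, the flag discussed above would be set in `hdfs-site.xml` roughly as follows (a sketch assuming the standard Hadoop configuration convention; the property name is the one introduced by this PR, and `false` is the disabled setting exercised by the test):

```xml
<!-- hdfs-site.xml: disable tracking of under-construction blocks
     during DataNode decommission (the behavior the test exercises) -->
<property>
  <name>dfs.namenode.decommission.track.underconstructionblocks</name>
  <value>false</value>
</property>
```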
