[
https://issues.apache.org/jira/browse/YARN-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17789931#comment-17789931
]
ASF GitHub Bot commented on YARN-11614:
---------------------------------------
Carol7102 opened a new pull request, #6299:
URL: https://github.com/apache/hadoop/pull/6299
### Description of PR
#### Description
Flaky tests are a common occurrence in open-source projects: they yield
inconsistent results, sometimes passing and sometimes failing, without any
code changes. NonDex is a tool for detecting and debugging wrong assumptions
on under-determined Java APIs. I have resolved a flaky test using the NonDex
tool, specifically testGetPrediction() in
`hadoop/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java`.
#### Root cause
The root cause of the flakiness is that each time we send an HTTP GET
request to the web resource, the response may differ between runs, for
example because of dynamic content. In this test, the memory size and
virtual cores may differ when the tick is 10 or 15, but the original test
asserted a hardcoded number of virtual cores for each memory size, which
made the test flaky.
#### Fix
The test has been fixed by calling the getVirtualCores() API to obtain the
actual number of virtual cores instead of hardcoding a constant.
This fix matters because it eliminates the nondeterminism introduced by the
HTTP GET request, ensuring that the test passes consistently across
different runs. By doing so, we have improved the reliability and stability
of the test suite.
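The fix pattern can be sketched as follows. This is an illustrative sketch, not the actual patch: `SimpleResource` is a hypothetical stand-in for the resource object parsed from the estimator's REST response, and the memory/vcore numbers are made up. The point is the shape of the assertion: read the actual value via `getVirtualCores()` instead of hardcoding it.

```java
// Hedged sketch of the fix pattern; SimpleResource is a hypothetical
// stand-in for the Resource parsed from the estimator's REST response.
public class PredictionAssertionSketch {

    static final class SimpleResource {
        private final long memorySize;
        private final int virtualCores;

        SimpleResource(long memorySize, int virtualCores) {
            this.memorySize = memorySize;
            this.virtualCores = virtualCores;
        }

        long getMemorySize() { return memorySize; }
        int getVirtualCores() { return virtualCores; }
    }

    // Before (flaky): hardcodes the expected vcores for a given memory
    // size, even though the service may legitimately return a different
    // container shape from one run to the next.
    static boolean flakyCheck(SimpleResource prediction) {
        return prediction.getVirtualCores() == 2; // fails when vcores differ
    }

    // After (stable): read the actual vcores from the response and verify
    // an invariant that holds regardless of which shape was returned.
    static boolean stableCheck(SimpleResource prediction) {
        int vcores = prediction.getVirtualCores();
        return vcores > 0 && prediction.getMemorySize() > 0;
    }

    public static void main(String[] args) {
        // Two responses that are both legitimate outputs of the service.
        SimpleResource runA = new SimpleResource(1024, 2);
        SimpleResource runB = new SimpleResource(1024, 4);

        System.out.println("flaky on runB: " + flakyCheck(runB));
        System.out.println("stable on runA: " + stableCheck(runA));
        System.out.println("stable on runB: " + stableCheck(runB));
    }
}
```

The brittle check passes only when the service happens to return the one shape the author observed; the stable check holds for every valid response, which is why NonDex can no longer shake the test into failure.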
### How was this patch tested?
Java: openjdk version "11.0.20.1"
Maven: Apache Maven 3.6.3
1. Compile the module
`mvn install -pl hadoop-tools/hadoop-resourceestimator -am -DskipTests`
2. Run regular tests
`mvn -pl hadoop-tools/hadoop-resourceestimator test
-Dtest=org.apache.hadoop.resourceestimator.service.TestResourceEstimatorService#testGetPrediction`
3. Run tests with NonDex tool
`mvn -pl hadoop-tools/hadoop-resourceestimator
edu.illinois:nondex-maven-plugin:2.1.1:nondex
-Dtest=org.apache.hadoop.resourceestimator.service.TestResourceEstimatorService#testGetPrediction`
After the fix, all tests pass.
### For code changes:
- [x] Does the title or this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the
endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies
licensed in a way that is compatible for inclusion under [ASF
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`,
`NOTICE-binary` files?
> [Federation] Add Federation PolicyManager Validation Rules
> ----------------------------------------------------------
>
> Key: YARN-11614
> URL: https://issues.apache.org/jira/browse/YARN-11614
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: federation
> Affects Versions: 3.4.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> When entering queue weights in Federation, it is essential to enhance the
> validation rules. If a policy manager does not support weights, a prompt
> should be provided to the user.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]