ChenSammi commented on a change in pull request #904: HDDS-1637. Fix random
test failure TestSCMContainerPlacementRackAware.
URL: https://github.com/apache/hadoop/pull/904#discussion_r290585240
##########
File path:
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
##########
@@ -82,7 +82,7 @@ public void setup() {
when(nodeManager.getNodeStat(anyObject()))
.thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 0L, 100L));
when(nodeManager.getNodeStat(datanodes.get(2)))
- .thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 90L, 10L));
+ .thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 90L, 20L));
Review comment:
@xiaoyuyao, currently I favor option 2. The chooseRandom node logic in
NetworkTopologyImpl now only has topology information. It cannot decide
which node has enough capacity, since it has no knowledge of free disk
space so far. So I will go with solution 2 for now.
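To illustrate the point above, here is a minimal standalone sketch (not the actual Ozone classes) of solution 2: the topology-only chooser picks a node at random, and the capacity check happens in the caller, which is the layer that actually knows free disk space. The class name `CapacityAwareChooser`, the `freeSpace` map, and the `maxRetries` bound are all hypothetical stand-ins for `NetworkTopologyImpl#chooseRandom` and `NodeManager#getNodeStat`.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Simplified model: the chooser itself knows only topology (the node list),
// while free-space knowledge lives outside it, as in NetworkTopologyImpl.
public class CapacityAwareChooser {
  // Hypothetical per-node free-space map; in Ozone this information would
  // come from NodeManager#getNodeStat, which chooseRandom cannot see.
  private final Map<String, Long> freeSpace;
  private final List<String> nodes;
  private final Random random;

  public CapacityAwareChooser(Map<String, Long> freeSpace, long seed) {
    this.freeSpace = freeSpace;
    this.nodes = new ArrayList<>(freeSpace.keySet());
    this.random = new Random(seed);
  }

  // Stands in for NetworkTopologyImpl#chooseRandom: topology only,
  // no notion of capacity.
  private String chooseRandom() {
    return nodes.get(random.nextInt(nodes.size()));
  }

  // Caller-side capacity filter (solution 2): retry the topology-level
  // random choice until a node with enough free space is found.
  public String chooseWithCapacity(long required, int maxRetries) {
    for (int i = 0; i < maxRetries; i++) {
      String node = chooseRandom();
      if (freeSpace.getOrDefault(node, 0L) >= required) {
        return node;
      }
    }
    throw new IllegalStateException("no node with enough free space");
  }

  public static void main(String[] args) {
    Map<String, Long> free = new LinkedHashMap<>();
    free.put("dn0", 100L);
    free.put("dn1", 100L);
    free.put("dn2", 10L); // nearly full, like datanodes.get(2) in the test
    CapacityAwareChooser chooser = new CapacityAwareChooser(free, 42L);
    System.out.println("picked=" + chooser.chooseWithCapacity(50L, 100));
  }
}
```

The retry loop mirrors why the test stub matters: with the stubbed remaining space raised from 10L to 20L, the near-full datanode no longer fails the caller-side capacity filter intermittently.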