iwasakims opened a new pull request #2475: URL: https://github.com/apache/hadoop/pull/2475
https://issues.apache.org/jira/browse/HDFS-15672

The setup of testBalancingBlockpoolsWithBlockPoolPolicy is:

* 2 NameNodes (2 namespaces)
* 4 DataNodes (500 bytes capacity per node)
* blocksize = 100, replication factor = 2
* creating a 300-byte file on both namespaces (6 blocks total)
* adding 2 DataNodes (500 bytes capacity per node)
* running the balancer

If one of the DataNodes is chosen for all 6 blocks, no free space is available on it. The error causes a retry of block creation.

```
2020-11-18 06:01:36,648 [DataXceiver for client DFSClient_NONMAPREDUCE_1983766438_12 at /127.0.0.1:46158 [Receiving block BP-631108559-172.31.197.233-1605679293748:blk_1073741827_1003]] ERROR datanode.DataNode (DataXceiver.java:run(324)) - host1.foo.com:43495:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46158 dst: /127.0.0.1:43495 java.io.IOException: Creating block, no free space available
```

The garbage left by the failed block creation breaks the assertion about total used space.

```
2020-11-18 06:01:37,361 [Listener at localhost/43281] INFO balancer.Balancer (TestBalancerWithMultipleNameNodes.java:runBalancer(172)) - BALANCER 0: totalUsed=1200, totalCapacity=3000, avg=40.0
2020-11-18 06:01:37,361 [Listener at localhost/43281] INFO balancer.Balancer (TestBalancerWithMultipleNameNodes.java:wait(151)) - WAIT expectedUsedSpace=1200, expectedTotalSpace=3000
...(snip)
2020-11-18 06:01:47,372 [Listener at localhost/43281] WARN balancer.Balancer (TestBalancerWithMultipleNameNodes.java:wait(161)) - WAIT i=100, s=[3000, 1300]
```
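The arithmetic behind the failure can be sketched from the settings listed above (a minimal sketch in Python; the constant names are illustrative and not part of the Hadoop test code):

```python
# Back-of-the-envelope check of the test arithmetic described above.
# Constants mirror the test settings; names are illustrative, not Hadoop code.

BLOCK_SIZE = 100      # bytes
REPLICATION = 2
FILE_SIZE = 300       # bytes, one file per namespace
NAMESPACES = 2
DN_CAPACITY = 500     # bytes per DataNode
INITIAL_DNS = 4
ADDED_DNS = 2

blocks_per_file = FILE_SIZE // BLOCK_SIZE        # 3 blocks per file
total_blocks = blocks_per_file * NAMESPACES      # 6 blocks across both namespaces

# Matches "BALANCER 0: totalUsed=1200, totalCapacity=3000, avg=40.0" in the log.
total_used = total_blocks * BLOCK_SIZE * REPLICATION
total_capacity = (INITIAL_DNS + ADDED_DNS) * DN_CAPACITY
avg_utilization = 100.0 * total_used / total_capacity

# If a single DataNode receives one replica of every block, it needs
# 6 * 100 = 600 bytes, which exceeds its 500-byte capacity -> the
# "no free space available" IOException and the retry described above.
bytes_on_unlucky_dn = total_blocks * BLOCK_SIZE

print(total_used, total_capacity, avg_utilization,
      bytes_on_unlucky_dn > DN_CAPACITY)
```

This shows why the expected totals are 1200/3000 (40%) and why placing all 6 replica-holding blocks on one 500-byte node is impossible.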
