liubingxing opened a new pull request, #4295: URL: https://github.com/apache/hadoop/pull/4295
The SPS may misjudge a block in the following scenario:

1. Create a file with one block; the block has 3 replicas, all on **DISK** storage: [DISK, DISK, DISK].
2. Set the **ALL_SSD** storage policy on this file.
3. During **decommission**, the replicas of this block may become [DISK, DISK, **SSD**, DISK].
4. Set the **HOT** storage policy on the file and satisfy the storage policy on it.
5. After the decommissioned node goes offline, the replicas end up as [DISK, DISK, SSD] instead of [DISK, DISK, DISK].
6. The cause is that SPS gets the block's replica count from `FileStatus.getReplication()`, which returns the configured replication factor, not the real number of replicas of the block. The block is therefore skipped, because it already appears to have 3 replicas with DISK type (one of them on a decommissioning node).

I think we can use `blockInfo.getLocations().length` to count the replicas of the block instead of `FileStatus.getReplication()`.
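To make the mismatch in step 6 concrete, here is a minimal, self-contained sketch (the `StorageType` enum and the counting logic are stand-ins, not the actual HDFS classes): during decommission the block really has 4 locations, but the configured replication factor still reads 3, so a check based on the configured factor sees "3 DISK replicas >= 3 expected" and skips the block.

```java
import java.util.Arrays;
import java.util.List;

public class SpsReplicationSketch {
    // Hypothetical stand-in for org.apache.hadoop.fs.StorageType.
    enum StorageType { DISK, SSD }

    // What FileStatus.getReplication() would return: the configured
    // replication factor of the file, not the live replica count.
    static final short CONFIGURED_REPLICATION = 3;

    public static void main(String[] args) {
        // Step 3 of the scenario: decommissioning adds an extra replica,
        // so the block's actual locations are [DISK, DISK, SSD, DISK].
        List<StorageType> locations = Arrays.asList(
                StorageType.DISK, StorageType.DISK,
                StorageType.SSD, StorageType.DISK);

        long diskReplicas = locations.stream()
                .filter(t -> t == StorageType.DISK).count();

        // Using the configured factor, SPS sees 3 DISK replicas against an
        // expected count of 3 and skips the block, leaving the SSD replica
        // in place; counting locations.size() (4) instead would not.
        System.out.println("configured=" + CONFIGURED_REPLICATION
                + " actual=" + locations.size()
                + " diskReplicas=" + diskReplicas);
        // prints: configured=3 actual=4 diskReplicas=3
    }
}
```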
