[ https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wu Weiwei updated HDFS-14450:
-----------------------------
    Attachment: HDFS-14450-000.patch

> Erasure Coding: decommissioning datanodes causes replication of a large number of duplicate EC internal blocks
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14450
>                 URL: https://issues.apache.org/jira/browse/HDFS-14450
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Wu Weiwei
>            Assignee: Wu Weiwei
>            Priority: Major
>         Attachments: HDFS-14450-000.patch
>
>
> {code:java}
> // [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in
> need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE],
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK],
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All
> required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE],
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK],
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning many datanodes at once causes EC block groups to replicate a large number of duplicate internal blocks.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
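To illustrate the failure mode the issue describes, the following is a simplified sketch, not actual HDFS code: for a striped EC block group, only internal-block indices whose every copy sits on a decommissioning node need re-replication; copying indices that already have a copy on a healthy node is what produces duplicates. The class, method, and node names here are hypothetical.

```java
import java.util.*;

/**
 * Simplified model (not the actual HDFS RedundancyMonitor logic) of deciding
 * which EC internal blocks need re-replication during decommissioning.
 */
public class EcDecommissionSketch {

    /**
     * Returns the internal-block indices that live ONLY on decommissioning
     * nodes. Indices that also have a copy on a healthy node need no work;
     * replicating them again would create duplicate internal blocks.
     */
    static Set<Integer> indicesNeedingReplication(
            Map<Integer, List<String>> indexToNodes, // internal block index -> nodes holding it
            Set<String> decommissioning) {
        Set<Integer> needed = new TreeSet<>();
        for (Map.Entry<Integer, List<String>> e : indexToNodes.entrySet()) {
            boolean hasHealthyCopy = false;
            for (String node : e.getValue()) {
                if (!decommissioning.contains(node)) {
                    hasHealthyCopy = true;
                    break;
                }
            }
            if (!hasHealthyCopy) {
                needed.add(e.getKey());
            }
        }
        return needed;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> layout = new HashMap<>();
        layout.put(0, Arrays.asList("dn1"));        // only on a decommissioning node
        layout.put(1, Arrays.asList("dn1", "dn2")); // also has a healthy copy
        layout.put(2, Arrays.asList("dn3"));        // healthy
        Set<String> decommissioning = new HashSet<>(Arrays.asList("dn1"));
        // Only index 0 actually needs a new copy; a scheduler that also
        // re-replicates index 1 exhibits the duplication reported here.
        System.out.println(indicesNeedingReplication(layout, decommissioning));
    }
}
```

Under this model, a decommission path that re-replicates every index held by a decommissioning node, rather than only the indices with no healthy copy, would generate the large numbers of duplicate internal blocks reported in this issue.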