[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks
[ https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-14450:
-----------------------------------
    Resolution: Duplicate
        Status: Resolved  (was: Patch Available)

Close this one as a dup. Thanks [~ferhui] for confirmation. And thanks [~wuweiwei] for raising the issue.

> Erasure Coding: decommissioning datanodes cause replicate a large number of
> duplicate EC internal blocks
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-14450
>                 URL: https://issues.apache.org/jira/browse/HDFS-14450
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ec
>            Reporter: Wu Weiwei
>            Assignee: Wu Weiwei
>            Priority: Major
>         Attachments: HDFS-14450-000.patch
>
> {code:java}
> [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in
> need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE],
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK],
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All
> required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE],
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK],
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning datanodes at scale causes EC block
> groups to replicate a large number of duplicate internal blocks.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
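The reported behavior can be illustrated with a small sketch. This is not Hadoop code; the RS(6,3) layout and the replica index list below are assumptions chosen for illustration. The point is that an EC block group's redundancy depends on how many *distinct* internal block indices are live, so extra copies of the same internal block do not reduce what the RedundancyMonitor still needs to place:

```java
import java.util.stream.IntStream;

public class EcRedundancy {
    // In an EC block group, internal blocks are identified by index
    // (e.g. 0..8 for RS(6,3): 6 data + 3 parity). Fault tolerance is
    // determined by distinct live indices, not by total replica count.
    static long distinctLiveIndices(int[] replicaIndices) {
        return IntStream.of(replicaIndices).distinct().count();
    }

    public static void main(String[] args) {
        int totalInternal = 9; // assumed RS(6,3) block group
        // Hypothetical live replicas: 12 copies, but several are
        // duplicates of the same internal block index.
        int[] live = {0, 0, 1, 1, 2, 3, 4, 4, 5, 6, 6, 6};
        long distinct = distinctLiveIndices(live);
        // 12 live replicas, only 7 distinct indices: still 2 short,
        // mirroring the "still in need of 2" shape of the log above.
        System.out.println("live replicas=" + live.length
                + " distinct internal blocks=" + distinct
                + " still needed=" + (totalInternal - distinct));
    }
}
```

This is why mass decommissioning is wasteful here: scheduling more copies of already-duplicated internal blocks inflates replication traffic without closing the gap in distinct indices.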
[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks
Arpit Agarwal updated HDFS-14450:
---------------------------------
    Component/s: ec

(Quoted issue description and footer identical to the first message; trimmed.)
[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks
Wei-Chiu Chuang updated HDFS-14450:
-----------------------------------
    Status: Patch Available  (was: Open)

(Quoted issue description and footer identical to the first message; trimmed.)
[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks
Wu Weiwei updated HDFS-14450:
-----------------------------
    Attachment: HDFS-14450-000.patch

(Quoted issue description and footer identical to the first message; trimmed.)
[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks
Wu Weiwei updated HDFS-14450:
-----------------------------
    Summary: Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks  (was: Erasure Coding: decommissionning datanodes cause replicate a large number of duplicate EC internal blocks)

(Quoted issue description and footer identical to the first message; trimmed.)