[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15745858#comment-15745858 ]

Taklon Stephen Wu commented on HDFS-8718:
-----------------------------------------

+1

> Block replicating cannot work after upgrading to 2.7
> ----------------------------------------------------
>
> Key: HDFS-8718
> URL: https://issues.apache.org/jira/browse/HDFS-8718
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Bing Jiang
>
> Decommission a datanode from Hadoop, and HDFS reports the correct number
> of blocks to be replicated on the web UI.
> {code}
> Decomissioning
> Node  Last contact  Under replicated blocks  Blocks with no live replicas  Under Replicated Blocks In files under construction
> TS-BHTEST-03:50010 (172.22.49.3:50010)  25641  0  0
> {code}
> From the NN's log, the block replication work cannot proceed due to an
> inconsistent expected storage type.
> {code}
> Node /default/rack_02/172.22.49.5:50010 [
> Storage [DISK]DS-3915533b-4ae4-4806-bf83caf1446f1e2f:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-3e54c331-3eaf-4447-b5e4-9bf91bc71b17:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-d44fa611-aa73-4415-a2de-7e73c9c5ea68:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-cebbf410-06a0-4171-a9bd-d0db55dad6d3:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-4c50b1c7-eaad-4858-b476-99dec17d68b5:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-f6cf9123-4125-4234-8e21-34b12170e576:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-7601b634-1761-45cc-9ffd-73ee8687c2a7:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-1d4b91ab-fe2f-4d5f-bd0a-57e9a0714654:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-cd2279cf-9c5a-4380-8c41-7681fa688eaf:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-630c734f-334a-466d-9649-4818d6e91181:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> Storage [DISK]DS-31cd0d68-5f7c-4a0a-91e6-afa53c4df820:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> ]
> 2015-07-07 16:00:22,032 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2015-07-07 16:00:22,032 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> We upgraded the Hadoop cluster from 2.5 to 2.7.0 previously. I believe the
> ARCHIVE storage feature is now enforced, but what happens to a block's
> storage type after upgrading?
> The default BlockStoragePolicy is HOT, and I suspect those blocks do not
> carry the correct BlockStoragePolicy information, so they cannot be
> handled properly.
> After I shut down the datanode, the under-replicated blocks were copied as
> expected, so the workaround is to shut down the datanode.
> Could anyone take a look at this issue?

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
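The quoted WARN lines can be illustrated with a minimal, self-contained sketch (illustrative names, not Hadoop's actual BlockStoragePolicy class) of how a HOT policy selects storage types: each needed replica gets DISK if available, else the ARCHIVE replication fallback, and once both appear in the unavailable set, "only 0 storage types can be selected" and replication stalls:

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

/** Minimal sketch of HOT-policy storage-type selection (hypothetical API). */
public class HotPolicySketch {

    public enum StorageType { DISK, ARCHIVE }

    /**
     * For each replica still needed, pick DISK if it is not marked
     * unavailable, else fall back to ARCHIVE (the HOT policy's
     * replicationFallback). If both are unavailable, the slot stays unfilled.
     */
    public static List<StorageType> chooseStorageTypes(
            int replication, int alreadyChosen, Set<StorageType> unavailable) {
        List<StorageType> selected = new ArrayList<>();
        for (int i = 0; i < replication - alreadyChosen; i++) {
            if (!unavailable.contains(StorageType.DISK)) {
                selected.add(StorageType.DISK);
            } else if (!unavailable.contains(StorageType.ARCHIVE)) {
                selected.add(StorageType.ARCHIVE);
            }
            // both unavailable: nothing can be selected for this slot
        }
        return selected;
    }

    public static void main(String[] args) {
        // Mirrors the log: replication=3, 2 replicas already placed,
        // unavailable=[DISK, ARCHIVE] -> 0 storage types selected.
        List<StorageType> s = chooseStorageTypes(
                3, 2, EnumSet.of(StorageType.DISK, StorageType.ARCHIVE));
        System.out.println("selected=" + s.size()); // selected=0
    }
}
```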
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303438#comment-15303438 ]

He Xiaoqiao commented on HDFS-8718:
-----------------------------------

Hi [~jianbginglover] and [~kanaka], ReplicationMonitor was stuck for a long time because of the *Global Lock*, and this caused block replication to not work as expected. I have created a new issue, [HDFS-10453|https://issues.apache.org/jira/browse/HDFS-10453], to describe this problem in detail, and uploaded a patch with a solution.
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268067#comment-15268067 ]

He Xiaoqiao commented on HDFS-8718:
-----------------------------------

I met the same problem after upgrading the cluster to 2.7.1, and I have never configured an ARCHIVE storage policy.
{code:xml}
2016-04-19 10:20:48,083 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
...
2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 7 but only 0 storage types can be selected (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-04-19 10:21:17,184 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 7 to reach 10 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
{code}
The NN log above covers ReplicationMonitor processing a single {{ReplicationWork}}. It shows that the {{ReplicationWork}} cannot choose any proper target whose StorageType is DISK, even after traversing all DNs in the cluster. {{DISK}} is therefore added to {{unavailableStorages}}; on the next loop {{ARCHIVE}} is added to {{unavailableStorages}} as well, because there is no ARCHIVE storage, and after that a NotEnoughReplicasException is thrown. The core question is: *WHY can it NOT choose any proper datanode as a target in {{ReplicationWork}}, even though there are thousands of DNs in the cluster?*
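The retry behavior described above, where each failed pass adds the tried storage type to {{unavailableStorages}} until every type is excluded, can be sketched as a simplified simulation (hypothetical names, not the real BlockPlacementPolicyDefault):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.EnumSet;
import java.util.Set;

/** Simplified simulation of the target-selection retry loop (illustrative names). */
public class RetryLoopSketch {

    public enum StorageType { DISK, ARCHIVE }

    /** Thrown when no datanode with the required storage type can be found. */
    static class NotEnoughReplicasException extends Exception {}

    /** Stand-in for scanning the cluster; here no target ever qualifies. */
    static void chooseRandom(StorageType required) throws NotEnoughReplicasException {
        throw new NotEnoughReplicasException();
    }

    /**
     * Try DISK first (HOT policy), then the ARCHIVE replication fallback.
     * After each failure the tried type joins unavailableStorages, so when
     * no target ever qualifies the loop ends with every type excluded.
     */
    public static Set<StorageType> chooseTarget() {
        Set<StorageType> unavailable = EnumSet.noneOf(StorageType.class);
        Deque<StorageType> toTry = new ArrayDeque<>();
        toTry.add(StorageType.DISK);     // required storage type
        toTry.add(StorageType.ARCHIVE);  // replicationFallback
        while (!toTry.isEmpty()) {
            StorageType t = toTry.poll();
            try {
                chooseRandom(t);
                return unavailable;      // success: stop retrying
            } catch (NotEnoughReplicasException e) {
                unavailable.add(t);      // exclude this type and retry
            }
        }
        return unavailable;              // [DISK, ARCHIVE]: replication stalls
    }

    public static void main(String[] args) {
        System.out.println("unavailableStorages=" + chooseTarget());
    }
}
```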
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618740#comment-14618740 ]

kanaka kumar avvaru commented on HDFS-8718:
-------------------------------------------

Hi [~jianbginglover], I think this log must be preceded by some other log message, which looks like:
{code}
Failed to place enough replicas, still in need of X to reach Y (unavailableStorages=[DISK, ARCHIVE], storagePolicy={HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true/false)
{code}
If possible, please share the NN logs, which may give a clue to the root cause. Also, please confirm whether the two machines {{172.22.49.3 and 172.22.49.5}} are in the same rack or not.
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619826#comment-14619826 ]

Bing Jiang commented on HDFS-8718:
----------------------------------

Meanwhile, there are some DEBUG logs:
{code}
2015-07-09 11:33:05,406 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose from local rack (location = /default/rack_03), retry with the rack of the next replica (location = /default/rack_02)
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:690)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:605)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:362)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:213)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:110)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:3718)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$200(BlockManager.java:3683)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1407)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1313)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3654)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3606)
        at java.lang.Thread.run(Thread.java:745)
{code}
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619778#comment-14619778 ]

Bing Jiang commented on HDFS-8718:
----------------------------------

[~kanaka] You are right. The two nodes are not in the same rack; in fact, 172.22.49.3 has been configured in a rack with no other nodes.
{code}
bh@TS-BHTEST-01 hadoop $ hdfs dfsadmin -printTopology
Rack: /default/rack_02
   172.22.49.2:50010 (TS-BHTEST-02)
   172.22.49.4:50010 (TS-BHTEST-04)
   172.22.49.5:50010 (TS-BHTEST-05)
   172.22.49.6:50010 (TS-BHTEST-06)
   172.22.49.7:50010 (TS-BHTEST-07)
Rack: /default/rack_03
   172.22.49.3:50010 (TS-BHTEST-03)
{code}
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616936#comment-14616936 ]

Arpit Agarwal commented on HDFS-8718:
-------------------------------------

Hi [~jianbginglover], do you have the ARCHIVE storage policy configured on any files/directories? Also, are any of your DataNodes configured with ARCHIVE storage?
[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7
[ https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617750#comment-14617750 ] Bing Jiang commented on HDFS-8718:
----------------------------------
No, I have upgraded from 2.5 and have not made any modifications to hdfs-site.xml.
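The two WARN lines in the NN log can be read as the HOT policy exhausting its fallbacks: the policy wants DISK, and once the DISK candidates are rejected it retries with its replication fallback, ARCHIVE, which no DataNode in this cluster provides. A minimal Python sketch of that selection behaviour (an illustration only, not Hadoop's actual BlockPlacementPolicy code; the function name and dict layout are made up):

```python
# Sketch of the fallback selection visible in the NN log. A policy has a
# preferred storage type plus replication fallbacks; when the preferred type
# is in the "unavailable" set, the fallback is demanded instead. (Hypothetical
# helper; Hadoop implements this inside BlockStoragePolicy.chooseStorageTypes.)

def choose_required_types(policy, needed, unavailable):
    """Return the storage types still required, applying replication fallbacks."""
    required = []
    for _ in range(needed):
        preferred = policy["storageTypes"][0]
        if preferred not in unavailable:
            required.append(preferred)
        else:
            # Preferred type was removed after a failed pass; fall back.
            fallbacks = [t for t in policy["replicationFallbacks"]
                         if t not in unavailable]
            if fallbacks:
                required.append(fallbacks[0])
            # else: nothing selectable -> "Failed to place enough replicas"
    return required

# Mirrors BlockStoragePolicy{HOT:7, storageTypes=[DISK],
# creationFallbacks=[], replicationFallbacks=[ARCHIVE]} from the log.
HOT = {"id": 7, "storageTypes": ["DISK"],
       "creationFallbacks": [], "replicationFallbacks": ["ARCHIVE"]}

print(choose_required_types(HOT, 1, {"DISK"}))             # ['ARCHIVE']
print(choose_required_types(HOT, 1, {"DISK", "ARCHIVE"}))  # []
```

Once DISK lands in the unavailable set, every DISK-only DataNode storage is rejected with "the required storage type is ARCHIVE"; when ARCHIVE is also unavailable, nothing can be selected, matching the "All required storage types are unavailable" message in the log.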