[
https://issues.apache.org/jira/browse/HDFS-16456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
caozhiqiang updated HDFS-16456:
-------------------------------
Description:
In the scenario below, decommission will fail with the TOO_MANY_NODES_ON_RACK
reason:
# Enable an EC policy, such as RS-6-3-1024k.
# The number of racks in the cluster is equal to or less than the replication
number (9).
# A rack has only one DN, and this DN is decommissioned.
The root cause is in the
BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function, which
computes the limit parameter maxNodesPerRack used when choosing targets. In
this scenario, maxNodesPerRack is 1, which means only one datanode can be
chosen from each rack.
{code:java}
protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
...
// If more replicas than racks, evenly spread the replicas.
// This calculation rounds up.
int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
return new int[] {numOfReplicas, maxNodesPerRack};
} {code}
The line int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1; is
evaluated here with totalNumOfReplicas=9 and numOfRacks=9, so maxNodesPerRack
becomes 1.
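For concreteness, a minimal, self-contained sketch of the arithmetic with the
numbers from this scenario (the class name is only for illustration):
{code:java}
public class MaxNodesPerRackExample {
  public static void main(String[] args) {
    // Numbers from the scenario above: an RS-6-3 block group has 9 internal
    // blocks, and the cluster has 9 racks.
    int totalNumOfReplicas = 9;
    int numOfRacks = 9;
    // Integer division rounds down: (9 - 1) / 9 == 0, so the limit is 0 + 1 = 1.
    int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
    System.out.println(maxNodesPerRack); // prints 1
  }
}
{code}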
When we decommission a DN that is the only node in its rack, the first
chooseOnce() call in
BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() throws
NotEnoughReplicasException, but the exception is not caught, so the code never
falls back to the chooseEvenlyFromRemainingRacks() function.
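To make the control-flow issue concrete, here is a minimal, self-contained
sketch; chooseOnce() and chooseEvenlyFromRemainingRacks() below are stand-ins
with simplified signatures, not the real HDFS methods:
{code:java}
import java.util.ArrayList;
import java.util.List;

public class FallbackSketch {
  static class NotEnoughReplicasException extends Exception { }

  // Stand-in for chooseOnce(): with maxNodesPerRack == 1 and one rack unusable
  // (its only DN is decommissioning), it cannot place all targets and throws.
  static void chooseOnce(int numOfReplicas, int maxNodesPerRack,
      List<String> results) throws NotEnoughReplicasException {
    throw new NotEnoughReplicasException();
  }

  // Stand-in for chooseEvenlyFromRemainingRacks(): the fallback that spreads
  // the remaining replicas over the racks that still have usable nodes.
  static void chooseEvenlyFromRemainingRacks(int numOfReplicas,
      List<String> results) {
    results.add("fallback placement");
  }

  public static void main(String[] args) {
    List<String> results = new ArrayList<>();
    // Today the first chooseOnce() call is made outside any try/catch, so the
    // exception propagates and the fallback is never reached. Fix #2 below
    // proposes wrapping it like this:
    try {
      chooseOnce(9, 1, results);
    } catch (NotEnoughReplicasException e) {
      chooseEvenlyFromRemainingRacks(9, results);
    }
    System.out.println(results); // [fallback placement]
  }
}
{code}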
During decommission, after targets have been chosen, the verifyBlockPlacement()
function returns a total rack count that still includes the invalid rack, so
BlockPlacementStatusDefault::isPlacementPolicySatisfied() returns false, which
also causes the decommission to fail.
{code:java}
public BlockPlacementStatus verifyBlockPlacement(DatanodeInfo[] locs,
int numberOfReplicas) {
if (locs == null)
locs = DatanodeDescriptor.EMPTY_ARRAY;
if (!clusterMap.hasClusterEverBeenMultiRack()) {
// only one rack
return new BlockPlacementStatusDefault(1, 1, 1);
}
// Count locations on different racks.
Set<String> racks = new HashSet<>();
for (DatanodeInfo dn : locs) {
racks.add(dn.getNetworkLocation());
}
return new BlockPlacementStatusDefault(racks.size(), numberOfReplicas,
clusterMap.getNumOfRacks());
} {code}
{code:java}
public boolean isPlacementPolicySatisfied() {
return requiredRacks <= currentRacks || currentRacks >= totalRacks;
}{code}
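With the numbers from this scenario (assuming the rack whose only DN is being
decommissioned no longer contributes a valid location), the check evaluates to
false; a minimal sketch:
{code:java}
public class PlacementCheckSketch {
  public static void main(String[] args) {
    int currentRacks  = 8; // distinct racks among the chosen locations
    int requiredRacks = 9; // numberOfReplicas for RS-6-3 (6 data + 3 parity)
    int totalRacks    = 9; // clusterMap.getNumOfRacks() still counts the rack
                           // that only holds the decommissioning DN
    boolean satisfied =
        requiredRacks <= currentRacks || currentRacks >= totalRacks;
    System.out.println(satisfied); // false -> the block is reported misplaced
  }
}
{code}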
Based on the analysis above, the following changes should be made to fix it:
# In startDecommission() or stopDecommission(), the numOfRacks in class
NetworkTopology should also be updated. Otherwise choosing targets may fail
because maxNodesPerRack is too small, and even if targets are chosen
successfully, isPlacementPolicySatisfied() will still return false and the
decommission will fail.
# In BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder(), the first
chooseOnce() call should also be wrapped in try...catch (as illustrated in the
FallbackSketch above); otherwise it cannot fall back to
chooseEvenlyFromRemainingRacks() when the exception is thrown.
# In verifyBlockPlacement(), invalid racks need to be removed from the total
numOfRacks (see the sketch after this list); otherwise
isPlacementPolicySatisfied() will return false and data reconstruction will
fail.
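For the third change, one possible shape, shown as a minimal, self-contained
sketch (the Dn class and the effectiveRackCount() helper are hypothetical
stand-ins, not HDFS APIs): a rack counts towards the total only if it still has
at least one node that is not being decommissioned.
{code:java}
import java.util.HashMap;
import java.util.Map;

public class EffectiveRackCountSketch {
  // Hypothetical stand-in for a datanode: only the fields this sketch needs.
  static class Dn {
    final String rack;
    final boolean decommissioning;
    Dn(String rack, boolean decommissioning) {
      this.rack = rack;
      this.decommissioning = decommissioning;
    }
  }

  // A rack is counted only if at least one of its nodes is still usable.
  static int effectiveRackCount(Dn[] nodes) {
    Map<String, Boolean> rackHasUsableNode = new HashMap<>();
    for (Dn dn : nodes) {
      rackHasUsableNode.merge(dn.rack, !dn.decommissioning, Boolean::logicalOr);
    }
    return (int) rackHasUsableNode.values().stream().filter(b -> b).count();
  }

  public static void main(String[] args) {
    // 9 racks; /rack9 holds a single DN that is being decommissioned.
    Dn[] nodes = new Dn[9];
    for (int i = 0; i < 8; i++) {
      nodes[i] = new Dn("/rack" + (i + 1), false);
    }
    nodes[8] = new Dn("/rack9", true);
    System.out.println(effectiveRackCount(nodes)); // 8, not 9
  }
}
{code}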
was:
In the scenario below, decommission will fail with the TOO_MANY_NODES_ON_RACK
reason:
# Enable an EC policy, such as RS-6-3-1024k.
# The number of racks in the cluster is equal to or less than the replication
number (9).
# A rack has only one DN, and this DN is decommissioned.
The root cause is in the
BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function, which
computes the limit parameter maxNodesPerRack used when choosing targets. In
this scenario, maxNodesPerRack is 1, which means only one datanode can be
chosen from each rack.
{code:java}
protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
...
// If more replicas than racks, evenly spread the replicas.
// This calculation rounds up.
int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
return new int[] {numOfReplicas, maxNodesPerRack};
} {code}
The line int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1; is
evaluated here with totalNumOfReplicas=9 and numOfRacks=9, so maxNodesPerRack
becomes 1.
When we decommission a DN that is the only node in its rack, the first
chooseOnce() call in
BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() throws
NotEnoughReplicasException, but the exception is not caught, so the code never
falls back to the chooseEvenlyFromRemainingRacks() function.
During decommission, after targets have been chosen, the verifyBlockPlacement()
function returns a total rack count that still includes the invalid rack, so
BlockPlacementStatusDefault::isPlacementPolicySatisfied() returns false, which
also causes the decommission to fail.
{code:java}
public BlockPlacementStatus verifyBlockPlacement(DatanodeInfo[] locs,
int numberOfReplicas) {
if (locs == null)
locs = DatanodeDescriptor.EMPTY_ARRAY;
if (!clusterMap.hasClusterEverBeenMultiRack()) {
// only one rack
return new BlockPlacementStatusDefault(1, 1, 1);
}
// Count locations on different racks.
Set<String> racks = new HashSet<>();
for (DatanodeInfo dn : locs) {
racks.add(dn.getNetworkLocation());
}
return new BlockPlacementStatusDefault(racks.size(), numberOfReplicas,
clusterMap.getNumOfRacks());
} {code}
{code:java}
public boolean isPlacementPolicySatisfied() {
return requiredRacks <= currentRacks || currentRacks >= totalRacks;
}{code}
Based on the analysis above, the following changes should be made to fix it:
# In startDecommission() or stopDecommission(), the numOfRacks in class
NetworkTopology should also be updated. Otherwise choosing targets may fail
because maxNodesPerRack is too small, and even if targets are chosen
successfully, isPlacementPolicySatisfied() will still return false and the
decommission will fail.
# In BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder(), the first
chooseOnce() call should also be wrapped in try...catch; otherwise it cannot
fall back to chooseEvenlyFromRemainingRacks() when the exception is thrown.
# In chooseEvenlyFromRemainingRacks(), the numResultsOflastChoose =
results.size(); statement should be moved to after chooseOnce(); otherwise it
will throw lastException and make choosing targets fail.
> EC: Decommission a rack with only one DN will fail when the rack number is
> equal to the replication number
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-16456
> URL: https://issues.apache.org/jira/browse/HDFS-16456
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ec, namenode
> Affects Versions: 3.4.0
> Reporter: caozhiqiang
> Priority: Critical
> Attachments: HDFS-16456.001.patch, HDFS-16456.002.patch
>