The best way to resolve an argument is to look at the code:
/**
 * Rereads the config to get hosts and exclude list file names.
 * Rereads the files to update the hosts and exclude lists. It
 * checks if any of the hosts have changed states:
 * 1. Added to hosts --> no further work needed here.
 * 2. Removed from hosts --> mark AdminState as decommissioned.
 * 3. Added to exclude --> start decommission.
 * 4. Removed from exclude --> stop decommission.
 */
public void refreshNodes(Configuration conf) throws IOException {
  checkSuperuserPrivilege();
  // Reread the config to get dfs.hosts and dfs.hosts.exclude filenames.
  // Update the file names and refresh internal includes and excludes list
  if (conf == null)
    conf = new Configuration();
  hostsReader.updateFileNames(conf.get("dfs.hosts", ""),
                              conf.get("dfs.hosts.exclude", ""));
  hostsReader.refresh();
  synchronized (this) {
    for (Iterator<DatanodeDescriptor> it = datanodeMap.values().iterator();
         it.hasNext();) {
      DatanodeDescriptor node = it.next();
      // Check if not include.
      if (!inHostsList(node, null)) {
        node.setDecommissioned(); // case 2.
      } else {
        if (inExcludedHostsList(node, null)) {
          if (!node.isDecommissionInProgress() &&
              !node.isDecommissioned()) {
            startDecommission(node); // case 3.
          }
        } else {
          if (node.isDecommissionInProgress() ||
              node.isDecommissioned()) {
            stopDecommission(node); // case 4.
          }
        }
      }
    }
  }
}
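For anyone who wants to trace the four cases without a live cluster, here is
a minimal, self-contained sketch of the same decision table. The names
(RefreshNodesSketch, AdminState, refresh) are made up for illustration and
are not Hadoop's classes; the only Hadoop convention it borrows is that an
empty include list admits every node:

import java.util.Set;

public class RefreshNodesSketch {

  enum AdminState { NORMAL, DECOMMISSION_IN_PROGRESS, DECOMMISSIONED }

  // State a node should end up in after a refresh, given its current
  // state and the freshly re-read include/exclude lists.
  static AdminState refresh(String host, AdminState current,
                            Set<String> hosts, Set<String> excludes) {
    // An empty include list admits everyone.
    if (!hosts.isEmpty() && !hosts.contains(host)) {
      return AdminState.DECOMMISSIONED;                // case 2
    }
    if (excludes.contains(host)) {
      return current == AdminState.NORMAL
          ? AdminState.DECOMMISSION_IN_PROGRESS        // case 3
          : current;                                   // already in progress/done
    }
    return AdminState.NORMAL;                          // cases 1 and 4
  }

  public static void main(String[] args) {
    Set<String> hosts = Set.of("dn1", "dn2");
    Set<String> excludes = Set.of("dn2");
    System.out.println(refresh("dn1", AdminState.NORMAL, hosts, excludes));
    // NORMAL (case 1)
    System.out.println(refresh("dn2", AdminState.NORMAL, hosts, excludes));
    // DECOMMISSION_IN_PROGRESS (case 3)
    System.out.println(refresh("dn3", AdminState.NORMAL, hosts, excludes));
    // DECOMMISSIONED (case 2)
    System.out.println(refresh("dn2", AdminState.DECOMMISSION_IN_PROGRESS,
                               hosts, Set.of()));
    // NORMAL (case 4)
  }
}

In practice this path is exercised by editing the files named by dfs.hosts
and dfs.hosts.exclude and then running `hadoop dfsadmin -refreshNodes`.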
The machine is already dead, so there is no point in decommissioning it. HDFS
will re-replicate its blocks anyway, since it is risky to operate under a
reduced replication factor.
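To make the re-replication point concrete: once the dead node's replicas stop
counting as live, every block that falls below its expected factor gets queued
for copying elsewhere. A toy illustration with made-up names, nothing like the
namenode's actual bookkeeping:

import java.util.*;

// Toy model of the under-replication check after a datanode dies.
public class ReplicationSketch {
  public static void main(String[] args) {
    int expected = 3; // dfs.replication
    // block id -> datanodes currently holding a live replica
    Map<String, Set<String>> replicas = new HashMap<>();
    replicas.put("blk_1", new HashSet<>(Set.of("dn1", "dn2", "dn3")));
    replicas.put("blk_2", new HashSet<>(Set.of("dn1", "dn4", "dn5")));

    String dead = "dn1"; // the machine that died
    for (Map.Entry<String, Set<String>> e : replicas.entrySet()) {
      e.getValue().remove(dead); // its replicas are no longer live
      if (e.getValue().size() < expected) {
        // the namenode would queue these for re-replication
        System.out.println(e.getKey() + " under-replicated: "
            + e.getValue().size() + "/" + expected);
      }
    }
  }
}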
There may still be an argument about whether it makes sense to physically
move the blocks...
Alex K
On Fri, Apr 23, 2010 at 2:20 PM, Allen Wittenauer
<[email protected]> wrote:
>
> On Apr 23, 2010, at 1:56 PM, Alex Kozlov wrote:
>
> > I think Raymond says that the machine is already dead...
>
> Right. But he wants to re-add it later. So dfs.exclude is still a better
> way to go. dfs.hosts, iirc, doesn't get re-read so it would require a nn
> bounce to clear.
>
>