sodonnel commented on pull request #3675:
URL: https://github.com/apache/hadoop/pull/3675#issuecomment-975769423


   In `DatanodeManager.registerDatanode()`, there is already logic to add a node into the 
decommission workflow if it has a DECOMMISSIONED status in the hosts / 
combined hosts file, because it calls `startAdminOperationIfNecessary`, which 
looks like this:
   
   ```java
   void startAdminOperationIfNecessary(DatanodeDescriptor nodeReg) {
     long maintenanceExpireTimeInMS =
         hostConfigManager.getMaintenanceExpirationTimeInMS(nodeReg);
     // If the registered node is in exclude list, then decommission it
     if (getHostConfigManager().isExcluded(nodeReg)) {
       datanodeAdminManager.startDecommission(nodeReg);
     } else if (nodeReg.maintenanceNotExpired(maintenanceExpireTimeInMS)) {
       // Otherwise, if its maintenance window has not yet expired, put it into maintenance
       datanodeAdminManager.startMaintenance(nodeReg, maintenanceExpireTimeInMS);
     }
   }
   ```
   If a DN goes dead and then re-registers, it will be added back into the 
pending nodes by the logic above, so I don't think we need to keep tracking it 
in the decommission monitor while it is dead. We can just handle the dead 
event in the decommission monitor and stop tracking the node, freeing a slot 
for another one. If the node comes back, the existing registration logic above 
will re-add it. A rough sketch of the idea is below.
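   To make the idea concrete, here is a minimal, self-contained sketch of a monitor 
that drops dead nodes from its tracked set and refills slots from the pending 
queue. It is not the real `DatanodeAdminManager` / monitor code; the class, 
field and method names below are invented purely for illustration.

   ```java
   import java.util.ArrayDeque;
   import java.util.HashSet;
   import java.util.Queue;
   import java.util.Set;

   // Hypothetical sketch only -- names do not match the actual HDFS classes.
   class DeadNodeUntrackSketch {

     /** Stand-in for a DatanodeDescriptor. */
     static class Node {
       final String id;
       volatile boolean alive = true;
       Node(String id) { this.id = id; }
     }

     /** Nodes waiting for a tracking slot in the monitor. */
     private final Queue<Node> pendingNodes = new ArrayDeque<>();
     /** Nodes currently being tracked, bounded by maxConcurrentTracked. */
     private final Set<Node> trackedNodes = new HashSet<>();
     private final int maxConcurrentTracked;

     DeadNodeUntrackSketch(int maxConcurrentTracked) {
       this.maxConcurrentTracked = maxConcurrentTracked;
     }

     /** What registerDatanode() -> startAdminOperationIfNecessary() would call. */
     void startDecommission(Node node) {
       if (!trackedNodes.contains(node) && !pendingNodes.contains(node)) {
         pendingNodes.add(node);
       }
     }

     /** One tick of the decommission monitor. */
     void runMonitorCycle() {
       // Handle the "node went dead" event: stop tracking it. It will come back
       // via startDecommission() when it re-registers, so nothing is lost and a
       // slot is freed for another node in the meantime.
       trackedNodes.removeIf(n -> !n.alive);

       // Refill any free slots from the pending queue.
       while (trackedNodes.size() < maxConcurrentTracked && !pendingNodes.isEmpty()) {
         trackedNodes.add(pendingNodes.poll());
       }

       // ... per-node block replication checks would run here ...
     }
   }
   ```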

