As Amareshwari said, you can almost safely stop the TaskTracker process on a node. Any tasks running on it would be considered failed and re-executed by the JobTracker on another node. The reason we decommission a DataNode is to protect against data loss: a DataNode stores HDFS blocks, and by decommissioning it you ask the NameNode to copy the blocks it holds over to other DataNodes first.
Thanks,
Lohit

----- Original Message ----
From: Amareshwari Sriramadasu <[EMAIL PROTECTED]>
To: [email protected]
Sent: Tuesday, November 25, 2008 11:51:21 PM
Subject: Re: how can I decommission nodes on-the-fly?

Jeremy Chow wrote:
> Hi list,
>
> I added a property dfs.hosts.exclude to my conf/hadoop-site.xml, then
> refreshed my cluster with the command
>     bin/hadoop dfsadmin -refreshNodes
> It showed that it could only shut down the DataNode process, but not the
> TaskTracker process, on each slave specified in the excludes file.
Presently, decommissioning a TaskTracker on-the-fly is not available.
> The JobTracker web UI still shows that I had not shut down these nodes.
> How can I totally decommission these slave nodes on-the-fly? Can it be
> achieved only by operating on the master node?
>
I think one way to shut down a TaskTracker is to kill it.

Thanks
Amareshwari

> Thanks,
> Jeremy
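For anyone following the thread, the full DataNode decommission procedure discussed above can be sketched roughly as below. This is a hedged sketch for Hadoop of that era (0.18/0.19): the exclude-file path and hostnames are placeholders, and the commands must run on the NameNode/JobTracker master with an actual cluster configured.

```shell
# In conf/hadoop-site.xml on the NameNode, point dfs.hosts.exclude at an
# exclude file (path is an example, pick your own):
#
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/path/to/conf/excludes</value>
#   </property>

# 1. Add the hostnames of the slaves to decommission (one per line,
#    example names) to the exclude file:
echo "slave-node-01" >> /path/to/conf/excludes

# 2. Tell the NameNode to re-read the include/exclude lists; it will start
#    replicating the affected DataNodes' blocks to other nodes:
bin/hadoop dfsadmin -refreshNodes

# 3. Watch progress; the node moves from "Decommission in progress" to
#    "Decommissioned" once all its blocks are safely copied elsewhere:
bin/hadoop dfsadmin -report

# 4. Since TaskTracker decommissioning is not supported, stop that daemon
#    manually on the slave itself (tasks running there will simply be
#    re-run by the JobTracker on other nodes):
bin/hadoop-daemon.sh stop tasktracker
```

Only after step 3 reports the node as "Decommissioned" is it safe to take the DataNode offline without risking under-replicated blocks.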
