On Tue, Feb 12, 2013 at 11:43 PM, Robert Molina wrote:
> to do it, there should be some information here
This is the best way to remove a data node from a cluster. You have done the
right thing.
∞
Shashwat Shriparv
The decommissioning process is controlled by an exclude file, which for
HDFS is set by the *dfs.hosts.exclude* property, and for MapReduce by the
*mapred.hosts.exclude* property. In most cases, there is one shared file,
referred to as the exclude file. This exclude file name should be specified a
Hi,
I would like to add another scenario: what are the steps for removing a
dead node when the server had a hard failure that is unrecoverable?
Thanks,
Ben
On Tuesday, February 12, 2013 7:30:57 AM UTC-8, sudhakara st wrote:
>
> The decommissioning process is controlled by an exclude file, which
Hi Dhanasekaran,
I believe you are asking whether it is recommended to use the
decommissioning feature to remove datanodes from your cluster; if so, the
answer is yes. As far as how to do it, the information at
http://wiki.apache.org/hadoop/FAQ should help.
Regards,
Robert