After moving the includes and excludes files from /root/ to
$HADOOP_HOME/conf, the problem is resolved. Really strange ....
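For reference, the fix amounts to pointing the host-file properties at paths the NameNode and JobTracker can actually resolve. A minimal sketch of the relevant hdfs-site.xml / mapred-site.xml entries, assuming the files now sit under $HADOOP_HOME/conf (the absolute path /usr/lib/hadoop/conf shown below is an assumption; substitute your own conf directory):

```xml
<!-- hdfs-site.xml (read by the NameNode); paths are hypothetical examples -->
<property>
  <name>dfs.hosts</name>
  <value>/usr/lib/hadoop/conf/hosts.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/usr/lib/hadoop/conf/hosts.exclude</value>
</property>

<!-- mapred-site.xml (read by the JobTracker) -->
<property>
  <name>mapred.hosts</name>
  <value>/usr/lib/hadoop/conf/hosts.include</value>
</property>
<property>
  <name>mapred.hosts.exclude</name>
  <value>/usr/lib/hadoop/conf/hosts.exclude</value>
</property>
```

Using absolute paths (or at least files inside the conf directory the daemons are started with) avoids the situation where the values silently point at files the daemons cannot read, in which case -refreshNodes appears to do nothing.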

On Apr 10, 2012 at 10:45 AM, air <cnwe...@gmail.com> wrote:

>
>
> ---------- Forwarded message ----------
> From: air <cnwe...@gmail.com>
> Date: Apr 10, 2012, 9:46 AM
> Subject: CDH3u3 problem when decommission nodes
> To: CDH Users <cdh-u...@cloudera.org>
>
>
>
> All operations are performed on the (JT + NN) node.
>
> I created a file listing all (DN + TT) nodes, called hosts.include.
> I also created a file listing the nodes that need to be decommissioned,
> called hosts.exclude.
>
> Then I set *mapred.hosts* and *dfs.hosts* to *hosts.include*, and set
> *mapred.hosts.exclude* and *dfs.hosts.exclude* to *hosts.exclude*.
>
> Then I restarted the JT
>
> and refreshed the nodes with sudo -u hdfs hadoop dfsadmin -refreshNodes.
>
> After that, there is no effect; the cluster is still there without any
> changes.
>
> Did I forget something? I successfully decommissioned 3 nodes last week
> the same way, so why does it have no effect today?
>
> Thank you in advance for your reply!
> --
> If you don't study, you don't know


