In the HDFS config (hdfs-site.xml), there is this property:
<property>
  <name>dfs.hosts.exclude</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  not permitted to connect to the namenode. The full pathname of the
  file must be specified. If the value is empty, no hosts are
  excluded.</description>
</property>
Point this property at a file containing the list of nodes you want to
decommission, then run "hadoop dfsadmin -refreshNodes" so the namenode
re-reads the list and starts moving the blocks off those nodes.
On Tue, Jul 6, 2010 at 7:31 AM, Some Body <[email protected]> wrote:
> Hi,
>
> Is it possible to move all the data blocks off a cluster node and then
> decommission the node?
>
> I'm asking because, now that my MR job is working, I'd like to see how
> things scale with fewer processing nodes and different amounts of data
> (number & size of files, etc.). I currently have 8 nodes and am
> processing 5GB spread across 2000 files.
>
> Alan
>