You can take a look at start-dfs.sh to see what it does.

Pretty much: $ ssh datanode 'cd dir; bin/hadoop-daemon.sh start datanode'

You are strongly encouraged to experiment with the scripts to see what they do. When something does not seem to work, also check the corresponding log file in the logs/ directory.
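For instance, a manual run on a datanode host might look like the following sketch. The install path and log-file pattern are assumptions based on a standard Hadoop layout, not taken from this thread:

```shell
# On the datanode host (install directory is a placeholder):
cd /path/to/hadoop
bin/hadoop-daemon.sh start datanode

# If the daemon does not come up, inspect its log under logs/
# (the exact filename includes the user and hostname):
tail -n 50 logs/hadoop-*-datanode-*.log
```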

The documentation says to start DFS from the namenode which will startup all
the datanodes.

This is for the simple, common case.

Raghu.

Thanks,
Ankur

-----Original Message-----
From: Raghu Angadi [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 17, 2007 1:33 PM
To: [email protected]
Subject: Re: adding datanodes on the fly?

Ankur Sethi wrote:
How are datanodes added? Do they get added and started only at the start of the DFS filesystem? Can they be added while hadoop fs is running by editing the slaves file, or does hadoop have to be restarted?

To add more datanodes, you can just bring up new datanodes with the right config at any time. The namenode will accept them whenever they register.
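A sketch of what "the right config" means on a new host, assuming the old-style (circa-2007) hadoop-site.xml configuration; the namenode hostname and port below are placeholders:

```shell
# On the new host: install the same Hadoop version as the cluster,
# then make sure conf/hadoop-site.xml points at the existing namenode, e.g.:
#
#   <property>
#     <name>fs.default.name</name>
#     <value>hdfs://namenode-host:9000</value>   <!-- placeholder host/port -->
#   </property>
#
# Then start only the datanode daemon on this host:
bin/hadoop-daemon.sh start datanode
```

The new datanode registers with the namenode on startup; no restart of the rest of the cluster is needed.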

The 'slaves' file is used only by scripts like bin/start-dfs.sh and bin/stop-dfs.sh. So adding new datanodes to the slaves file just makes restarts etc. easier to manage.
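Concretely, this could look like the following sketch; the hostname is a placeholder, and the file location assumes the usual conf/ directory:

```shell
# On the machine where you run the start/stop scripts,
# append the new host to conf/slaves (one hostname per line):
echo 'new-datanode-host' >> conf/slaves

# Subsequent runs of bin/start-dfs.sh and bin/stop-dfs.sh
# will now include the new host.
```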

Raghu.
