Actually, I would recommend avoiding /etc/hosts and using DNS if this is going to be a production-grade cluster...
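To illustrate the point: the /etc/hosts route means every node carries an identical, manually synchronized host map, something like the fragment below (hostnames and addresses are hypothetical):

```
# /etc/hosts fragment -- must be identical on the master and every slave
192.168.1.10   master.hadoop.local    master
192.168.1.21   slave01.hadoop.local   slave01
192.168.1.22   slave02.hadoop.local   slave02
```

Every node you add means editing this file on every existing node, which is exactly why DNS tends to scale better once the cluster grows.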
Sent from a remote device. Please excuse any typos.

Mike Segel

On Dec 17, 2011, at 5:40 AM, alo alt <[email protected]> wrote:

> Hi,
>
> in the slaves file too. /etc/hosts is also recommended, to avoid DNS
> issues. After adding the new node to slaves it has to be started, and
> it should quickly appear in the web UI. If you don't need the nodes all
> the time, you can set up an exclude file and refresh your cluster
> (http://wiki.apache.org/hadoop/FAQ#I_want_to_make_a_large_cluster_smaller_by_taking_out_a_bunch_of_nodes_simultaneously._How_can_this_be_done.3F)
>
> - Alex
>
> On Sat, Dec 17, 2011 at 12:06 PM, madhu phatak <[email protected]> wrote:
>> Hi,
>> I am trying to add nodes dynamically to a running Hadoop cluster. I
>> started the tasktracker and datanode on the new node, and it works fine.
>> But when some node tries to fetch values (for the reduce phase), it fails
>> with an unknown host exception. When I add a node to a running cluster,
>> do I have to add its hostname to the /etc/hosts file on all nodes
>> (slaves + master)? Or is there some other way?
>>
>> --
>> Join me at http://hadoopworkshop.eventbrite.com/
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> P Think of the environment: please don't print this email unless you
> really need to.
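For anyone following the exclude-and-refresh suggestion from the FAQ link above: the NameNode reads a list of decommissioned hosts from the file named by the `dfs.hosts.exclude` property, and `hadoop dfsadmin -refreshNodes` makes it re-read that list. A minimal sketch of the flow (the exclude-file path and the hostname below are hypothetical; adjust to your own config):

```shell
# hdfs-site.xml on the NameNode must already point at the exclude file:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/tmp/excludes</value>
#   </property>

EXCLUDES=/tmp/excludes   # hypothetical path; must match dfs.hosts.exclude

# One hostname per line for each node to decommission.
echo "datanode-07.example.com" >> "$EXCLUDES"

# Then tell the NameNode to re-read the include/exclude lists; the listed
# nodes replicate their blocks away and show up as "Decommissioned" in
# the web UI (commented out here, as it needs a running cluster):
# hadoop dfsadmin -refreshNodes

cat "$EXCLUDES"
```

Removing a hostname from the file and refreshing again lets the node rejoin, which fits the "don't need the nodes all the time" use case Alex mentions.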
