The SSH-driven slaves-file approach (what the start-dfs.sh/start-all.sh scripts use) will not work for running multiple slave daemons per host, since each daemon needs its own configuration directory. You can instead launch them directly, pointing each one at its own config directory, with "hadoop --config custom-dir datanode".
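For illustration, a second datanode might be brought up along these lines. The paths and port numbers below are placeholders only; the key point is that conf2/hdfs-site.xml must give the second daemon its own storage directory and its own ports so it does not collide with the first:

    # In conf2/hdfs-site.xml, override (example values only):
    #   dfs.data.dir              -> /tmp/hadoop-dn2/data   (dfs.datanode.data.dir on 2.x)
    #   dfs.datanode.address      -> 0.0.0.0:50011
    #   dfs.datanode.http.address -> 0.0.0.0:50076
    #   dfs.datanode.ipc.address  -> 0.0.0.0:50021

    # First datanode, using the default conf/ directory:
    hadoop datanode

    # Second datanode on the same host, pointed at conf2/:
    hadoop --config /path/to/hadoop/conf2 datanode

The same --config option also works with bin/hadoop-daemon.sh if you would rather have the daemons started in the background.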
On Wed, Jul 30, 2014 at 1:24 PM, Sindhu Hosamane <[email protected]> wrote:
> Hello friends,
>
> I have set up multiple datanodes on the same machine following the link
> http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201009.mbox/<a3ef3f6af24e204b812d1d24ccc8d71a03688...@mse16be2.mse16.exchange.ms>
> So now I have conf and conf2 both in my hadoop directory.
> How should the masters and slaves files of conf and conf2 look if I want
> conf to be the master and conf2 to be the slave?
> Also, how should the /etc/hosts file look?
> Please help me, I am really stuck.
>
> Regards,
> Sindhu

--
Harsh J
