I was able to start some services, but YARN is failing with:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: Failed on local exception: java.net.SocketException: Unresolved address; Host Details : local host is: "telles-hadoop-two"; destination host is: (unknown):0
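The "destination host is: (unknown):0" part of the trace usually means the daemon doesn't know where the ResourceManager is, i.e. the ResourceManager address is unset or unresolvable on that node. A minimal yarn-site.xml sketch, assuming the ResourceManager runs on a host named telles-hadoop-master (a hypothetical name; substitute your actual second master's hostname):

```xml
<configuration>
  <!-- Hypothetical hostname; it must resolve from every node in the cluster. -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>telles-hadoop-master</value>
  </property>
</configuration>
```

This file would need to be in the configuration directory on every node, not just the masters, so that the NodeManagers can find the ResourceManager.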
Just to give an overview of my setup: I have six machines that can talk to each other passwordlessly. One master runs the NameNode, and a second master will run the ResourceManager. The slaves will run the NodeManager and DataNode. The NameNode and DataNodes are OK; the ResourceManager is still failing.

On Tue Jan 27 2015 at 16:49:24 Telles Nobrega <[email protected]> wrote:

> Thanks.
>
> On Tue Jan 27 2015 at 15:59:35 Ahmed Ossama <[email protected]> wrote:
>
>> Hi Telles,
>>
>> No, the documentation isn't out of date. Normally the Hadoop configuration
>> files are placed under /etc/hadoop/conf; that directory is then referenced
>> when starting the cluster with --config $HADOOP_CONF_DIR, which is how HDFS
>> and YARN find their configuration.
>>
>> Second, it's not good practice to run Hadoop as root. What you want
>> to do is something like this:
>>
>> # useradd hdfs
>> # useradd yarn
>> # groupadd hadoop
>> # usermod -a -G hadoop hdfs
>> # usermod -a -G hadoop yarn
>> # mkdir /hdfs/{nn,dn}
>> # chown -R hdfs:hadoop /hdfs
>>
>> Then start the hdfs daemons as the hdfs user, and the yarn daemons as the
>> yarn user.
>>
>> On 01/27/2015 08:40 PM, Telles Nobrega wrote:
>>
>> Hi, I'm starting to deploy Hadoop 2.6.0 multi-node.
>> My first question is:
>> The documentation page says the configuration files are under conf/, but I
>> found them in etc/. Should I move them to conf/, or is that just
>> out-of-date information?
>>
>> My second question is about user permissions. When I tried installing
>> before, I was only able to start the daemons as root. Is that how it
>> should be?
>>
>> For now these are all the questions I have.
>>
>> Thanks
>>
>> --
>> Regards,
>> Ahmed Ossama
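Given the "Unresolved address" error above, one quick sanity check is to verify that every hostname used in the configuration resolves from every node. A small sketch (the hostnames below are placeholders; substitute the cluster's actual six machines):

```shell
#!/bin/sh
# Placeholder hostnames -- replace with the cluster's real node names.
for host in telles-hadoop-one telles-hadoop-two; do
  if getent hosts "$host" > /dev/null 2>&1; then
    echo "$host resolves"
  else
    echo "$host does NOT resolve; add it to /etc/hosts or DNS on this node"
  fi
done
```

Running this on each node (including the masters) would quickly show whether the failure is a name-resolution problem rather than a Hadoop configuration problem.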
