Thank you, Eric; thank you, Bibek. /etc/hosts was part of the problem, and after some re-install commands it just started working :)
Pleasure == working Hadoop cluster (even if it is pseudo-pleasure)

Sincerely,
Mark

On Wed, Mar 2, 2011 at 5:09 PM, Bibek Paudel <[email protected]> wrote:
> On Thu, Mar 3, 2011 at 12:08 AM, Eric Sammer <[email protected]> wrote:
> > Check your /etc/hosts file and make sure the hostname of the machine
> > is not on the loopback device. This is almost always the cause of
> > this.
>
> +1
> -b
>
> > On Wed, Mar 2, 2011 at 5:57 PM, Mark Kerzner <[email protected]>
> > wrote:
> >
> >> Hi,
> >>
> >> I am running in pseudo-distributed mode on my laptop, following the
> >> same steps I used for all configurations on my regular cluster, but
> >> I get this error:
> >>
> >> 2011-03-02 16:45:13,651 INFO
> >> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
> >> ip=/192.168.1.150 cmd=delete
> >> src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info
> >> dst=null perm=null
> >> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying
> >> connect to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
> >>
> >> So it should be connecting to 192.168.1.150, but it is instead
> >> connecting to 127.0.1.1 - where does this IP come from?
> >>
> >> Thank you,
> >> Mark
> >>
> >
> > --
> > Eric Sammer
> > twitter: esammer
> > data: www.cloudera.com
> >
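For anyone who finds this thread with the same retry loop: Debian/Ubuntu installs typically add a "127.0.1.1 <hostname>" line to /etc/hosts, so when Hadoop resolves its own hostname (here "ubuntu", per the log line "Retrying connect to server: ubuntu/127.0.1.1:8020") it gets the loopback address rather than the LAN address. A minimal before/after sketch, using the hostname and address taken from the log above; your own hostname and address will differ:

    # /etc/hosts -- typical Ubuntu default: the hostname sits on loopback
    127.0.0.1      localhost
    127.0.1.1      ubuntu          # Hadoop resolves "ubuntu" to 127.0.1.1

    # /etc/hosts -- fixed: the hostname points at the real interface
    127.0.0.1      localhost
    192.168.1.150  ubuntu

    # Verify what the resolver now returns for the hostname:
    $ getent hosts ubuntu
    192.168.1.150  ubuntu

After editing /etc/hosts, restart the Hadoop daemons (NameNode, DataNode, JobTracker, TaskTracker) so they re-resolve the hostname and bind to the corrected address; daemons started before the fix will keep the loopback binding.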
