Thanks, Matt.  That makes sense.  I will read up on those topics.

On Tue, May 10, 2011 at 12:02 PM, GOEKE, MATTHEW [AG/1000] <
matthew.go...@monsanto.com> wrote:

> Keith, if you have a chance you might want to look at Hadoop: The
> Definitive Guide or the various FAQs floating around for rolling a cluster
> from a tarball. One thing that most of them recommend is to set up a
> dedicated hadoop user and then chown all of the files / directories it
> needs over to it. Right now what you are running into is that you have not
> chown'ed the folder to your user or correctly chmod'ed the directories.
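>
> A minimal sketch of what I mean (the user name and paths are just examples
> for a tarball install under /usr/local -- adjust them to your layout):
>
>   # create a dedicated user for the hadoop daemons (example name: hadoop)
>   sudo useradd -m hadoop
>   # hand the install directory (and its logs/pids) over to that user
>   sudo chown -R hadoop:hadoop /usr/local/hadoop-0.20.2
>   sudo chmod -R u+rwX /usr/local/hadoop-0.20.2
>   # then start the daemons as that user rather than as root
>   sudo -u hadoop /usr/local/hadoop-0.20.2/bin/start-all.sh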
>
> User / group permissions will become increasingly important when you
> move into the DFS setup, so it is important to get the core set up correctly.
>
> Matt
>
> -----Original Message-----
> From: Keith Thompson [mailto:kthom...@binghamton.edu]
> Sent: Tuesday, May 10, 2011 10:54 AM
> To: common-user@hadoop.apache.org
> Subject: Re: problems with start-all.sh
>
> Thanks for catching that comma.  It was actually my HADOOP_CONF_DIR rather
> than HADOOP_HOME that was the culprit. :)
> As for sudo ... I am not sure how to run it as a regular user.  I set up
> ssh for a passwordless login (and am able to ssh localhost without a
> password), but I installed hadoop to /usr/local, so every time I try to
> run it, it says permission denied.  So I have to run hadoop using sudo
> (and it prompts for the password as super user).  I should have installed
> hadoop to my home directory instead, I guess ... :/
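>
> (To show what I mean, I guess the fix on my end would look something like
> this -- re-extracting the tarball under my home directory so no sudo is
> needed; the paths here are just my assumption:)
>
>   cd ~
>   tar xzf hadoop-0.20.2.tar.gz
>   export HADOOP_HOME=$HOME/hadoop-0.20.2
>   export HADOOP_CONF_DIR=$HADOOP_HOME/conf
>   cd $HADOOP_HOME && bin/start-all.sh   # runs as my own user, no sudo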
>
> On Tue, May 10, 2011 at 11:47 AM, Luca Pireddu <pire...@crs4.it> wrote:
>
> > On May 10, 2011 17:39:12 Keith Thompson wrote:
> > > Hi Luca,
> > >
> > > Thank you.  That worked ... at least I didn't get the same error.  Now I
> > > get:
> > >
> > > k_thomp@linux-8awa:/usr/local/hadoop-0.20.2> sudo bin/start-all.sh
> > > starting namenode, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-root-namenode-linux-8awa.out
> > > cat: /usr/local/hadoop-0,20.2/conf/slaves: No such file or directory
> > > Password:
> > > localhost: starting secondarynamenode, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-linux-8awa.out
> > > localhost: Exception in thread "main" java.lang.NullPointerException
> > > localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
> > > localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
> > > localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
> > > localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
> > > localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
> > > localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
> > > starting jobtracker, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-root-jobtracker-linux-8awa.out
> > > cat: /usr/local/hadoop-0,20.2/conf/slaves: No such file or directory
> >
> > Don't try to run it as root with "sudo".  Just run it as your regular user.
> > If you try to run it as a different user then you'll have to set up the ssh
> > keys for that user (notice the "Password" prompt because ssh was unable to
> > perform a password-less login into localhost).
> >
> > Also, make sure you've correctly set HADOOP_HOME to the path where you
> > extracted the Hadoop archive.  I'm seeing a comma in the path shown in the
> > error ("/usr/local/hadoop-0,20.2/conf/slaves") that probably shouldn't be
> > there :-)
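> >
> > In case it helps, a rough sketch of the password-less ssh setup for
> > whatever user runs the daemons (the key file name is just the default;
> > adjust if you already have a key you want to reuse):
> >
> >   ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
> >   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
> >   chmod 600 ~/.ssh/authorized_keys
> >   ssh localhost   # should now log in without asking for a password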
> >
> >
> > --
> > Luca Pireddu
> > CRS4 - Distributed Computing Group
> > Loc. Pixina Manna Edificio 1
> > Pula 09010 (CA), Italy
> > Tel:  +39 0709250452
> >
>
