/etc/hosts may be buggered as well.  What is the entry for localhost?
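A sane entry (on a typical Linux box) looks something like:

  127.0.0.1   localhost localhost.localdomain

and you can check what the resolver actually returns with:

  getent hosts localhost

"Name or service not known" is exactly what you get when that lookup fails.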

On 1/2/08 3:48 PM, "Billy Pearson" <[EMAIL PROTECTED]> wrote:

> 
> 
>> localhost: ssh: localhost: Name or service not known
> 
> that error looks like ssh is not running
> 
> make sure it's running and working
> try to ssh to localhost from the server
> 
> ssh localhost
> 
> and see if it works.
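> 
> if ssh itself seems fine, also check that sshd is actually up (a
> quick sanity check, assuming a standard Linux install):
> 
> ps -ef | grep sshd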
> 
> Billy
> 
> ----- Original Message -----
> From: "Natarajan, Senthil" <[EMAIL PROTECTED]>
> Newsgroups: gmane.comp.jakarta.lucene.hadoop.user
> To: <hadoop-user-PPu3vs9EauNd/SJB6HiN2Ni2O/[EMAIL PROTECTED]>
> Sent: Wednesday, January 02, 2008 4:06 PM
> Subject: RE: Datanode Problem
> 
> 
> I just uncommented and changed JAVA_HOME; that's all I did in
> hadoop-env.sh.
> Do I need to configure anything else?
> 
> Here is the hadoop-env.sh
> 
> 
> # Set Hadoop-specific environment variables here.
> 
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
> 
> # The java implementation to use.  Required.
>  export JAVA_HOME=/usr/local/GridComputing/software/jdk1.5.0_04
> 
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
> 
> # The maximum amount of heap to use, in MB. Default is 1000.
> # export HADOOP_HEAPSIZE=2000
> 
> # Extra Java runtime options.  Empty by default.
> # export HADOOP_OPTS=-server
> 
> # Extra ssh options.  Empty by default.
> # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
> 
> # Where log files are stored.  $HADOOP_HOME/logs by default.
> # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
> 
> # File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
> # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
> 
> # host:path where hadoop code should be rsync'd from.  Unset by default.
> # export HADOOP_MASTER=master:/home/$USER/src/hadoop
> 
> # Seconds to sleep between slave commands.  Unset by default.  This
> # can be useful in large clusters, where, e.g., slave rsyncs can
> # otherwise arrive faster than the master can service them.
> # export HADOOP_SLAVE_SLEEP=0.1
> 
> # The directory where pid files are stored. /tmp by default.
> # export HADOOP_PID_DIR=/var/hadoop/pids
> 
> # A string representing this instance of hadoop. $USER by default.
> # export HADOOP_IDENT_STRING=$USER
> 
> # The scheduling priority for daemon processes.  See 'man nice'.
> # export HADOOP_NICENESS=10
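> 
> (For reference, a quick way to confirm that the JAVA_HOME above
> points at a working JDK, assuming the same path exists on every
> node:)
> 
> /usr/local/GridComputing/software/jdk1.5.0_04/bin/java -version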
> 
> -----Original Message-----
> From: Ted Dunning [mailto:tdunning-GzNPid7y/[EMAIL PROTECTED]
> Sent: Wednesday, January 02, 2008 5:02 PM
> To: hadoop-user-PPu3vs9EauNd/SJB6HiN2Ni2O/[EMAIL PROTECTED]
> Subject: Re: Datanode Problem
> 
> 
> Well, you have something very strange going on in your scripts.  Have you
> looked at hadoop-env.sh?
> 
> 
> On 1/2/08 1:58 PM, "Natarajan, Senthil"
> <[EMAIL PROTECTED]> wrote:
> 
>>> /bin/bash: /root/.bashrc: Permission denied
>>> localhost: ssh: localhost: Name or service not known
>>> /bin/bash: /root/.bashrc: Permission denied
>>> localhost: ssh: localhost: Name or service not known
> 
