Sandy -

I suggest you take a look at your NameNode and JobTracker logs.  From the
information posted, these would likely be at

/Users/hadoop/hadoop-0.18.2/bin/../logs/hadoop-hadoop-namenode-loteria.cs.tamu.edu.log
/Users/hadoop/hadoop-0.18.2/bin/../logs/hadoop-hadoop-jobtracker-loteria.cs.tamu.edu.log

If the cause isn't obvious from what you see there, could you please post
the last few lines from each log?
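
If it helps, something like the following should dump the tail of each log
(assuming the paths above are right for your install):

tail -n 50 /Users/hadoop/hadoop-0.18.2/bin/../logs/hadoop-hadoop-namenode-loteria.cs.tamu.edu.log
tail -n 50 /Users/hadoop/hadoop-0.18.2/bin/../logs/hadoop-hadoop-jobtracker-loteria.cs.tamu.edu.log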

-jw

On Fri, Feb 13, 2009 at 3:28 PM, Sandy <snickerdoodl...@gmail.com> wrote:

> Hello,
>
> I would really appreciate any help I can get on this! I've suddenly run
> into a very strange error.
>
> When I do:
> bin/start-all.sh
> I get:
> hadoop$ bin/start-all.sh
> starting namenode, logging to
>
> /Users/hadoop/hadoop-0.18.2/bin/../logs/hadoop-hadoop-namenode-loteria.cs.tamu.edu.out
> starting jobtracker, logging to
>
> /Users/hadoop/hadoop-0.18.2/bin/../logs/hadoop-hadoop-jobtracker-loteria.cs.tamu.edu.out
>
> No datanode, secondary namenode, or tasktracker is being started.
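>
> A quick way to confirm which daemons are actually up is jps from the JDK,
> which lists the running Java processes by class name; on a healthy
> single-node setup I'd expect NameNode, DataNode, SecondaryNameNode,
> JobTracker, and TaskTracker all to show up:
>
> hadoop$ jps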
>
> When I try to upload anything to the DFS, I get a "node in safemode" error
> (even after waiting five minutes), presumably because it's trying to reach
> a datanode that does not exist.  The same "safemode" error occurs when I
> try to run jobs.
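>
> In case it's relevant, the safemode state can be queried (and forced off)
> with dfsadmin, though I assume forcing it off won't help while there is no
> datanode to serve blocks:
>
> bin/hadoop dfsadmin -safemode get
> bin/hadoop dfsadmin -safemode leave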
>
> I have tried bin/stop-all.sh and then bin/start-all.sh again, and I get
> the same problem!
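>
> If it's useful, the daemons can also be started one at a time with the
> per-daemon script, which might surface the failure directly, e.g.:
>
> bin/hadoop-daemon.sh start datanode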
>
> This is incredibly strange, since I was previously able to start and run
> jobs without any issue using this version on this machine. I am running
> jobs on a single Mac Pro running OS X 10.5.
>
> I have tried updating to hadoop-0.19.0, and I get the same problem. I have
> even tried previous versions, with the same result!
>
> Anyone have any idea why this could suddenly be happening? What am I doing
> wrong?
>
> For convenience, I'm including portions of both conf/hadoop-env.sh and
> conf/hadoop-site.xml:
>
> --- hadoop-env.sh ---
> # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
> # The maximum amount of heap to use, in MB. Default is 1000.
> export HADOOP_HEAPSIZE=3000
> ...
> --- hadoop-site.xml ---
> <configuration>
>
> <property>
>  <name>hadoop.tmp.dir</name>
>  <value>/Users/hadoop/hadoop-0.18.2/hadoop-${user.name}</value>
>  <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>  <name>fs.default.name</name>
>  <value>hdfs://localhost:9000</value>
>  <description>The name of the default file system.  A URI whose
>  scheme and authority determine the FileSystem implementation.  The
>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>  the FileSystem implementation class.  The uri's authority is used to
>  determine the host, port, etc. for a filesystem.</description>
> </property>
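>
> (Since fs.default.name points at hdfs://localhost:9000, a crude check that
> the NameNode is at least listening on that port is:
>
> telnet localhost 9000
> )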
>
> <property>
>  <name>mapred.job.tracker</name>
>  <value>localhost:9001</value>
>  <description>The host and port that the MapReduce job tracker runs
>  at.  If "local", then jobs are run in-process as a single map
>  and reduce task.
>  </description>
> </property>
>
> <property>
>  <name>mapred.tasktracker.tasks.maximum</name>
>  <value>1</value>
>  <description>The maximum number of tasks that will be run simultaneously
>  by a task tracker.
>  </description>
> </property>
> ...
>
