@Nitesh, Jeff & Ed

Thanks, guys!! It was a mistake in the configuration file; it works now!

8408 Jps
8109 DataNode
8370 TaskTracker
8204 SecondaryNameNode
8281 JobTracker


Except for "TaskTracker$Child"!!
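For the record, the task child JVMs only run while a job is executing,
so jps will not list them on an idle cluster. A quick check, assuming
the examples jar bundled with the release (the glob covers the version
in its name):

  $ bin/hadoop jar hadoop-*-examples.jar pi 2 10 &
  $ jps    # while the job runs, the child task JVM(s) should appear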




On Mon, Nov 16, 2009 at 10:57 AM, Edward Capriolo <edlinuxg...@gmail.com> wrote:

> On Mon, Nov 16, 2009 at 9:57 AM, Jeff Zhang <zjf...@gmail.com> wrote:
> > Look at the JobTracker's logs; maybe you will get some clues.
> >
> >
> > Jeff Zhang
> >
> >
> >
> > On Mon, Nov 16, 2009 at 6:45 AM, Prabhu Hari Dhanapal <
> > dragonzsn...@gmail.com> wrote:
> >
> >> Hi all,
> >>
> >> I just installed Hadoop (single-node cluster) and tried to start and
> >> stop the daemons, and it said:
> >> no jobtracker to stop, no namenode to stop
> >>
> >> However, the tutorial I used suggests that the JobTracker and NameNode
> >> should also have started. Why does this happen?
> >> Am I missing something?
> >>
> >>
> >>
> >> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)
> >>
> >>
> >> had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ jps
> >> 20671 Jps
> >> 20368 DataNode
> >> 20463 SecondaryNameNode
> >>
> >> had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ ./stop-all.sh
> >> no jobtracker to stop
> >> localhost: no tasktracker to stop
> >> no namenode to stop
> >> localhost: stopping datanode
> >> localhost: stopping secondarynamenode
> >>
> >>
> >>
> >>
> >> --
> >> Hari
> >>
> >
>
>
> The issue here is that those daemons failed to start. As soon as the
> java process is launched, the system returns an OK status to the
> script, but the process can die moments later while it starts up.
>
> For example, if you start the NameNode, the script returns OK, but the
> NameNode then notices that its dfs.name.dir is not formatted and shuts
> itself down.
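>
> A minimal first-run sequence, assuming the tarball layout from the
> tutorial linked above (your install path will differ):
>
>   $ bin/hadoop namenode -format    # formats dfs.name.dir; run once, before the first start
>   $ bin/start-all.sh
>   $ jps                            # NameNode and JobTracker should now stay in the list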
>
> Generally, after starting a hadoop process, tail the log it creates
> for a few seconds and make sure it REALLY starts up. Ideally the
> scripts would do more pre-startup checking, but they cannot test for
> every possible condition that could cause hadoop not to start.
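>
> For example, assuming the default log directory under the install and
> the usual hadoop-<user>-<daemon>-<hostname>.log naming (both can vary
> with your setup):
>
>   $ bin/hadoop-daemon.sh start namenode
>   $ tail -f logs/hadoop-$USER-namenode-$(hostname).log
>   # steady INFO output means it is up; a FATAL line or stack trace means it died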
>
> Also, for long-running daemons the pid files are written to /tmp (see
> bin/hadoop-daemon.sh). If something is cleaning /tmp, the stop
> commands are unable to find the pids.
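>
> One workaround, assuming a stock conf/hadoop-env.sh, is to point the
> pid directory somewhere persistent:
>
>   # in conf/hadoop-env.sh, read by bin/hadoop-daemon.sh
>   export HADOOP_PID_DIR=/var/hadoop/pids   # any directory /tmp cleaners leave alone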
>
> That is shell scripting for you :)
> Edward
>



-- 
Hari
