Hi Harsh,

Thanks for the reply. The actual problem was that Hadoop had not started
correctly, yet the port was still bound by the stale process. After I made
the corrections, the restart failed because the port was unavailable, which
is what was causing the errors. It is solved now.
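
For the archives, here is roughly the sequence that sorted it out. A sketch
only: the port, user, and pid below are assumptions for my setup
(fs.default.name pointing at port 9000, daemons started as root); adjust
for yours.

    jps                        # stale NameNode still listed after stop-all.sh
    netstat -nlp | grep 9000   # which pid still holds the NameNode port (run as root)
    ps aux | grep NameNode     # which user owns that process
    kill <pid>                 # send it a SIGTERM, as you suggested
    bin/stop-all.sh            # rerun as the user that started the daemons
    bin/start-all.sh           # clean start once the port is free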

Thanks and Regards

Vaibhav Negi


On Mon, Aug 30, 2010 at 1:04 PM, Harsh J <[email protected]> wrote:

> Maybe the user who issued stop-all.sh does not have permission to
> terminate the NN process (and some of the others, depending on
> who/what started them). Check the jps listing after stopping, do some
> ps/top checks, then switch to the proper user and issue stop-all again?
>
> You can also send it a SIGTERM, I believe.
>
> On Mon, Aug 30, 2010 at 12:22 PM, vaibhav negi <[email protected]>
> wrote:
> > Hi ,
> >
> > I am running Hadoop 0.20.2 with a 2-node cluster.
> > I executed the stop-all.sh script, but two log lines are still being
> > written every hour in the NameNode's log directory.
> > How do I completely shut down the Hadoop cluster?
> > Below is one of the log lines:
> >
> >
> > 2010-08-29 00:30:00,018 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root,bin,daemon,sys,adm,disk,wheel ip=/10.0.8.47 cmd=listStatus src=/user dst=null perm=null
> >
> >
> > Vaibhav Negi
> >
>
>
>
> --
> Harsh J
> www.harshj.com
>
