I was thinking of checking for both independently, and taking a logical OR.
Would that be sufficient?

I'm trying to avoid reading files if possible. Not that reading through a log
is that intensive, but it'd be cleaner if I could either poll Hadoop itself or
inspect the running processes.
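To make that concrete, here's a minimal sketch of the logical OR as a Nagios-style check. The class names in the `ps` patterns and the exit codes are assumptions; adjust them to your Hadoop version and your plugin conventions:

```shell
#!/bin/sh
# Sketch: treat HDFS as "up" if either namenode process is found.
# Assumes the NameNode and SecondaryNameNode class names appear in
# ps output as they do on 0.x-era Hadoop.

primary_up() {
    # [\.] bracket trick: grep's own command line has no literal
    # ".NameNode", so the grep process never matches itself
    ps aux | grep -q '[\.]NameNode'
}

secondary_up() {
    ps aux | grep -q '[S]econdaryNameNode'
}

hdfs_process_status() {
    if primary_up || secondary_up; then
        echo "OK: a namenode process is running"
        return 0
    else
        echo "CRITICAL: no namenode process found"
        return 2   # Nagios CRITICAL
    fi
}
```

The plugin would end with `hdfs_process_status; exit $?`. As the rest of the thread notes, a live process doesn't guarantee a healthy filesystem, so pairing this with the exit code of a cheap `hadoop dfs` command is still worth considering.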

On Fri, Jun 27, 2008 at 1:23 PM, Miles Osborne <[EMAIL PROTECTED]> wrote:

> that won't work since the namenode may be down, but the secondary namenode
> may be up instead
>
> why not instead just look at the respective logs?
>
> Miles
>
> 2008/6/27 Meng Mao <[EMAIL PROTECTED]>:
>
> > Is running:
> > ps aux | grep [\\.]NameNode
> >
> > and looking for a non-empty response a good way to test whether HDFS is up?
> >
> > I'm assuming that if the NameNode process is down, then DFS is definitely
> > down?
> > I'm worried there'd be frequent cases of DFS being messed up with the
> > process still running just fine.
> >
> > On Fri, Jun 27, 2008 at 10:48 AM, Meng Mao <[EMAIL PROTECTED]> wrote:
> >
> > > For a Nagios script I'm writing, I'd like a command-line method that
> > checks
> > > if HDFS is up and running.
> > > Is there a better way than to attempt a hadoop dfs command and check
> the
> > > error code?
> > >
> >
> >
> >
> > --
> > hustlin, hustlin, everyday I'm hustlin
> >
>
>
>
> --
> The University of Edinburgh is a charitable body, registered in Scotland,
> with registration number SC005336.
>



-- 
hustlin, hustlin, everyday I'm hustlin
