Hi Wang,

> hadoop dfsadmin -report

returns nothing at all.

I have tried:
> hadoop dfsadmin -fs hdfs://hadoopmaster/ -safemode leave

10/01/14 14:11:24 INFO ipc.Client: Retrying connect to server: hadoopmaster/
137.195.143.132:8020. Already tried 0 time(s).
10/01/14 14:11:25 INFO ipc.Client: Retrying connect to server: hadoopmaster/
137.195.143.132:8020. Already tried 1 time(s).
...and so on (the retries just repeat).
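For what it's worth, repeated "Retrying connect" lines like these usually mean nothing is answering on hadoopmaster:8020, and the earlier "FileSystem is file:///" message suggests the client-side configuration is falling back to the local filesystem. A minimal sketch of what the relevant property should look like in core-site.xml (hadoop-site.xml on pre-0.20 releases) — the hostname and port are taken from the log above, everything else here is an assumption about your setup:

```xml
<!-- core-site.xml on every node (hadoop-site.xml in older releases).
     If fs.default.name is missing, or this file is not on the client's
     config path, the client silently defaults to the local filesystem,
     which is exactly the "FileSystem is file:///" symptom. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopmaster:8020/</value>
  </property>
</configuration>
```

If the property is already set, it is worth double-checking that HADOOP_CONF_DIR (if you set it) points at the directory that actually contains this file.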

Any ideas what is going wrong?
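In case it helps narrow things down, here is a rough diagnostic sketch to run on hadoopmaster itself (assuming you can log in there; jps ships with the JDK, and netstat flags vary by platform) to confirm whether a NameNode is actually up and bound to the port from the retry log:

```shell
# Run on hadoopmaster. If no NameNode JVM shows up, look in the Hadoop
# logs directory for the NameNode startup failure.
jps | grep -i namenode || echo "no NameNode process found"

# Confirm something is listening on the IPC port from the retry log above.
netstat -tln 2>/dev/null | grep ':8020 ' || echo "nothing listening on 8020"
```

If the process is up but nothing listens on 8020, the NameNode may be bound to a different port or interface than the clients expect.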

Thanks.



2010/1/14 Wang Xu <[email protected]>

> Hi Rob,
>
> What do you get if you use
>   hadoop dfsadmin  -report
> ?
>
> And you can specify the fs in command line when you call dfsadmin.
>
> On Thu, Jan 14, 2010 at 9:41 PM, Rob Stewart
> <[email protected]> wrote:
> > Hi,
> >
> > I'm having a slight issue with my Hadoop cluster. There are 32 nodes. I
> > have:
> > /usr/lib/hadoop/bin/stop-mapred.sh
> > /usr/lib/hadoop/bin/stop-dfs.sh
> > /usr/lib/hadoop/bin/start-dfs.sh
> > /usr/lib/hadoop/bin/start-mapred.sh
> >
> > All worked perfectly, no errors.
> >
> > I try and remove a file: hadoop dfs -rmr wordCountOutput.jaql
> >
> > This returns: Cannot delete [the file]. Name node is in safe mode.
> >
> > I tried to exit safemode manually. I get:
> >> hadoop dfsadmin -safemode leave
> > FileSystem is file:///
> >
> >  The code generating this is:
> >
> >
> >  public void setSafeMode(String[] argv, int idx) throws IOException {
> >     final String safeModeUsage = "Usage: java DFSAdmin -safemode "
> >                                  + "[enter | leave | get]";
> >    if (!(fs instanceof DistributedFileSystem)) {
> >      System.out.println("FileSystem is " + fs.getName());
> >      return;
> >    }
> >
> >
> > What seems to be the issue? The HDFS is running, and I can browse the
> > filesystem, both via the command line and the Hadoop web interface.
> >
> >
> > Thanks...
> >
>
>
>
> --
> Wang Xu
> Stephen Leacock  - "I detest life-insurance agents: they always argue
> that I shall some day die, which is not so." -
> http://www.brainyquote.com/quotes/authors/s/stephen_leacock.html
>
