I did this:

prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin -safemode leave
Safe mode is OFF
prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin -safemode get
Safe mode is OFF
and then I restarted my cluster, and I still see INFO messages in the
namenode logs saying it is in safe mode.
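For reference, the 0.20.x `dfsadmin` CLI also has a `wait` option that blocks until the namenode leaves safe mode on its own, which avoids forcing it off before the block reports arrive (a sketch, assuming the same install layout as above; it needs a running cluster):

```shell
# Block until the namenode has finished collecting block reports
# and leaves safe mode by itself (0.20.x syntax assumed)
bin/hadoop dfsadmin -safemode wait

# Confirm the current state afterwards
bin/hadoop dfsadmin -safemode get
```

Running `-safemode wait` before starting the jobtracker would also avoid the "Cannot delete ... Name node is in safe mode" errors below, since the jobtracker retries cleaning its system directory until HDFS is writable.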

Somehow my map output looks fine, but job.isSuccessful() is
returning false.

Any help on this would be appreciated.

Thanks
+ Praveen

On Thu, Dec 9, 2010 at 9:28 PM, Mahadev Konar <[email protected]> wrote:

> Hi Praveen,
>  Looks like it's your namenode that's still in safemode.
>
>
> http://wiki.apache.org/hadoop/FAQ
>
> The safemode feature in the namenode waits until a certain threshold of
> HDFS blocks has been reported by the datanodes before letting clients
> make edits to the namespace. It usually happens when you
> reboot your namenode. You can read more about safemode in the above FAQ.
>
> Thanks
> mahadev
>
>
> On 12/9/10 6:09 PM, "Praveen Bathala" <[email protected]> wrote:
>
> Hi,
>
> I am running a MapReduce job to extract some emails from a huge text file.
> I used to use Hadoop 0.19 and had no issues; now I am using Hadoop
> 0.20.2, and when I run my MapReduce job it is reported as failed, with the
> following in the jobtracker log.
>
> Can someone please help me?
>
> 2010-12-09 20:53:00,399 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/home/praveen/hadoop/temp/mapred/system
> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/praveen/hadoop/temp/mapred/system. Name node is in safe mode.
> The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>        at org.apache.hadoop.ipc.Client.call(Client.java:740)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>        at $Proxy4.delete(Unknown Source)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>        at $Proxy4.delete(Unknown Source)
>        at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:582)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:227)
>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1695)
>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
>        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
> 2010-12-09 20:53:10,405 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
> 2010-12-09 20:53:10,409 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/home/praveen/hadoop/temp/mapred/system
>
>
> Thanks in advance
> + Praveen
>
>


-- 
+ Praveen
