This means that the namenode is in safe mode; see the description here:
http://lucene.apache.org/hadoop/docs/api/org/apache/hadoop/dfs/NameNode.html#setSafeMode(org.apache.hadoop.dfs.FSConstants.SafeModeAction)
start-all.sh starts the JobTracker before the namenode leaves safe mode. At
startup the JobTracker tries to remove temporary directories from DFS (not
from your local fs), and gets this exception.
I'd recommend first running start-dfs.sh, then verifying via the namenode
web UI that everything is running and safe mode is off, and then running
start-mapred.sh.
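That sequence can be sketched like this, run from the Hadoop install directory (assuming the standard bin/ scripts; `hadoop dfsadmin -safemode` is available in this version, but check `bin/hadoop dfsadmin` on yours):

```shell
# Start only the HDFS daemons (namenode + datanodes)
bin/start-dfs.sh

# Block until the namenode leaves safe mode;
# to just inspect the state instead, use: bin/hadoop dfsadmin -safemode get
bin/hadoop dfsadmin -safemode wait

# Now it is safe to start the JobTracker and TaskTrackers
bin/start-mapred.sh
```

The `wait` form saves you from polling the web UI by hand; it returns once the namenode reports safe mode is off.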
If you start the cluster after re-formatting, safe mode is not entered,
since DFS is empty.
Konstantin
Grant Ingersoll wrote:
Hi,
What does this mean:
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.dfs.SafeModeException: Cannot delete /tmp/hadoop-<my user name>/mapred/system. Name node is in safe mode.
Safe mode will be turned off automatically.
at org.apache.hadoop.dfs.FSNamesystem.delete(FSNamesystem.java:761)
at org.apache.hadoop.dfs.NameNode.delete(NameNode.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:385)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:514)
The directory (hadoop-<my user name>) doesn't even exist on either of
the machines I am using. I am getting this after calling start-all.sh
on my machine. My slaves file only has localhost.
Does this mean my FS has been corrupted? Looking at the code, the delete
is going through the NameNode. If I run ./hadoop namenode -format,
everything then works. The only thing I can think of that might be
related is that one of my worker nodes went to sleep while the server
thread was still running (idle) overnight.
Thanks,
Grant