Here's an error I've never seen before.  I rebooted my machine sometime last
week, so of course when I tried to run a Hadoop job this morning, the first
thing I was reminded of was that the pseudo-distributed cluster wasn't running.
I started it up, only to watch the jobtracker page appear in the browser
briefly and then go away (the typical error complaining that the port was
closed, as if the jobtracker had died).  The namenode, interestingly, never
came up at all during this time.  I tried stopping and starting everything
(stop-all.sh / start-all.sh) a few times, but to no avail.

I inspected the logs and saw this:

java.io.IOException: Missing directory /tmp/hadoop-keithw/dfs/name

Sure enough, that directory isn't there.  I'm not familiar with it, so I can't
say for certain whether it was ever there before, but presumably it was.
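
For what it's worth, the path looks like the stock default rather than anything
I configured myself.  If I'm reading the docs right, pinning the metadata to
somewhere persistent would mean something like the following in hdfs-site.xml
(just a sketch; the /home/keithw/hadoop-data path is made up, and I haven't
actually tried this):

    <!-- keep the namenode metadata outside /tmp -->
    <property>
      <name>dfs.name.dir</name>
      <value>/home/keithw/hadoop-data/dfs/name</value>
    </property>

If I understand the defaults, dfs.name.dir falls under hadoop.tmp.dir (which
itself defaults to /tmp/hadoop-${user.name}), which would explain why the path
in the error lives in /tmp.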

Now, I assume I could get around this by formatting a new namenode, but then I 
would have to copy my data back into HDFS from scratch.
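
If it comes to that, I assume the from-scratch path would look roughly like
this (sketch only; /backup/mydata stands in for a local copy of the data I'd
be re-importing):

    # stop the daemons, re-create the HDFS metadata, bring everything back up
    stop-all.sh
    hadoop namenode -format
    start-all.sh

    # re-copy the data from a local copy back into HDFS
    hadoop fs -put /backup/mydata /mydata

Obviously I'd rather not do that if the existing filesystem can be salvaged.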

So, two questions:

(1) Any idea what the heck is going on here, how it happened, and what it means?

(2) Is there any way to recover without starting over from scratch?

Thanks.

________________________________________________________________________________
Keith Wiley     kwi...@keithwiley.com     keithwiley.com    music.keithwiley.com

"And what if we picked the wrong religion?  Every week, we're just making God
madder and madder!"
                                           --  Homer Simpson
________________________________________________________________________________
