On Apr 30, 2012, at 11:10, Andrzej Bialecki wrote:

> On 30/04/2012 19:48, Keith Wiley wrote:
>>
>> (1) Any idea what the heck is going on here, how this happened, what it
>> means?
>
> The default hdfs config puts the namenode data in /tmp. This may be ok for
> casual testing, but in all other situations it's the worst location
> imaginable - for example, linux cleans this directory on reboot, and I think
> that's what happened here. Your HDFS data is gone to a better world...
>
>>
>> (2) Is there any way to recover without starting over from scratch?
>
> Regretfully, no. The lesson is: don't put precious files in /tmp.
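For reference, the default that causes this (from core-default.xml in Hadoop 1.x, if I'm reading it right) is:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
  </property>

and hdfs-default.xml derives dfs.name.dir and dfs.data.dir from it (${hadoop.tmp.dir}/dfs/name and ${hadoop.tmp.dir}/dfs/data), which is how the namenode metadata ends up under /tmp.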
Ah, okay. So when setting up a single-machine, pseudo-distributed cluster, what is a better way to do it? Where would one put the temp directories to make the Hadoop system more robust? Is this the sort of thing to put in a home directory? I never really conceptualized it that way; I always thought of HDFS, and Hadoop in general, as system-level concepts. This is a single-user machine and I have full root/admin control over it, so it's not a permissions issue. I'm just asking, at a philosophical level, how to set up a pseudo-distributed cluster in the most effective way. Thanks.

________________________________________________________________________________
Keith Wiley        kwi...@keithwiley.com        keithwiley.com        music.keithwiley.com

"I used to be with it, but then they changed what it was. Now, what I'm
with isn't it, and what's it seems weird and scary to me."
                                  -- Abe (Grandpa) Simpson
________________________________________________________________________________
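A minimal sketch of the usual fix, assuming a Hadoop 1.x conf/ layout and a home-directory location (the exact path is hypothetical; any persistent directory outside /tmp works). Point hadoop.tmp.dir somewhere durable in conf/core-site.xml:

  <configuration>
    <!-- hypothetical persistent location; keeps HDFS data out of /tmp -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/keith/hadoop-data</value>
    </property>
  </configuration>

Or pin the namenode and datanode directories explicitly in conf/hdfs-site.xml:

  <configuration>
    <property>
      <name>dfs.name.dir</name>
      <value>/home/keith/hadoop-data/dfs/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/home/keith/hadoop-data/dfs/data</value>
    </property>
  </configuration>

On a fresh setup, run "hadoop namenode -format" once after changing these and before starting the daemons; existing data under /tmp is not migrated automatically.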