Ah, I do recall reading that somewhere now.  Thank you very much.

Keith

On Sun, Nov 6, 2011 at 8:21 PM, Harsh J <[email protected]> wrote:

> Keith,
>
> This is because your dfs.data.dir and dfs.name.dir are, by default, on
> /tmp. When your /tmp is cleared by the OS (a regular thing people
> forget to think of), your HDFS is essentially wiped away.
>
> Configure dfs.name.dir and dfs.data.dir to be on a proper directory
> that isn't cleaned up periodically and/or at boot, and you'll have a
> proper HDFS across 'sessions'.
>
> On Mon, Nov 7, 2011 at 2:45 AM, Keith Thompson <[email protected]>
> wrote:
> > Hi,
> >
> > I am running Hadoop in pseudo-distributed mode on Linux.  For some
> reason,
> > I have to reformat the namenode every time I start up Hadoop because it
> > will fail whenever I try to connect to the HDFS.  After I reformat, it
> runs
> > fine for that session; however, if I try to run it again later it will
> have
> > the same issue.  There is probably some setting I forgot to set
> somewhere.
> > Can anyone help?
> >
> > --
> > *Keith Thompson*
> > Graduate Research Associate
> > SUNY Research Foundation
> > Dept. of Systems Science and Industrial Engineering
> > Binghamton University
> >
>
>
>
> --
> Harsh J
>
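For reference, a minimal hdfs-site.xml sketch of the fix Harsh describes, moving both directories off /tmp (the /var/hadoop paths below are assumptions; use any persistent location on your system):

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: point NameNode and DataNode storage at directories
     that survive reboots and periodic /tmp cleanup. Paths are examples. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/hadoop/dfs/data</value>
  </property>
</configuration>
```

After changing these values, run `hadoop namenode -format` once (this wipes any existing HDFS data), then restart the daemons; subsequent restarts should no longer require a reformat.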



-- 
*Keith Thompson*
Graduate Research Associate, Xerox Corporation
SUNY Research Foundation
Dept. of Systems Science and Industrial Engineering
Binghamton University
work: 585-422-6587