First, make sure your dfs.namenode.name.dir is really left at the default.
Then, how did you find that /user exists? With hdfs dfs -ls, or by checking
dfs.datanode.data.dir on the local filesystem?
If the latter, then don't worry.
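
For example, you could check with something like the following (a minimal
sketch, assuming the stock bin/hdfs layout from your install):

bin/hdfs dfs -ls /                                # lists the HDFS root; if /user shows up here, it exists in the HDFS namespace
bin/hdfs getconf -confKey dfs.namenode.name.dir   # prints where the NameNode metadata actually lives on local disk
bin/hdfs getconf -confKey dfs.datanode.data.dir   # prints where the DataNode block data actually lives on local disk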


On Wed, Oct 8, 2014 at 11:56 AM, Tianyin Xu <t...@cs.ucsd.edu> wrote:

> Hi,
>
> I want to run some experiments on Hadoop that require a clean, initial
> system state of HDFS for every job execution, i.e., HDFS should be
> freshly formatted and contain nothing.
>
> I keep *dfs.datanode.data.dir* and *dfs.namenode.name.dir* at their
> defaults, which are located under /tmp.
>
> Every time before running a job,
>
> 1. I first delete  dfs.datanode.data.dir and dfs.namenode.name.dir
> #rm -Rf /tmp/hadoop-tianyin*
>
> 2. Then I format the NameNode
> #bin/hdfs namenode -format
>
> 3. Start HDFS
> sbin/start-dfs.sh
>
> 4. However, I still see the previous metadata (e.g., the directory I
> previously created) in HDFS. For example:
> #bin/hdfs dfs -mkdir /user
> mkdir: `/user': File exists
>
> Could anyone tell me what I missed or misunderstood? Why can I still see
> the old data after both physically deleting the directories and
> reformatting the HDFS NameNode?
>
> Thanks a lot for your help!
> Tianyin
>
