>> I don't know how to simulate a disk failure..

A couple of things you could do; chmod 000 on the directory is one (rough sketch below this list).
1. umount -l the filesystem
2. Remount it read-only
3. If the machine has hot-swappable disks, pull one out.
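
For example, to mimic a failed disk for one dfs.name.dir directory without touching hardware, something like this should do (the paths and device names are only placeholders):

    # Suppose one dfs.name.dir entry is /data/1/dfs/nn, living on /dev/sdb1
    chmod 000 /data/1/dfs/nn             # make the directory unreadable/unwritable
    # or remount the whole filesystem read-only so edits-log writes start failing:
    mount -o remount,ro /dev/sdb1

    # Undo afterwards:
    chmod 700 /data/1/dfs/nn
    mount -o remount,rw /dev/sdb1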

-Bharath



________________________________
From: ccxixicc <ccxix...@foxmail.com>
To: hdfs-user <hdfs-user@hadoop.apache.org>
Sent: Wednesday, May 25, 2011 1:07 AM
Subject: Re: What if one of the directory(dfs.name.dir) rw error ?




I'm using 0.20.2.

I did some testing. I didn't know how to simulate a disk failure, so I just
chmod 000 dir1; the namenode shut down immediately. And the NN will hang if
the NFS server goes down.
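
Maybe mounting the NFS directory soft instead of hard would at least stop it from blocking forever? Something like this (the options, server and paths are only illustrative):

    # soft: NFS calls return an error instead of hanging when the server is down
    # timeo is in tenths of a second, retrans is the retry count
    mount -t nfs -o tcp,soft,intr,timeo=10,retrans=10 nfsserver:/export/dfs/nn /mnt/nfs/dfs/nn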

 
 
------------------ Original ------------------
From:  "Harsh J"<ha...@cloudera.com>;
Date:  Wed, May 25, 2011 03:49 PM
To:  "hdfs-user"<hdfs-user@hadoop.apache.org>; 
Subject:  Re: What if one of the directory(dfs.name.dir) rw error ?
 
Yes. But depending on the version you're using, you may have to
manually restart the NN after fixing the mount points to bring those
directories back into service.
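
On 0.20.x that restart is just the usual daemon script on the NameNode host (HADOOP_HOME here stands for wherever your install lives):

    # run on the NameNode machine once the bad mount/directory is fixed
    $HADOOP_HOME/bin/hadoop-daemon.sh stop namenode
    $HADOOP_HOME/bin/hadoop-daemon.sh start namenode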

2011/5/25 ccxixicc <ccxix...@foxmail.com>:
>
> Hi,all
> I set dfs.name.dir to a comma-delimited list of directories: dir1 is on
> /dev/sdb1, dir2 is on /dev/sdb2, and dir3 is an NFS directory.
> What happens if the /dev/sdb1 disk has an error, so dir1 cannot be read or written?
> What happens if the NFS server goes down, so dir3 cannot be read or written?
> Will Hadoop ignore the bad directory, keep using the good ones, and continue
> serving?
> Thanks.
>
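
For reference, a dfs.name.dir setup along the lines described above would look roughly like this in hdfs-site.xml (the paths are only placeholders):

    <property>
      <name>dfs.name.dir</name>
      <!-- two local disks plus one NFS mount; the NN writes the image and edits to every listed directory -->
      <value>/data/sdb1/dfs/nn,/data/sdb2/dfs/nn,/mnt/nfs/dfs/nn</value>
    </property>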



-- 
Harsh J
