You can simulate a disk failure with fault-injection techniques; applying AspectJ is one of them.
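The AspectJ route weaves advice into the storage code so that writes start throwing IOException on demand. If weaving is more than you need, here is a minimal plain-Java sketch of the same fault-injection idea (all class and method names below are mine, not Hadoop code): wrap an OutputStream so it fails after a byte budget, mimicking a dying directory without touching permissions or hardware.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical fault-injection wrapper: behaves normally until the byte
// budget is spent, then throws IOException like a failing disk would.
class FaultyOutputStream extends FilterOutputStream {
    private long remaining;

    FaultyOutputStream(OutputStream out, long failAfterBytes) {
        super(out);
        this.remaining = failAfterBytes;
    }

    @Override
    public void write(int b) throws IOException {
        if (remaining-- <= 0) {
            throw new IOException("simulated disk failure");
        }
        out.write(b);
    }
}

public class FaultDemo {
    // Try to write 5 bytes through a stream that fails after 4.
    static String attemptWrite() {
        OutputStream faulty =
                new FaultyOutputStream(new ByteArrayOutputStream(), 4);
        try {
            faulty.write(new byte[] {1, 2, 3, 4, 5});
            return "no failure";
        } catch (IOException e) {
            return "caught: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(attemptWrite()); // prints "caught: simulated disk failure"
    }
}
```

Pointing a test's storage stream at a wrapper like this lets you exercise the error path deterministically, which chmod 000 (a permission error, not an I/O error) does not.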

On Wed, May 25, 2011 at 3:07 AM, ccxixicc <ccxix...@foxmail.com> wrote:

>
> I'm using 0.20.2.
> I ran some tests. I don't know how to simulate a disk failure; I just did
> chmod 000 on dir1, and the namenode shut down immediately. The NN will also
> hang if the NFS server goes down.
>
>
>
> ------------------ Original ------------------
> *From: * "Harsh J"<ha...@cloudera.com>;
> *Date: * Wed, May 25, 2011 03:49 PM
> *To: * "hdfs-user"<hdfs-user@hadoop.apache.org>;
> *Subject: * Re: What if one of the directory(dfs.name.dir) rw error ?
>
> Yes. But depending on the version you're using, you may have to
> manually restart the NN after fixing the mount points, to get the
> directories in action again.
>
> 2011/5/25 ccxixicc <ccxix...@foxmail.com>:
> >
> > Hi,all
> > I set dfs.name.dir to a comma-delimited list of directories: dir1 is on
> > /dev/sdb1, dir2 is on /dev/sdb2, and dir3 is an NFS directory.
> > What happens if /dev/sdb1 has a disk error, so dir1 cannot be read or written?
> > What happens if the NFS server goes down, so dir3 cannot be read or written?
> > Will Hadoop ignore the bad directory, use the good directories, and continue
> > serving?
> > Thanks.
> >
>
>
>
> --
> Harsh J
>
>
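For reference, the comma-delimited dfs.name.dir layout discussed above would look roughly like this in hdfs-site.xml (the mount-point paths are my assumptions; only the dir1/dir2/dir3 split comes from the original question):

```xml
<property>
  <name>dfs.name.dir</name>
  <!-- dir1 on /dev/sdb1, dir2 on /dev/sdb2, dir3 on an NFS mount -->
  <value>/mnt/sdb1/dir1,/mnt/sdb2/dir2,/mnt/nfs/dir3</value>
</property>
```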
