Hi, all

I set dfs.name.dir to a comma-delimited list of directories: dir1 is on 
/dev/sdb1, dir2 is on /dev/sdb2, and dir3 is an NFS directory.
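For reference, a setup like the one described would look roughly like this in hdfs-site.xml (the mount paths here are hypothetical placeholders, not the actual ones in use):

```xml
<property>
  <name>dfs.name.dir</name>
  <!-- local disk 1, local disk 2, then an NFS mount -->
  <value>/mnt/sdb1/dir1,/mnt/sdb2/dir2,/mnt/nfs/dir3</value>
</property>
```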
What happens if /dev/sdb1 has a disk error, so dir1 cannot be read or written?

What happens if the NFS server goes down, so dir3 cannot be read or written?
Will Hadoop ignore the bad directory, keep using the good directories, and 
continue serving?

Thanks.
