data node process should not die if one dir goes bad
----------------------------------------------------

                 Key: HADOOP-4480
                 URL: https://issues.apache.org/jira/browse/HADOOP-4480
             Project: Hadoop Core
          Issue Type: Bug
          Components: fs
    Affects Versions: 0.18.1
            Reporter: Allen Wittenauer


When multiple directories are configured for the data node process to use for 
block storage, it currently exits when one of them is not writable. Instead, 
it should either ignore that directory entirely, or continue reading from it 
and mark it unusable only if reads also fail.
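For illustration, here is a minimal sketch (in Java, since the DataNode is 
written in Java) of the suggested startup behavior: filter out unusable 
directories and abort only when none remain. The class and method names are 
hypothetical and the directory list stands in for a dfs.data.dir value; this 
is not the actual DataNode code.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    /**
     * Sketch only: keep the healthy storage directories and fail
     * the process only if none of them are usable.
     */
    public class VolumeCheck {

      /** Returns the subset of configured dirs that are usable. */
      static List<File> filterUsableDirs(String[] configuredDirs) {
        List<File> usable = new ArrayList<File>();
        for (String path : configuredDirs) {
          File dir = new File(path);
          // A directory is usable if it exists (or can be created)
          // and is both readable and writable.
          if ((dir.isDirectory() || dir.mkdirs())
              && dir.canRead() && dir.canWrite()) {
            usable.add(dir);
          } else {
            // Log and skip instead of exiting the whole process.
            System.err.println("Ignoring bad storage dir: " + path);
          }
        }
        return usable;
      }

      public static void main(String[] args) {
        // Hypothetical dfs.data.dir value with one bad entry.
        String[] dirs = { "/data1/dfs", "/data2/dfs", "/badmount/dfs" };
        List<File> usable = filterUsableDirs(dirs);
        if (usable.isEmpty()) {
          // Only die when *no* configured directory is usable.
          throw new RuntimeException("All storage directories failed");
        }
        System.out.println("Starting with " + usable.size() + " dirs");
      }
    }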
