[ 
https://issues.apache.org/jira/browse/HDFS-7035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7035:
--------------------------------
    Attachment: HDFS-7035.004.patch

Make {{addVolume()}} an atomic operation. The volume metadata in 
{{DataStorage}} and {{FsDataset}} is first loaded into a local copy. After all 
I/O finishes, and only if nothing failed, the {{DataNode}} commits the loaded 
volume metadata to {{DataStorage}} and {{FsDataset}} respectively. Therefore, 
if any error occurs while loading a volume, the metadata belonging to that 
volume is never made visible to the service.
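The copy-then-commit pattern above can be sketched roughly as follows. This is a minimal illustration only; {{AtomicAddVolume}}, {{loadStorageDir()}}, and the use of a plain string list as stand-in state are hypothetical names, not the actual patch's API.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: all I/O lands in a local scratch copy, and the
// shared state is updated only after every load has succeeded.
class AtomicAddVolume {
    // Simulated committed state (stands in for DataStorage / FsDataset).
    static final List<String> committedVolumes = new ArrayList<>();

    // Hypothetical per-directory load; an empty path simulates an I/O failure.
    static void loadStorageDir(String dir, List<String> scratch) throws IOException {
        if (dir.isEmpty()) {
            throw new IOException("failed to load volume: " + dir);
        }
        scratch.add(dir); // results go into the local copy only
    }

    static void addVolumes(List<String> dirs) throws IOException {
        List<String> scratch = new ArrayList<>(); // local copy of the metadata
        for (String d : dirs) {
            loadStorageDir(d, scratch); // any failure aborts before the commit
        }
        // Commit point: reached only if every load above succeeded.
        committedVolumes.addAll(scratch);
    }
}
```

A failed batch leaves the committed state untouched, which is the visible-only-on-success behavior the patch describes.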

It also captures the error messages of the {{IOExceptions}} raised in 
{{DataStorage#removeVolumes()}}.
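One common way to capture those messages, shown here only as a hedged sketch (the {{removeDir()}} helper and the aggregation into a single exception are illustrative, not the patch's actual code), is to keep removing volumes on failure and surface all collected messages at the end:

```java
import java.io.IOException;
import java.util.List;

// Illustrative sketch: capture each IOException's message, continue with
// the remaining volumes, then rethrow one exception carrying all messages.
class RemoveVolumesSketch {
    // Hypothetical per-directory removal; "bad" prefixes simulate failures.
    static void removeDir(String dir) throws IOException {
        if (dir.startsWith("bad")) {
            throw new IOException("cannot remove " + dir);
        }
    }

    static void removeVolumes(List<String> dirs) throws IOException {
        StringBuilder errors = new StringBuilder();
        for (String d : dirs) {
            try {
                removeDir(d);
            } catch (IOException e) {
                errors.append(e.getMessage()).append('\n'); // capture, keep going
            }
        }
        if (errors.length() > 0) {
            throw new IOException(errors.toString()); // report every failure at once
        }
    }
}
```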

> Refactor DataStorage and BlockPoolSliceStorage 
> -----------------------------------------------
>
>                 Key: HDFS-7035
>                 URL: https://issues.apache.org/jira/browse/HDFS-7035
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: 2.5.0
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>         Attachments: HDFS-7035.000.combo.patch, HDFS-7035.000.patch, 
> HDFS-7035.001.combo.patch, HDFS-7035.001.patch, HDFS-7035.002.patch, 
> HDFS-7035.003.patch, HDFS-7035.003.patch, HDFS-7035.004.patch
>
>
> {{DataStorage}} and {{BlockPoolSliceStorage}} share many similar code paths. 
> This jira extracts the common parts of these two classes to simplify the 
> logic for both.
> This is the ground work for handling partial failures during hot swapping 
> volumes.
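The kind of extraction the description refers to can be sketched with a template method in a shared base class. The names below ({{StorageBaseSketch}}, {{loadStorageDirectory()}}, {{describe()}}) are hypothetical and only illustrate the pattern, not the refactoring actually done in the patch.

```java
// Illustrative sketch: shared load logic hoisted into a common base class,
// with subclass-specific detail supplied through an abstract hook.
abstract class StorageBaseSketch {
    // Template method: the common sequence both storage classes follow.
    final String loadStorageDirectory(String dir) {
        return describe() + " loaded " + dir; // shared steps live here once
    }

    abstract String describe(); // subclass-specific behavior
}

class DataStorageSketch extends StorageBaseSketch {
    String describe() { return "DataStorage"; }
}

class BlockPoolSliceStorageSketch extends StorageBaseSketch {
    String describe() { return "BlockPoolSliceStorage"; }
}
```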



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
