[ 
https://issues.apache.org/jira/browse/HDFS-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054008#comment-14054008
 ] 

Colin Patrick McCabe commented on HDFS-6482:
--------------------------------------------

The new "current" directory will contain hardlinks to the block and metadata 
files in the "previous" directory.  So it seems like rollback should work fine 
in this case.
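A small sketch (hypothetical paths, not the DataNode's actual layout code) of why 
hardlinking keeps rollback safe: a hard link shares the same underlying file data, 
so discarding the new "current" directory leaves every block still reachable 
through "previous".

```java
import java.nio.file.*;

// Illustrative only: shows that a hard link created during "upgrade"
// survives deletion of the new-layout copy, so "rollback" loses nothing.
public class HardlinkRollbackSketch {
    public static String demo() throws Exception {
        Path dir = Files.createTempDirectory("upgrade-sketch");
        Path previous = Files.createDirectory(dir.resolve("previous"));
        Path current = Files.createDirectory(dir.resolve("current"));

        // Old-layout block file.
        Path oldBlk = Files.write(previous.resolve("blk_1001"),
                                  "block-data".getBytes());

        // Upgrade: hardlink the block into the new layout.
        Path newBlk = Files.createLink(current.resolve("blk_1001"), oldBlk);

        // Rollback: discard the new layout entirely.
        Files.delete(newBlk);
        Files.delete(current);

        // The original data is still intact under "previous".
        return new String(Files.readAllBytes(oldBlk));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```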

It would be nice to add a unit test where we upgrade a DataNode from the 
non-blockid-based version to a blockid-based version, and then do a rollback.  
Can you add this, James?  Since you already added 
{{hadoop-24-datanode-dir.tgz}}, it shouldn't be too difficult to add a unit 
test that rolls back to this version from the new version.

> Use block ID-based block layout on datanodes
> --------------------------------------------
>
>                 Key: HDFS-6482
>                 URL: https://issues.apache.org/jira/browse/HDFS-6482
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.5.0
>            Reporter: James Thomas
>            Assignee: James Thomas
>         Attachments: HDFS-6482.1.patch, HDFS-6482.2.patch, HDFS-6482.3.patch, 
> HDFS-6482.4.patch, HDFS-6482.5.patch, HDFS-6482.6.patch, HDFS-6482.7.patch, 
> HDFS-6482.patch
>
>
> Right now blocks are placed into directories that are split into many 
> subdirectories when capacity is reached. Instead we can use a block's ID to 
> determine the path it should go in. This eliminates the need for the LDir 
> data structure that facilitates splitting directories when they reach 
> capacity, as well as the fields in ReplicaInfo that track a replica's 
> location.
> An extension of the work in HDFS-3290.
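For readers following along, a minimal sketch of the idea (the constants and 
method name here are illustrative, not necessarily what the patch uses): a few 
bits of the block ID select a fixed two-level subdirectory, so no splitting 
logic or per-replica location field is needed.

```java
// Hypothetical sketch of an ID-based layout: the path is a pure function
// of the block ID, computed from two 5-bit slices (32 entries per level).
public class BlockIdPathSketch {
    public static String idToBlockDir(long blockId) {
        int d1 = (int) ((blockId >> 16) & 0x1F);  // first-level subdir index
        int d2 = (int) ((blockId >> 8) & 0x1F);   // second-level subdir index
        return "subdir" + d1 + "/" + "subdir" + d2;
    }

    public static void main(String[] args) {
        // Any replica of the same block always lands in the same directory.
        System.out.println(idToBlockDir(1073741825L));
    }
}
```

Because the mapping is deterministic, a DataNode can locate a replica from its 
ID alone, which is what makes the LDir structure and the location fields in 
ReplicaInfo unnecessary.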



--
This message was sent by Atlassian JIRA
(v6.2#6252)
