[
https://issues.apache.org/jira/browse/HDFS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732269#action_12732269
]
dhruba borthakur commented on HDFS-385:
---------------------------------------
The cookie approach has a major disadvantage. Suppose the cookie is
implemented as a pathname in hadoop 0.21, and somebody builds an app against
this api that parses the cookie as an hdfs pathname. For hadoop 0.22, we change
the cookie to be a fileid. The earlier application still compiles against the
hadoop 0.22 release, but at runtime the app fails because it can no longer parse
the cookie as a pathname. If instead we change the API signature for hadoop
0.22 to reflect that the pathname is no longer available (a fileid is
available instead), then the app fails at compile time itself, which might be
better than failing at runtime. No?
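A minimal sketch of the argument above (all names here are illustrative, not the actual HDFS API): an opaque String cookie lets a caller's pathname-parsing assumption survive compilation and blow up at runtime, whereas changing the return type to a numeric fileid would surface the incompatibility as a compile error.

```java
// Hypothetical illustration of the opaque-cookie pitfall; not real HDFS code.
public class CookieExample {

    // 0.21-style API: the cookie happens to be a pathname string.
    static String listCookie021() { return "/user/dhruba/part-00000"; }

    // 0.22-style API: the cookie is now a file id, but the String
    // signature is unchanged, so callers still compile.
    static String listCookie022() { return "16385"; }

    // An app that (wrongly) assumes every cookie is a pathname.
    static String parseAsPath(String cookie) {
        if (!cookie.startsWith("/")) {
            throw new IllegalArgumentException("not a pathname: " + cookie);
        }
        return cookie;
    }

    public static void main(String[] args) {
        System.out.println(parseAsPath(listCookie021())); // works in 0.21
        try {
            parseAsPath(listCookie022()); // compiles fine, fails at runtime
        } catch (IllegalArgumentException e) {
            System.out.println("runtime failure: " + e.getMessage());
        }
        // Had 0.22 changed the signature to `long listCookie022()` instead,
        // passing its result to parseAsPath(String) would fail to compile,
        // surfacing the incompatibility at build time rather than runtime.
    }
}
```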
> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
> Key: HDFS-385
> URL: https://issues.apache.org/jira/browse/HDFS-385
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Fix For: 0.21.0
>
> Attachments: BlockPlacementPluggable.txt,
> BlockPlacementPluggable2.txt, BlockPlacementPluggable3.txt,
> BlockPlacementPluggable4.txt, BlockPlacementPluggable4.txt,
> BlockPlacementPluggable5.txt
>
>
> The current HDFS code typically places one replica on local rack, the second
> replica on remote random rack and the third replica on a random node of that
> remote rack. This algorithm is baked in the NameNode's code. It would be nice
> to make the block placement algorithm a pluggable interface. This will allow
> experimentation of different placement algorithms based on workloads,
> availability guarantees and failure models.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.