[ https://issues.apache.org/jira/browse/HADOOP-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12709021#action_12709021 ]

Konstantin Shvachko commented on HADOOP-3799:
---------------------------------------------

> The plugin will analyze a set of past access patterns (stored in an external 
> db)

What external db? Dhruba, could you please elaborate on what you are trying to 
do, perhaps in the form of a document.
Mixing different placement policies in the same (instance of a) file system is 
not a good idea imo; the policies may contradict each other.
I would rather allow formatting a file system with a specific policy and then 
keep that policy constant for the lifespan of the system.
That still leaves enough room for experimenting with different policies.
But I agree that the balancer and fsck should be policy-aware.
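
For illustration, a policy that is fixed per file system instance could sit behind 
a small interface along the lines of the sketch below. The interface and method 
names here are hypothetical and are not taken from the attached patch; nodes are 
identified by plain strings to keep the sketch self-contained.

{code}
import java.util.List;

/**
 * Hypothetical sketch: a block placement policy selected once (e.g. when the
 * file system is formatted or the NameNode starts) and kept constant for the
 * lifespan of the file system instance.
 */
public interface BlockPlacementPolicy {

  /**
   * Pick target nodes for a new block. Nodes are plain names here; the real
   * code would work with datanode descriptors and the network topology.
   */
  List<String> chooseTargets(String path, int replication,
                             String writerNode, List<String> excludedNodes);

  /**
   * Report whether an existing set of replica locations satisfies the policy,
   * so that fsck and the balancer can stay policy-aware.
   */
  boolean isPlacementSatisfied(List<String> replicaNodes);
}
{code}

Loading exactly one implementation from a single configuration key at startup 
would keep the policy constant per instance and avoid mixing contradictory 
policies in the same file system.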

> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
>                 Key: HADOOP-3799
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3799
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: BlockPlacementPluggable.txt
>
>
> The current HDFS code typically places one replica on the local rack, the second 
> replica on a random remote rack and the third replica on a random node of that 
> remote rack. This algorithm is baked into the NameNode's code. It would be nice 
> to make the block placement algorithm a pluggable interface. This would allow 
> experimentation with different placement algorithms based on workloads, 
> availability guarantees and failure models.
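
For illustration only, the default behaviour quoted above (first replica on the 
local rack, second on a random remote rack, third on another node of that remote 
rack) can be sketched roughly as below. This is a simplified stand-in, not the 
actual NameNode code; racks and nodes are plain strings and at least two racks 
are assumed.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Simplified illustration of the default placement described above. */
public class DefaultPlacementSketch {
  private final Random random = new Random();

  /** nodesByRack maps a rack name to the datanodes it contains. */
  public List<String> chooseTargets(String writerRack,
                                    Map<String, List<String>> nodesByRack) {
    List<String> targets = new ArrayList<String>();

    // 1st replica: a node on the writer's (local) rack.
    targets.add(randomNode(nodesByRack.get(writerRack)));

    // 2nd replica: a node on a randomly chosen remote rack.
    String remoteRack = randomRemoteRack(writerRack, nodesByRack);
    String second = randomNode(nodesByRack.get(remoteRack));
    targets.add(second);

    // 3rd replica: a different node on that same remote rack, if one exists.
    List<String> remoteNodes = nodesByRack.get(remoteRack);
    String third = randomNode(remoteNodes);
    while (third.equals(second) && remoteNodes.size() > 1) {
      third = randomNode(remoteNodes);
    }
    targets.add(third);

    return targets;
  }

  private String randomNode(List<String> nodes) {
    return nodes.get(random.nextInt(nodes.size()));
  }

  private String randomRemoteRack(String localRack,
                                  Map<String, List<String>> nodesByRack) {
    List<String> racks = new ArrayList<String>(nodesByRack.keySet());
    racks.remove(localRack);
    return racks.get(random.nextInt(racks.size()));
  }
}
{code}

A pluggable interface would let such logic be swapped out, and the balancer and 
fsck could query the active policy instead of hard-coding these rules.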

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
