[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei (Eddy) Xu updated HDFS-3702:
--------------------------------
    Attachment: HDFS-3702_Design.pdf

Hey, [~arpitagarwal]. Here is the design doc.

The basic idea is simple: {{BlockPlacementPolicy}} first tries to allocate 
replicas with the local node added to the excluded-node set; if it cannot obtain 
sufficient replicas that way, it falls back to the normal placement. The rest of 
the patch changes the signatures of the related functions.
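
To illustrate the intended control flow, here is a small standalone sketch. The names below ({{chooseTargetAvoidingLocal}}, {{chooseTarget}}, etc.) are placeholders for illustration only and do not match the real {{BlockPlacementPolicy}} signatures touched by the patch:

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Placeholder sketch of "exclude the local node first, fall back if short".
// Not the actual HDFS-3702 code; node names stand in for datanodes.
public class NoLocalWriteSketch {

  /** Picks up to numReplicas nodes, skipping any in 'excluded'. */
  static List<String> chooseTarget(List<String> liveNodes,
                                   Set<String> excluded,
                                   int numReplicas) {
    List<String> targets = new ArrayList<>();
    for (String node : liveNodes) {
      if (targets.size() == numReplicas) break;
      if (!excluded.contains(node)) targets.add(node);
    }
    return targets;
  }

  /** First attempt with the local node excluded; fall back if too few. */
  static List<String> chooseTargetAvoidingLocal(List<String> liveNodes,
                                                Set<String> excluded,
                                                String localNode,
                                                int numReplicas) {
    Set<String> excludedPlusLocal = new HashSet<>(excluded);
    excludedPlusLocal.add(localNode);
    List<String> targets = chooseTarget(liveNodes, excludedPlusLocal, numReplicas);
    if (targets.size() >= numReplicas) {
      return targets;              // enough replicas without a local copy
    }
    // Not enough nodes available: fall back to the normal path,
    // which may place a replica on the local node again.
    return chooseTarget(liveNodes, excluded, numReplicas);
  }

  public static void main(String[] args) {
    List<String> nodes = List.of("dn1", "dn2", "dn3");
    // With 3 replicas wanted and only 3 live nodes, excluding the local
    // node leaves too few targets, so the fallback re-admits it.
    System.out.println(chooseTargetAvoidingLocal(nodes, new HashSet<>(), "dn1", 3));
  }
}
{code}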

Would it be OK? Thanks.

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3702
>                 URL: https://issues.apache.org/jira/browse/HDFS-3702
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 2.5.1
>            Reporter: Nicolas Liochon
>            Assignee: Lei (Eddy) Xu
>            Priority: Minor
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. That is most likely caused by a 
> hardware failure, in which case the co-located datanode will be dead as well. So 
> we write 3 replicas, but in reality only 2 of them are actually useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
