[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-3702:
--------------------------------
    Attachment: HDFS-3702.011.patch

Hey, [~szetszwo] 

Thanks a lot for your good suggestions. 

bq. Please remove the "Fallback to use the default block placement." debug 
message 

Done

bq. When avoidLocalNode == false, ... We should only create the new array once 
in this case.

Done


bq. Please use new AddBlockFlag.NO_LOCAL_WRITE and add a new create method 
DistributedFileSystem but not adding CreateFlag.NO_LOCAL_WRITE.

Should we agree that {{AddBlockFlag}} is an internal flag used within HDFS? Meanwhile, {{CreateFlag.NO_LOCAL_WRITE}} is very similar to 
{{CreateFlag.LAZY_PERSIST}} in that:

*  Both of them are hints for block placement. The actual file system implementation 
can choose to support or ignore them.  Neither flag needs to be HDFS 
specific, i.e., some other distributed file systems (Lustre, or even Tachyon) 
could support them as well. 
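For illustration, here is a sketch of how a client such as an HBase region server might pass the flag through the existing {{FileSystem#create}} overload that already carries {{CreateFlag.LAZY_PERSIST}}. Note that {{CreateFlag.NO_LOCAL_WRITE}} is the constant proposed in this patch and is not in a released Hadoop version; the WAL path is hypothetical.

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class WalWriter {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path wal = new Path("/hbase/wal/region-0001.log"); // hypothetical WAL path

    // CREATE is required; NO_LOCAL_WRITE is the placement hint proposed
    // in this patch. A file system that does not understand the hint can
    // simply ignore it, just as it may ignore LAZY_PERSIST.
    EnumSet<CreateFlag> flags =
        EnumSet.of(CreateFlag.CREATE, CreateFlag.NO_LOCAL_WRITE);

    try (FSDataOutputStream out = fs.create(
        wal,
        FsPermission.getFileDefault(),
        flags,
        conf.getInt("io.file.buffer.size", 4096),
        fs.getDefaultReplication(wal),
        fs.getDefaultBlockSize(wal),
        null /* no progress callback */)) {
      out.writeBytes("wal entry");
    }
  }
}
{code}

This keeps the client-facing API a single {{create}} call with a hint set, rather than a separate HDFS-only create method on {{DistributedFileSystem}}.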

Thanks!

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3702
>                 URL: https://issues.apache.org/jira/browse/HDFS-3702
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 2.5.1
>            Reporter: Nicolas Liochon
>            Assignee: Lei (Eddy) Xu
>            Priority: Minor
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702.009.patch, HDFS-3702.010.patch, 
> HDFS-3702.011.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
