[
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214139#comment-15214139
]
Tsz Wo Nicholas Sze commented on HDFS-3702:
-------------------------------------------
- Please use the new AddBlockFlag.NO_LOCAL_WRITE and add a new create method
to DistributedFileSystem, rather than adding CreateFlag.NO_LOCAL_WRITE.
- Please remove the "Fallback to use the default block placement." debug
message, since it is not generally true -- it may not be a fallback case.
- When avoidLocalNode == false, the results array is initialized twice:
new ArrayList<>(chosenStorage) is called twice. We should create the new
array only once in this case.
Thanks.
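The double-initialization point can be sketched as follows. This is a minimal illustration of the review comment only, not the actual HDFS source; the names chooseTarget, chosenStorage, and localNode are stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: allocate the results list once, before the avoidLocalNode branch,
// instead of calling new ArrayList<>(chosenStorage) on both code paths.
// All names here are illustrative, not the real BlockPlacementPolicy code.
public class PlacementSketch {
    static List<String> chooseTarget(List<String> chosenStorage,
                                     boolean avoidLocalNode,
                                     String localNode) {
        // Single allocation, shared by both branches.
        List<String> results = new ArrayList<>(chosenStorage);
        if (avoidLocalNode) {
            // Hypothetical stand-in for "choose targets excluding the
            // client's local node".
            results.remove(localNode);
        }
        // ... remaining placement logic would operate on 'results' here.
        return results;
    }
}
```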
> Add an option for NOT writing the blocks locally if there is a datanode on
> the same box as the client
> -----------------------------------------------------------------------------------------------------
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Affects Versions: 2.5.1
> Reporter: Nicolas Liochon
> Assignee: Lei (Eddy) Xu
> Priority: Minor
> Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch,
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch,
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch,
> HDFS-3702.008.patch, HDFS-3702.009.patch, HDFS-3702.010.patch,
> HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that
> wrote them (the HBase regionserver) dies. That will likely be caused by a
> hardware failure, in which case the corresponding datanode will be dead as
> well. So we're writing 3 replicas, but in reality only 2 of them are really
> useful.
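The motivation can be illustrated with a toy model: with default placement the first replica lands on the writer's own machine, so if that machine fails (taking the regionserver and its local datanode down together), one replica is lost exactly when the WAL is needed. Node names and methods below are hypothetical, not HDFS code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of WAL replica placement. "node-local" stands for the machine
// hosting both the writer and a datanode; the other names are arbitrary.
public class WalReplicaSketch {
    static List<String> place(boolean avoidLocalNode) {
        String local = "node-local";
        List<String> remote = Arrays.asList("node-r1", "node-r2", "node-r3");
        List<String> replicas = new ArrayList<>();
        if (!avoidLocalNode) {
            replicas.add(local); // default: first replica is written locally
        }
        for (String n : remote) {
            if (replicas.size() == 3) break;
            replicas.add(n);
        }
        return replicas;
    }

    // Replicas still readable after the writer's machine (and its local
    // datanode) fails.
    static long usefulAfterLocalFailure(List<String> replicas) {
        return replicas.stream().filter(n -> !n.equals("node-local")).count();
    }
}
```

With default placement only 2 of the 3 replicas survive the failure that makes the WAL necessary; avoiding the local node keeps all 3 useful.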
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)