[
https://issues.apache.org/jira/browse/HDFS-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16568438#comment-16568438
]
Íñigo Goiri commented on HDFS-13088:
------------------------------------
I think that adding over-replication with a default of 0 to setReplication makes
sense.
However, I'm torn between changing the existing method and adding a new one with
the extra parameter.
The current approach in [^HDFS-13088.001.patch] adds the parameter and updates
all the other tests accordingly.
Maybe we should add a new method instead of changing the existing one.
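To illustrate the trade-off, here is a minimal sketch (class and method names are hypothetical, not the actual patch): adding an overload that defaults over-replication to 0 keeps the existing setReplication signature intact, so current callers and tests need no changes.

```java
// Hypothetical sketch of the overload approach; not the code in
// HDFS-13088.001.patch.
public class ReplicationApiSketch {
    private short replication;
    private short overReplication;

    // Existing-style signature: delegates with a default over-replication of 0,
    // so existing callers compile unchanged.
    public boolean setReplication(short replication) {
        return setReplication(replication, (short) 0);
    }

    // New overload carrying the extra parameter.
    public boolean setReplication(short replication, short overReplication) {
        this.replication = replication;
        this.overReplication = overReplication;
        return true;
    }

    public short getReplication() { return replication; }
    public short getOverReplication() { return overReplication; }
}
```

With this shape, the patch would only need to touch tests that exercise the new parameter, rather than every existing call site.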
> Allow HDFS files/blocks to be over-replicated.
> ----------------------------------------------
>
> Key: HDFS-13088
> URL: https://issues.apache.org/jira/browse/HDFS-13088
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Virajith Jalaparti
> Assignee: Virajith Jalaparti
> Priority: Major
> Attachments: HDFS-13088.001.patch
>
>
> This JIRA is to add a per-file "over-replication" factor to HDFS. As
> mentioned in HDFS-13069, the over-replication factor will be the excess
> replicas that will be allowed to exist for a file or block. This is
> beneficial if the application deems additional replicas for a file are
> needed. In the case of HDFS-13069, it would allow copies of data in PROVIDED
> storage to be cached locally in HDFS in a read-through manner.
> The Namenode will not proactively meet the over-replication, i.e., it does not
> schedule replications when the number of replicas for a block is less than
> (replication factor + over-replication factor), as long as it is at least the
> replication factor of the file.
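The rule in the quoted description can be sketched as two predicates (a minimal illustration with hypothetical names, not Namenode code): new replicas are scheduled only to reach the replication factor, while existing replicas are tolerated up to replication factor + over-replication factor before any are treated as excess.

```java
// Hedged sketch of the scheduling rule described above; names are
// hypothetical, not actual BlockManager methods.
public class OverReplicationRule {
    // True if the Namenode should schedule additional replicas: only when
    // the block is below its replication factor.
    public static boolean needsReplication(int liveReplicas, int replication) {
        return liveReplicas < replication;
    }

    // True if replicas should be removed as excess: only when the block
    // exceeds replication factor + over-replication factor.
    public static boolean hasExcess(int liveReplicas, int replication,
                                    int overReplication) {
        return liveReplicas > replication + overReplication;
    }
}
```

Between the two thresholds (e.g. 4 live replicas with replication 3 and over-replication 2), the Namenode neither schedules nor removes replicas.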
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)