Hi Andrew,

Can we use the -setrep command to change the replication factor of
existing files, i.e., change the replication factor dynamically? If
that's possible, how much data movement overhead will it cause?
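
For concreteness, here is a rough sketch of what I have in mind using
the Java FileSystem API (the path and factor below are just examples I
made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetRepExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical existing file; raise its replication factor to 5.
        Path file = new Path("/data/logs/part-00000");
        fs.setReplication(file, (short) 5);

        // My understanding is that this only updates the target factor in
        // the NameNode metadata, and the extra block copies (or deletions,
        // when lowering the factor) happen in the background.
      }
    }

In other words, if a file was written with factor 3 and I raise it to
5, does the cluster end up copying roughly (5 - 3) x the file size over
the network, or is there additional overhead I should worry about?
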
Thanks!

Lipeng

On Tue, Mar 3, 2015 at 2:57 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:
> Yup, definitely. Check out the -setrep command:
>
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep
>
> HTH,
> Andrew
>
> On Tue, Mar 3, 2015 at 11:49 AM, Lipeng Wan <lipengwa...@gmail.com> wrote:
>
>> Hi Andrew,
>>
>> Thanks for your reply!
>> Then is it possible for us to specify different replication factors
>> for different files?
>>
>> Lipeng
>>
>> On Tue, Mar 3, 2015 at 2:38 PM, Andrew Wang <andrew.w...@cloudera.com>
>> wrote:
>> > Hi Lipeng,
>> >
>> > Right now that is unsupported; replication is set on a per-file
>> > basis, not per-block.
>> >
>> > Andrew
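
A minimal sketch of that per-file knob with the Java FileSystem API
(the paths, replication factors, buffer size, and block size below are
made-up example values):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PerFileReplication {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Create one file with replication factor 2 and another with 5.
        FSDataOutputStream cold = fs.create(
            new Path("/data/cold/file-a"), true, 4096,
            (short) 2, 128 * 1024 * 1024L);
        cold.close();

        FSDataOutputStream hot = fs.create(
            new Path("/data/hot/file-b"), true, 4096,
            (short) 5, 128 * 1024 * 1024L);
        hot.close();

        // An existing file can be changed later with setReplication();
        // either way the factor applies to the whole file, not to
        // individual blocks.
        fs.setReplication(new Path("/data/cold/file-a"), (short) 3);
      }
    }
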
>> >
>> > On Tue, Mar 3, 2015 at 11:23 AM, Lipeng Wan <lipengwa...@gmail.com>
>> > wrote:
>> >
>> >> Hi devs,
>> >>
>> >> By default, HDFS creates the same number of replicas for each block.
>> >> Is it possible for us to create more replicas for some of the blocks?
>> >> Thanks!
>> >>
>> >> L. W.
>> >>
>>
