Hi.

Any ideas about running with a replication value of 2?

Was this fixed in the patches for 0.18.3, and if so, which patch was it?

Thanks.

On Thu, Aug 27, 2009 at 8:18 PM, Stas Oskin <[email protected]> wrote:

> Hi.
>
> Following up on this issue, any idea whether all the bugs with a
> replication value of 2 were worked out in 0.20?
>
> I remember 0.18.3 had some issues with this, and it actually caused a loss
> of data at some university.
>
> Regards.
>
> 2009/8/27 Alex Loddengaard <[email protected]>
>
> I don't know for sure, but running the rebalancer might do this for you.
>>
>> <http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html#Rebalancer>
>>
>> Alex
>>
>> On Thu, Aug 27, 2009 at 9:18 AM, Michael Thomas <[email protected]> wrote:
>>
>> > dfs.replication is only used by the client at the time the files are
>> > written.  Changing this setting will not automatically change the
>> > replication level of existing files.  To do that, you need to use the
>> > Hadoop CLI:
>> >
>> > hadoop fs -setrep -R 1 /
>> >
>> > --Mike
>> >
>> >
>> > Vladimir Klimontovich wrote:
>> > > This will happen automatically.
>> > > On Aug 27, 2009, at 6:04 PM, Andy Liu wrote:
>> > >
>> > >> I'm running a test Hadoop cluster, which had a dfs.replication value
>> > >> of 3.
>> > >> I'm now running out of disk space, so I've reduced dfs.replication to
>> > >> 1 and
>> > >> restarted my datanodes.  Is there a way to free up the over-replicated
>> > >> blocks, or does this happen automatically at some point?
>> > >>
>> > >> Thanks,
>> > >> Andy
>> > >
>> > > ---
>> > > Vladimir Klimontovich,
>> > > skype: klimontovich
>> > > GoogleTalk/Jabber: [email protected]
>> > > Cell phone: +7926 890 2349
>> > >
>> >
>> >
>>
>
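For reference, a minimal sketch of the setting discussed in the quoted thread, as it would appear in hdfs-site.xml (the value 1 mirrors Andy's example; pick whatever factor fits your cluster). As Mike notes above, this only applies to files written after the change; files that already exist keep their old replication level until you re-set it from the shell:

```xml
<!-- hdfs-site.xml: default replication factor for NEWLY written files only.
     Existing files are unaffected; change them explicitly with:
       hadoop fs -setrep -R 1 /
     (per Michael Thomas's note in the thread above). -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

After the setrep command runs, the namenode schedules deletion of the excess block replicas, so the space is not reclaimed instantly.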
