Did you run a MapReduce job?

I think the default replication factor on job files is 10, which
obviously doesn't work well on a pseudo-distributed cluster.
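
If so, you can lower it for a pseudo-distributed setup. A rough sketch
(assuming the job goes through ToolRunner so -D is honored, and that the
classic mapred.submit.replication property applies; the jar, class, and
paths below are made up):

  # submit with the job-file replication dropped to 1
  hadoop jar my-job.jar MyJob -D mapred.submit.replication=1 in out

You could also set the same property in mapred-site.xml so every job
picks it up.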

-Joey

On Wed, May 18, 2011 at 5:07 PM, Steve Cohen <mail4st...@gmail.com> wrote:
> Thanks for the answer. Earlier, I asked why I get occasional "not
> replicated yet" errors. At the time, I had dfs.replication set to one. What
> replication could it have been doing? Did the error messages actually mean
> that the file couldn't be created in the cluster?
>
> Thanks,
> Steve Cohen
>
>
>
> On May 18, 2011, at 6:39 PM, Todd Lipcon <t...@cloudera.com> wrote:
>
>> Tried to send this, but apparently SpamAssassin finds emails about
>> "replicas" to be spammy. This time with less rich text :)
>>
>> On Wed, May 18, 2011 at 3:35 PM, Todd Lipcon <t...@cloudera.com> wrote:
>>>
>>> Hi Steve,
>>> Running setrep will indeed change those files. Changing "dfs.replication" 
>>> just changes the default replication value for files created in the future. 
>>> Replication level is a file-specific property.
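>>> For example (just a sketch; the path here is made up):
>>>
>>>   # bump existing files to 2 replicas, recursively from the root
>>>   hadoop fs -setrep -R 2 /
>>>
>>> Files created afterwards will still use dfs.replication (or whatever
>>> replication the client requests per file).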
>>> Thanks
>>> -Todd
>>>
>>> On Wed, May 18, 2011 at 3:32 PM, Steve Cohen <mail4st...@gmail.com> wrote:
>>>>
>>>> Say I add a datanode to a pseudo-distributed cluster and I want to change
>>>> the replication factor to 2. I see that I can either run hadoop fs -setrep
>>>> or change the hdfs-site.xml value for dfs.replication. But does either
>>>> of these cause the existing blocks to replicate?
>>>>
>>>> Thanks,
>>>> Steve Cohen
>>>
>>>
>>>
>>> --
>>> Todd Lipcon
>>> Software Engineer, Cloudera
>>
>>
>>
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>



-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434
