By the way, something that I forgot to ask before:

Will the append implementation be as fast as the write/copy functions?

Regards.

2009/9/23 Stas Oskin <[email protected]>

> Thanks, that's exactly what I wanted to ask for our R&D roadmap.
>
> 2009/9/23 Aaron Kimball <[email protected]>
>
>> Or maybe more pessimistically, the second "stable" append implementation.
>>
>> It's not like HADOOP-1700 wasn't intended to work. It was just found not to
>> after the fact. Hopefully this reimplementation will succeed. If you're
>> running a cluster that contains mission-critical data that cannot tolerate
>> corruption or loss, you shouldn't jump on the new-feature bandwagon until
>> it's had time to prove itself in the wild.
>>
>> But yes, we hope that appends will really-truly work in 0.21.
>> Experimental/R&D projects should be able to plan on having a working append
>> function in 0.21.
>>
>> - Aaron
>>
>> On Sun, Sep 20, 2009 at 3:58 PM, Stas Oskin <[email protected]> wrote:
>>
>> > Hi.
>> >
>> > Just to understand the road-map: will 0.21 be the first stable "append"
>> > implementation?
>> >
>> > Regards.
>> >
>> > 2009/9/20 Owen O'Malley <[email protected]>
>> >
>> > >
>> > > On Sep 13, 2009, at 3:08 AM, Stas Oskin wrote:
>> > >
>> > >> Hi.
>> > >>
>> > >> Any idea when the "append" functionality is expected?
>> > >>
>> > >
>> > > A working append is a blocker on HDFS 0.21.0.
>> > >
>> > > The code for append is expected to be complete in a few weeks. Meanwhile,
>> > > the rest of Common, HDFS, and MapReduce have feature-frozen and need to be
>> > > stabilized and all of the critical bugs fixed. I'd expect the first
>> > > releases of 0.21.0 in early November.
>> > >
>> > > -- Owen
>> > >
>> >
>>
>
