Thanks Daniel,

It means I have to check out the code from that branch and build it on my
local machine.

Gagan


On Thu, Aug 26, 2010 at 9:51 PM, Jean-Daniel Cryans <[email protected]> wrote:

> Then I would expect some form of data loss, yes, because stock Hadoop
> 0.20 doesn't have any form of fsync, so HBase doesn't know whether the
> data made it to the datanodes when appending to the WAL. Please use
> the 0.20-append Hadoop branch with HBase 0.89, or Cloudera's CDH3b2.
>
> J-D
>
> On Thu, Aug 26, 2010 at 7:22 AM, Gagandeep Singh
> <[email protected]> wrote:
> > HBase - 0.20.5
> > Hadoop - 0.20.2
> >
> > Thanks,
> > Gagan
> >
> >
> >
> > On Thu, Aug 26, 2010 at 7:11 PM, Jean-Daniel Cryans <[email protected]> wrote:
> >
> >> Hadoop and HBase version?
> >>
> >> J-D
> >>
> >> On Aug 26, 2010 5:36 AM, "Gagandeep Singh" <[email protected]>
> >> wrote:
> >>
> >> Hi Group,
> >>
> >> I am checking HBase/HDFS failover. I am inserting 1M records from my
> >> HBase client application. I am batching my Put operations such that 10
> >> records get added to a List<Put> before I call table.put(). I have not
> >> modified the default settings of the Put operation, which means all data
> >> is written to the WAL, so in case of a server failure my data should not
> >> be lost.
> >>
> >> But I noticed somewhat strange behavior: if I kill my region server
> >> while adding records, my application waits until the region's data is
> >> moved to another region server. But while doing so, all my data is lost
> >> and my table is emptied.
> >>
> >> Could you help me understand this behavior? Is there some kind of cache
> >> involved while writing, because of which my data is lost?
> >>
> >>
> >> Thanks,
> >> Gagan
> >>
> >
>
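The batching pattern described in the thread (collect Puts into a List<Put>, then flush the whole batch with a single table.put() call) can be sketched as below. This is a minimal sketch, not the poster's actual code: the `BatchSink` interface is a hypothetical stand-in for `HTable.put(List<Put>)` so the batching logic runs without an HBase cluster, and the row-key naming is invented for illustration. The batch size of 10 comes from the thread.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of flush-every-N batching, the way List<Put> is handed to
// HTable.put(List<Put>) in the 0.20-era HBase client.
public class PutBatching {
    static final int BATCH_SIZE = 10; // batch size described in the thread

    interface BatchSink {
        // Hypothetical stand-in: in real code this would be
        // HTable.put(List<Put>) against the live table.
        void flush(List<String> batch);
    }

    static int insert(int totalRecords, BatchSink sink) {
        List<String> batch = new ArrayList<>();
        int flushes = 0;
        for (int i = 0; i < totalRecords; i++) {
            // In real code: new Put(Bytes.toBytes("row-" + i)) plus
            // put.add(family, qualifier, value) before batching.
            batch.add("row-" + i);
            if (batch.size() == BATCH_SIZE) {
                sink.flush(batch); // one RPC per full batch
                batch.clear();
                flushes++;
            }
        }
        if (!batch.isEmpty()) { // flush any trailing partial batch
            sink.flush(batch);
            flushes++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        int flushes = insert(1000, batch -> { /* would call table.put(batch) */ });
        System.out.println("flushes=" + flushes); // 1000 records / 10 per batch = 100
    }
}
```

As the thread states, the default Put writes to the WAL; the point of J-D's reply is that batching on the client does not help durability if the underlying HDFS (stock 0.20) cannot fsync the WAL to the datanodes.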
