Seth,

You are talking about durability, not HA.

For a better understanding, I recommend reading our architecture
page http://wiki.apache.org/hadoop/Hbase/HbaseArchitecture and the
Bigtable paper.

In short, when you write a row it first goes into the write-ahead
log (WAL) and then, right after that, into the MemStore. Once the
MemStore is full (64MB by default) or is flushed for some other
reason, it is written out to a file in HDFS, where the file is
replicated transparently.
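
To make that concrete, here's a rough sketch of a write using the
0.20 client API (table, family and qualifier names are made up):

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class PutExample {
    public static void main(String[] args) throws Exception {
      // Picks up hbase-site.xml from the classpath.
      HBaseConfiguration conf = new HBaseConfiguration();
      HTable table = new HTable(conf, "mytable");

      Put put = new Put(Bytes.toBytes("row1"));
      // add(family, qualifier, value)
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("v1"));

      // On the region server this edit is appended to the WAL first,
      // then applied to the MemStore. The flush to an HDFS file (and
      // that file's replication) happens later, asynchronously.
      table.put(put);
      table.flushCommits();
    }
  }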

If a node fails, the Master will process the WAL (splitting it so
its edits can be replayed) so that you don't lose the rows that were
still only in the MemStore. Note that prior to Hadoop 0.21
(unreleased), the append feature is badly crippled, so there's a
chance that edits written to the WAL may be lost because HDFS can't
guarantee a filesystem sync.
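
Related note: if your application can tolerate losing its most
recent edits on a crash, the 0.20 client lets you skip the WAL on a
per-Put basis, trading durability for write speed. If I remember
the API right, it looks something like this (names made up again):

  Put put = new Put(Bytes.toBytes("row2"));
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("v2"));
  put.setWriteToWAL(false); // edit goes straight to the MemStore
  table.put(put);           // lost if the server dies before a flush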

J-D

On Fri, Dec 11, 2009 at 10:20 AM, Seth Ladd <[email protected]> wrote:
> Aloha,
>
> We're currently investigating HBase (0.20.2) and are really enjoying
> the experience.  We're now curious how much High Availability we
> should expect.  Specifically, after we insert a row into HBase, when
> does it become HA?  That is, is it immediately shared across multiple
> nodes in the cluster?  I don't quite understand the relationship
> between a Region and its backing file in HDFS.
>
> Thanks for any tips or background you can provide.
>
> Seth
>
