Hi Lucas!
Not sure if you have had a look at the Bigtable paper; the link at the
beginning of http://hadoop.apache.org/hbase/ might clear up some of the confusion.
But basically, to support fast writes we only write to memory and
periodically flush that data to disk. While data is still only in memory
it is not persisted; it needs to be written to disk/HDFS for that to be true.
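
To make the flush idea concrete, here is a rough Java sketch of the
pattern (illustrative names and threshold, not our actual classes):

import java.io.FileWriter;
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

// Writes land in a sorted in-memory map and are flushed to a file
// once the map grows past a (hypothetical) threshold.
public class MemStoreSketch {
    private static final int FLUSH_THRESHOLD = 1000; // made-up limit
    private final TreeMap<String, String> memstore = new TreeMap<>();
    private int flushCount = 0;

    // Fast write: only touches memory, so it is not yet durable.
    public void put(String key, String value) throws IOException {
        memstore.put(key, value);
        if (memstore.size() >= FLUSH_THRESHOLD) {
            flush();
        }
    }

    // Persist the in-memory data to disk and start a fresh memstore.
    private void flush() throws IOException {
        try (FileWriter out = new FileWriter("flush-" + (flushCount++) + ".dat")) {
            for (Map.Entry<String, String> e : memstore.entrySet()) {
                out.write(e.getKey() + "\t" + e.getValue() + "\n");
            }
        }
        memstore.clear();
    }
}

Anything put() between two flushes exists only in the map, which is
exactly the window where data could be lost without a second mechanism.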
We have a second mechanism to avoid losing data while it is sitting in
memory. This is called the WriteAheadLog, and we are still waiting for
Hadoop to support one of the features needed to make this work, which
hopefully will not take too long.
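
For reference, the general write-ahead-log idea looks roughly like this
(again just an illustrative sketch, not our actual implementation):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.TreeMap;

// Every edit is appended and synced to a log file *before* it is
// applied in memory, so edits still sitting in memory can be
// replayed from the log after a crash.
public class WalSketch {
    private final FileOutputStream log;
    private final TreeMap<String, String> memstore = new TreeMap<>();

    public WalSketch(String logPath) throws IOException {
        this.log = new FileOutputStream(logPath, true); // append mode
    }

    public void put(String key, String value) throws IOException {
        byte[] record = (key + "\t" + value + "\n").getBytes(StandardCharsets.UTF_8);
        log.write(record);
        log.getFD().sync(); // force the edit to disk before acknowledging
        memstore.put(key, value); // now safe to serve from memory
    }
}

The catch is that sync-on-append is the sort of filesystem guarantee we
need from Hadoop for this to actually be durable on HDFS.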

Hope this helped.

Erik
