Hi Erik and Ryan,

Thanks for your reply. This again proves how important fault tolerance is. It
seems I will have to write a bit of code to see if I can extract the data.
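Following Ryan's suggestion below, reading the store mapfiles directly might look roughly like the sketch here. This is untested and makes assumptions: it uses Hadoop's generic MapFile.Reader (in HBase 0.19 the store keys are HStoreKey and the values ImmutableBytesWritable), and the path layout shown in the comment is only an example of where a region's mapfiles may live.

```java
// Hedged sketch: dump raw key/value pairs from an HBase store MapFile
// using Hadoop's MapFile.Reader. Key/value classes are discovered from
// the file itself rather than hard-coded, since the exact classes
// depend on the HBase version.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.util.ReflectionUtils;

public class DumpMapFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // args[0]: directory holding the MapFile's index/data pair, e.g.
    // something like /hbase/<table>/<region>/<family>/mapfiles/<id>
    // (exact layout depends on your HBase version).
    MapFile.Reader reader = new MapFile.Reader(fs, args[0], conf);
    try {
      WritableComparable key = (WritableComparable)
          ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      Writable value = (Writable)
          ReflectionUtils.newInstance(reader.getValueClass(), conf);
      // Iterate every entry; from here you could re-insert the rows
      // into a freshly created table in a running instance.
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }
}
```

Running it would need the Hadoop and HBase jars on the classpath, and the cluster configuration on the classpath as well so FileSystem.get() resolves to HDFS rather than the local filesystem.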

Best,
Arber

On Tue, May 26, 2009 at 3:32 AM, Ryan Rawson <[email protected]> wrote:

> Maybe stack will chime in here with a potential recovery mechanism, but
> Erik is correct.  What has happened is that the metadata indicating which
> tables exist, and what their ranges are, has disappeared.  Right now there
> is no easy way to recover to the original state, because the missing
> metadata is not stored anywhere else.  What you can try in the meantime is
> directly accessing the mapfiles using the raw mapfile reader (I think the
> class is Mapfile...) - you'd be able to get the data out, then re-insert
> it into a running instance later.
>
> Needless to say, one should not trust a 1-node cluster with irreplaceable
> data.  Until certain HDFS bugs are resolved, which are slated for the
> HBase 0.20 timeline, there is always a data-loss hole.  The good news is
> there might be a backported HDFS 0.19 patch, but that may not be relevant
> since HBase 0.20 is based on Hadoop/HDFS 0.20.
>
> good luck...
>
> -ryan
>
> On Mon, May 25, 2009 at 12:24 PM, Erik Holstad <[email protected]
> >wrote:
>
> > Hey Arber!
> > What it sounds like to me is that the META table hadn't been flushed to
> > disk and was only sitting in memory, so when the machine went down that
> > data was lost.
> >
> > Regards Erik
> >
>