See http://issues.apache.org/jira/browse/HDFS-200. I think support for what
HBase needs from HDFS is almost there. Alternatively, you could consider a
different type of underlying filesystem -- Lustre, Gluster, etc.

However, you say that HBase crashes for a very small table. I wonder why that
is. For example, I have a table with more than 1 TB of data and hundreds of
regions spread over only 4 servers, and HBase is stable. What hardware
resources are you running HBase on (CPU, RAM, disk, number of servers, etc.)?
What other processes are you running on those servers? What are your
observations around the time of the crash? For example, have you noticed
exceptions in the HBase or DFS logs? I think getting to the bottom of this
would help you more than explicit flushing as a workaround.
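
That said, if you do want to force a flush in the meantime, below is a
minimal sketch against the 0.20 client API using the two calls you mentioned.
The table name "control" and the row/family/qualifier names are made-up
placeholders; substitute your own schema.

    import java.io.IOException;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ForceFlush {
      public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();

        // "control", "row1", "cf" and "q" are hypothetical names;
        // use your own table and column family here.
        HTable table = new HTable(conf, "control");
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        table.put(put);

        // Send any writes still buffered on the client to the region
        // servers (a no-op when autoFlush is on, which is the default).
        table.flushCommits();

        // Ask the region server to write the table's memstore out to
        // HFiles on HDFS; once on HDFS the edits survive a kill -9.
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.flush("control");
      }
    }

Note that HBaseAdmin#flush is asynchronous: it returns once the flush request
has been submitted, not once the memstore has actually been written out, so
give it a moment (or check the region server log) before pulling the plug.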

   - Andy

2009/8/23 Lucas Nazário dos Santos <[email protected]>

> Hi,
>
> I have a very small table under HBase that I use to store control data of
> my program. If HBase crashes and I have to kill it, the table goes away.
> However, I can still see the table inside HDFS.
>
> What is more interesting is that if I stop HBase properly, the table seems
> to be persisted and I no longer lose it as a consequence of a crash.
>
> Because the table is very small, I think it's not being flushed to HDFS
> (the table itself and/or the meta info). I have tried flushing everything
> with HTable#flushCommits and HBaseAdmin#flush, with no success. Has anybody
> already gone through this? How can I flush EVERYTHING to HDFS so that I
> won't lose data as a consequence of a kill -9? Any special configuration
> inside hbase-site.xml?
>
> I'm using HBase 0.20.0 RC2 together with Hadoop 0.20.0.
>
> Thanks,
> Lucas
>


