Hey, this can happen in a couple of scenarios:
1. If the "writeBuffer" value is quite large and the writes are too little for "autoflush" to be called [default is 2mb for writeBuffer] 2. You have set the "autoFlush" to false and never call flushCommits If you haven't configured these properties in hbase-site.xml Immediate soln: > puttable.setAutoFlush(true); // for your table in the code > put.setWriteToWAL(true); // this is for more reliability I'm sure this would persist every single writes in table, but you need to fine-tune these properties for your reliability and performance levels. -Thanks, Dani http://www.cc.gatech.edu/~iar3/ On Fri, Jan 28, 2011 at 8:37 PM, Something Something < [email protected]> wrote: > Apologies for my dumbness. I know it's some property that I am not setting > correctly. But every time I stop & start HBase & Hadoop I either lose all > my tables or loose rows on tables in HBase. > > Here's what various files contain: > > *core-site.xml* > <configuration> > <property> > <name>fs.default.name</name> > <value>hdfs://localhost:9000</value> > </property> > <property> > <name>hadoop.tmp.dir</name> > <value>/usr/xxx/hdfs</value> > </property> > </configuration> > > *hdfs-site.xml* > <configuration> > <property> > <name>dfs.replication</name> > <value>1</value> > </property> > <property> > <name>dfs.name.dir</name> > <value>/usr/xxx/hdfs/name</value> > </property> > > <property> > <name>dfs.data.dir</name> > <value>/usr/xxx/hdfs/data</value> > </property> > > *mapred-site.xml* > <configuration> > <property> > <name>mapred.job.tracker</name> > <value>localhost:9001</value> > </property> > </configuration> > > *hbase-site.xml* > <configuration> > <property> > <name>hbase.rootdir</name> > <value>hdfs://localhost:9000/hbase</value> > </property> > <property> > <name>hbase.tmp.dir</name> > <value>/usr/xxx/hdfs/hbase</value> > </property> > </configuration> > > > What am I missing? Please help. Thanks. >

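If you do decide to leave autoFlush off for throughput, another knob worth
knowing is hbase.client.write.buffer, which controls the 2 MB client-side
buffer mentioned above; a smaller value makes the buffer fill and flush
sooner. The 1 MB value below is only illustrative, not a recommendation,
and you should still call flushCommits() (or close the table) before
shutting your client down:

<property>
<name>hbase.client.write.buffer</name>
<value>1048576</value> <!-- 1 MB for illustration; the default is 2097152 -->
</property>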