I tend to think that by default, all edits should be synced: once an HTable.put call returns, the client should be able to count on that data not being lost. A client that doesn't need every individual write flushed can then disable auto-flush, adjust its write buffer, and commit explicitly. I am definitely curious to hear the thoughts of the developers and other users, however. Just my 2 cents.
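For what it's worth, here is a rough sketch of the two write paths I mean, using the current client API (HTable.setAutoFlush, setWriteBufferSize, flushCommits). The table, family and qualifier names, the buffer size, and the class name are made up for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteBufferExample {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(new HBaseConfiguration(), "mytable");

        // Durable-by-default path: under the proposed default, the edit
        // is synced to the log before put() returns.
        Put p = new Put(Bytes.toBytes("row1"));
        p.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
        table.put(p);

        // Throughput path: the client opts out of per-edit flushing.
        table.setAutoFlush(false);                 // buffer puts on the client side
        table.setWriteBufferSize(2 * 1024 * 1024); // e.g. a 2MB buffer, tune as needed
        for (int i = 0; i < 10000; i++) {
          Put batched = new Put(Bytes.toBytes("row" + i));
          batched.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("v" + i));
          table.put(batched);                      // accumulates in the write buffer
        }
        table.flushCommits();                      // explicit commit of the buffered edits
      }
    }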
Dave

On Sat, Nov 14, 2009 at 4:37 PM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:

> Hi dev!
>
> Hadoop 0.21 now has a reliable append and flush feature and this gives
> us the opportunity to review some assumptions. The current situation:
>
> - Every edit going to a catalog table is flushed so there's no data loss.
> - The user tables edits are flushed every
>   hbase.regionserver.flushlogentries which by default is 100.
>
> Should we now set this value to 1 in order to have more durable but
> slower inserts by default? Please speak up.
>
> Thx,
>
> J-D
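(For reference, the change J-D describes would amount to something like this in hbase-site.xml on the region servers; the description text is my paraphrase, not the shipped documentation:

    <property>
      <name>hbase.regionserver.flushlogentries</name>
      <value>1</value>
      <description>Sync the write-ahead log after every edit instead of
      the current default of every 100 edits.</description>
    </property>
)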