Tianying: Have you seen the design doc attached to HBASE-7912 'HBase Backup/Restore Based on HBase Snapshot'?
Cheers

On Tue, Mar 25, 2014 at 2:38 PM, Tianying Chang <[email protected]> wrote:

> Hi,
>
> I need a new snapshot policy. Basically, I cannot disable the table, but I
> also don't need the snapshot to be that "consistent" one where all RSs
> coordinate to flush the regions before taking the snapshot, since that
> slows down the production cluster when the flush takes too long. It is OK
> for me if the snapshot misses the data in the memstore, because I will use
> WALPlayer to fill the data gap that is not in the snapshot but has been
> persisted (in the WAL). So I should have no data loss.
>
> As a quick hack to test this in my HBase backup workflow, I just add a
> config key and skip the flushcache() call in
> regionserver/snapshot/FlushSnapshotSubprocedure.java, something like
> below. It seems to work fine for me: all data are recovered in a new
> cluster after running WALPlayer.
>
> Does anyone see any problem, like data corruption, etc.?
>
> LOG.debug("Flush Snapshotting region " + region.toString() + " started...");
> if (noFlushNeeded) {
>   LOG.debug("No flush before taking snapshot");
> } else {
>   region.flushcache();
> }
>
> If there is no data corruption issue with this policy, I can add a
> parameter to the hbase shell, so that people can dynamically decide when
> to use a no-flush snapshot.
>
> Thanks
> Tian-Ying
>
> On Tue, Mar 25, 2014 at 2:08 PM, Tianying Chang <[email protected]> wrote:
>
> > Hi,
> >
> > I need a new snapshot policy which sits in between the disabled and
> > flushed versions. So, basically: I cannot disable the table, but I also
> > don't need the snapshot to be that "consistent" one where all RSs
> > coordinate to flush the regions before taking the snapshot.
> >
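[Editor's note: below is a minimal, self-contained sketch of the conditional-flush idea described in the thread. It is not the actual HBase patch; the config key name "hbase.snapshot.region.skipflush" and the helper class are hypothetical, and it assumes the pre-1.0 HRegion.flushcache() API that FlushSnapshotSubprocedure called at the time.]

    // Sketch only: illustrates reading a "skip flush" switch from the cluster
    // Configuration and flushing the memstore only when a consistent
    // (flushed) snapshot was requested. Not the real FlushSnapshotSubprocedure code.
    import java.io.IOException;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.regionserver.HRegion;

    public class SkipFlushSnapshotSketch {
      private static final Log LOG = LogFactory.getLog(SkipFlushSnapshotSketch.class);

      /**
       * Flush the region's memstore before snapshotting unless the operator
       * asked for a no-flush snapshot. Edits still in the memstore are then
       * expected to be recovered later by replaying the WALs.
       */
      static void maybeFlush(HRegion region, Configuration conf) throws IOException {
        LOG.debug("Flush snapshotting region " + region.toString() + " started...");
        // Hypothetical config key; the thread only mentions "a config key".
        boolean skipFlush = conf.getBoolean("hbase.snapshot.region.skipflush", false);
        if (skipFlush) {
          LOG.debug("Skipping memstore flush before taking snapshot");
        } else {
          region.flushcache(); // pre-1.0 API name; later releases renamed it to flush()
        }
      }
    }

[The WAL-replay step the thread relies on would then be run separately, for example with HBase's bundled WALPlayer MapReduce job (org.apache.hadoop.hbase.mapreduce.WALPlayer) pointed at the archived WAL directory, so that edits still in the memstore at snapshot time are applied to the restored table.]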
