@Mike FWIW, besides checking the JIRAs and code, the talk Duo gave at HBaseCon 2016 may help you better understand the whole picture. Please see pages 14 to 20 of this presentation on SlideShare: <https://www.slideshare.net/HBaseCon/apache-hbase-improvements-and-practices-at-xiaomi>
Best Regards,
Yu

On 27 March 2018 at 14:26, 张铎(Duo Zhang) <palomino...@gmail.com> wrote:

> 2018-03-27 12:35 GMT+08:00 Mike Drob <md...@apache.org>:
>
> > Hi folks,
> >
> > I've been working on some of the docs relating to the upcoming 2.0 release
> > and have struggled to find content around AsyncWAL. My impression is that
> > this is a pretty important new feature, yet there's nothing in the ref
> > guide about it.
> >
> > Does it have a different name that I'm not familiar with?
> >
> > If it's not in the ref guide, should I file a JIRA issue for somebody to
> > generate that content? Specific things that I'd be looking for are:
> > - How to enable/disable
>
> See HBASE-15536. It is enabled just like the old way, by setting the
> hbase.wal.provider config.
>
> > - How does this impact data durability, MTTR, failover scenarios, etc.
>
> It does not impact these things.
>
> > - How does this impact replication
>
> Ditto.
>
> > - Which configuration knobs exist and when would I want to tune them
>
> Usually you do not need to tune anything...
> Before committing HBASE-15536 we did a lot of performance testing.
> There are two configs which may affect performance: one is
> hbase.wal.batch.size, and the other is
> hbase.wal.async.use-shared-event-loop. But it is hard to say how to
> tune them...
> Another thing is that, with AsyncFSWAL, we could set a lower timeout
> when writing the WAL, but for now it just shares the common dfs
> configuration. Maybe we should file an issue for that.
>
> > As a last resort, I can try to dig through RNs in existing issues, but
> > that's been pretty hit or miss (mostly miss) for me so far too.
>
> I think at the least we need to mention in our ref guide the reason why
> we introduced AsyncFSWAL and made it the default for 2.0.
>
> > Thanks,
> > Mike
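
For quick reference, here is a minimal hbase-site.xml sketch of the knobs discussed above. If I remember correctly, "asyncfs" selects the new AsyncFSWAL and "filesystem" falls back to the old FSHLog, per HBASE-15536; the batch size and event-loop values below are only illustrative placeholders, not recommendations, and as Duo says you usually do not need to tune them at all.

  <!-- hbase-site.xml: WAL provider selection and (optional) AsyncFSWAL tuning -->
  <configuration>
    <!-- Use the asynchronous WAL (the 2.0 default, per HBASE-15536);
         set the value to "filesystem" to fall back to the old FSHLog. -->
    <property>
      <name>hbase.wal.provider</name>
      <value>asyncfs</value>
    </property>

    <!-- Optional tuning knobs mentioned in the thread above.
         The values here are illustrative placeholders only;
         in most cases the defaults are fine. -->
    <property>
      <name>hbase.wal.batch.size</name>
      <value>65536</value>
    </property>
    <property>
      <name>hbase.wal.async.use-shared-event-loop</name>
      <value>true</value>
    </property>
  </configuration>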