Looking at the changes since release 0.94.7, I found:

HBASE-8655 Backport to 94 - HBASE-8346 (Prefetching .META. rows in case only when useCache is set to true)
HBASE-8698 potential thread creation in MetaScanner.metaScan
If possible, can you upgrade your cluster?

Cheers

On Mon, Jan 27, 2014 at 8:02 PM, Ted Yu <[email protected]> wrote:

> Do you see the following (from
> HConnectionManager$HConnectionImplementation#locateRegionInMeta)?
>
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("locateRegionInMeta parentTable=" +
>         Bytes.toString(parentTable) + ", metaLocation=" +
>         ((metaLocation == null) ? "null" : "{" + metaLocation + "}") +
>         ", attempt=" + tries + " of " +
>         this.numRetries + " failed; retrying after sleep of " +
>
> On Mon, Jan 27, 2014 at 7:51 PM, Varun Sharma <[email protected]> wrote:
>
>> Actually, not just sometimes - we are always seeing a large # of .META.
>> reads every 5 minutes.
>>
>> On Mon, Jan 27, 2014 at 7:47 PM, Varun Sharma <[email protected]> wrote:
>>
>>> The default one with 0.94.7... - I don't see any of those logs. Also we
>>> turned off the balancer switch - but it looks like sometimes we still
>>> see a large number of requests to the .META. table every 5 minutes.
>>>
>>> Varun
>>>
>>> On Mon, Jan 27, 2014 at 7:37 PM, Ted Yu <[email protected]> wrote:
>>>
>>>> In HMaster#balance(), we have (same for 0.94 and 0.96):
>>>>
>>>>     for (RegionPlan plan : plans) {
>>>>       LOG.info("balance " + plan);
>>>>
>>>> Do you see such a log line in the master log?
>>>>
>>>> On Mon, Jan 27, 2014 at 7:26 PM, Varun Sharma <[email protected]> wrote:
>>>>
>>>>> We are seeing one other issue with high read latency (p99 etc.) on one
>>>>> of our read-heavy HBase clusters which is correlated with the balancer
>>>>> runs - every 5 minutes.
>>>>>
>>>>> If there is no balancing to do, does the balancer only scan the table
>>>>> every 5 minutes - does it do anything on top of that if the regions
>>>>> are balanced?
>>>>>
>>>>> Varun
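For anyone following along: a quick way to check Ted's suggestion is to grep the HMaster log for the "balance " INFO lines emitted by HMaster#balance(). A minimal sketch - the log path and the sample log lines below are made up for illustration; substitute your cluster's actual master log:

```shell
#!/bin/sh
# Hypothetical sample of a master log so the command can be tried standalone.
# On a real cluster, point grep at the HMaster log file instead.
printf '%s\n' \
  '2014-01-27 19:26:01,123 INFO org.apache.hadoop.hbase.master.HMaster: balance hri=t1,,1390879561123.abc., src=rs1,60020,1, dest=rs2,60020,1' \
  '2014-01-27 19:26:01,456 DEBUG ... locateRegionInMeta parentTable=.META., attempt=0' \
  > /tmp/sample-master.log

# Count balancer region plans logged; 0 matches means no regions were moved.
grep -c 'balance ' /tmp/sample-master.log
```

If this prints 0 every 5 minutes while .META. traffic still spikes, the reads are likely coming from something other than region moves (e.g. the periodic meta scan).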
