Thanks, Liang! Found the logs. I had gone overboard with my greps and missed the "Too many hlogs" line for the regions I was trying to debug.
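For the record, a filter along these lines tallies the forced-flush messages per region (a sketch; `sample.log` and its contents stand in for the real RegionServer log — substitute your own path):

```shell
# Two sample lines in the format from this thread; sample.log is a
# stand-in for the real RegionServer log.
cat > sample.log <<'EOF'
2013-06-27 07:42:49,602 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s): 0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14
2013-06-27 08:10:29,996 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 1 regions(s): 0e940167482d42f1999b29a023c7c18a
EOF

# Strip everything up to the region list, split the list on commas,
# and count how often each region shows up in a forced flush.
grep 'Too many hlogs' sample.log \
  | sed 's/.*regions(s): //' \
  | tr -d ',' \
  | tr ' ' '\n' \
  | sort | uniq -c | sort -rn
```

Regions that appear at the top of the output are the ones whose memstores are holding the oldest unflushed WAL edits.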
A few sample log lines:

2013-06-27 07:42:49,602 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s): 0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14
2013-06-27 08:10:29,996 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 1 regions(s): 0e940167482d42f1999b29a023c7c18a
2013-06-27 08:17:44,719 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s): 0e940167482d42f1999b29a023c7c18a, e380fd8a7174d34feb903baa97564e08
2013-06-27 08:23:45,357 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 3 regions(s): 0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14, e380fd8a7174d34feb903baa97564e08

Any pointers on the best practice for avoiding this scenario?

Thanks,
Viral

On Thu, Jun 27, 2013 at 1:21 AM, 谢良 <xieli...@xiaomi.com> wrote:
> If you have reached the memstore global upper limit, you'll find "Blocking updates on"
> in your log files (see MemStoreFlusher.reclaimMemStoreMemory);
> if it's caused by too many log files, you'll find "Too many hlogs:
> logs=" (see HLog.cleanOldLogs).
> Hope it's helpful for you :)
>
> Best,
> Liang
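As an editorial note on the question above: the forced flush fires because a region's memstore is pinning edits in WALs older than the maxlogs window. Common mitigations are giving the RegionServer more WAL headroom or letting memstores flush sooner so old HLogs can be cleaned. A sketch of the relevant hbase-site.xml knobs (the property names are the standard ones; the values are purely illustrative, not recommendations — tune for your heap size and write volume):

```xml
<!-- Illustrative hbase-site.xml fragment; values are examples only. -->
<property>
  <name>hbase.regionserver.maxlogs</name>
  <!-- The log lines above show maxlogs=32; raising it delays the
       forced flush at the cost of longer WAL replay on recovery. -->
  <value>64</value>
</property>
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <!-- Bytes per memstore before a normal flush; a smaller value makes
       regions flush sooner, so old HLogs become cleanable earlier. -->
  <value>134217728</value>
</property>
```

Identifying why the repeat-offender regions (e.g. 0e940167... above) receive writes too slowly to flush on their own is usually more productive than raising the limit.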