[
https://issues.apache.org/jira/browse/HBASE-28221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816282#comment-17816282
]
David Manning commented on HBASE-28221:
---------------------------------------
Alerting on {{flushQueueLength}} is not really the same. It will also capture every
delayed flush from {{PeriodicMemstoreFlusher}}, so it covers both cases where a
delayed flush is enqueued: a very busy region beyond the {{blockingStoreFiles}}
limit, or a mostly idle region flushing because its memstore edits are an hour old.
Alerting on {{blockedRequestsCount}} will give you a stronger signal, because it
tracks the case where writers hit {{RegionTooBusyException}} due to the memstore
being full while waiting on a delayed flush.
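For reference, a minimal sketch of how such an alert could be wired up over JMX,
assuming the standard Hadoop metrics2 bean name
({{Hadoop:service=HBase,name=RegionServer,sub=Server}}) and the attribute names
{{blockedRequestCount}} / {{flushQueueLength}}; the exact names, host, and JMX port
are placeholders and can vary by version and site configuration:
{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BlockedRequestsAlertCheck {
  public static void main(String[] args) throws Exception {
    // JMX endpoint of the regionserver; host and port here are placeholders.
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://regionserver-host:10102/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection mbsc = connector.getMBeanServerConnection();
      ObjectName rs = new ObjectName("Hadoop:service=HBase,name=RegionServer,sub=Server");
      long blocked = ((Number) mbsc.getAttribute(rs, "blockedRequestCount")).longValue();
      long flushQueue = ((Number) mbsc.getAttribute(rs, "flushQueueLength")).longValue();
      // A rising blockedRequestCount means clients are already seeing
      // RegionTooBusyException; flushQueueLength alone also counts periodic flushes.
      if (blocked > 0) {
        System.out.println("ALERT: blockedRequestCount=" + blocked
            + ", flushQueueLength=" + flushQueue);
      }
    }
  }
}
{code}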
But if you want to alert on a delayed flush without a full memstore, I don't
know that it could be done today without adding a new metric. If you have
site-wide settings for {{blockingStoreFiles}}, you could alert when
{{maxStoreFileCount}} is above, or near, {{blockingStoreFiles}}. But if it
varies by table, you would have to alert per-table.
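As a rough illustration of the site-wide case, the check could look like the snippet
below; it assumes a single {{hbase.hstore.blockingStoreFiles}} value read from the
cluster configuration and that the {{maxStoreFileCount}} value has already been
scraped from regionserver metrics (the "near" margin of 2 files is made up for the
example):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class StoreFileThresholdCheck {
  /** True when the busiest store is at or near the blocking limit. */
  static boolean nearBlockingLimit(long maxStoreFileCount, Configuration conf) {
    int blockingStoreFiles = conf.getInt("hbase.hstore.blockingStoreFiles", 16);
    // "near" is an arbitrary margin of 2 files here; tune per site
    return maxStoreFileCount >= blockingStoreFiles - 2;
  }

  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    long maxStoreFileCount = Long.parseLong(args[0]); // scraped metric value
    System.out.println("near blocking limit: " + nearBlockingLimit(maxStoreFileCount, conf));
  }
}
{code}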
So there could still be some value in adding this type of metric (but consider
first whether alerting on client impact, i.e. {{RegionTooBusyException}} and
{{blockedRequestsCount}}, would be sufficient). [~rkrahul324] [~vjasani]
> Introduce regionserver metric for delayed flushes
> -------------------------------------------------
>
> Key: HBASE-28221
> URL: https://issues.apache.org/jira/browse/HBASE-28221
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 2.4.17, 2.5.6
> Reporter: Viraj Jasani
> Assignee: Rahul Kumar
> Priority: Major
> Fix For: 2.4.18, 2.7.0, 2.5.8, 3.0.0-beta-2, 2.6.1
>
>
> If compaction is disabled temporarily to allow the HDFS load to stabilize, we
> can forget to re-enable it. This can result in flushes getting delayed for the
> "hbase.hstore.blockingWaitTime" period (90s by default). While flushes do happen
> eventually after waiting for the max blocking time, it is important to realize
> that no cluster can function well with compaction disabled for a significant
> amount of time.
>
> We would also block any write requests until the region is flushed (90+ sec by
> default):
> {code:java}
> 2023-11-27 20:40:52,124 WARN [,queue=18,port=60020] regionserver.HRegion - Region is too busy due to exceeding memstore size limit.
> org.apache.hadoop.hbase.RegionTooBusyException: Above memstore limit, regionName=table1,1699923733811.4fd5e52e2133df1e347f32c646f23ab4., server=server-1,60020,1699421714454, memstoreSize=1073820928, blockingMemStoreSize=1073741824
>         at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4200)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3264)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3215)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:967)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:895)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2524)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36812)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2432)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
> {code}
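> The block above comes from the memstore size check on the write path. A simplified
> paraphrase of that behavior, for illustration only (this is not the actual
> {{HRegion}} code; the class and parameter names below are made up):
> {code:java}
> // While the memstore is above its blocking size, the write is rejected and a
> // flush is requested, which may itself be delayed when the store has too many files.
> import org.apache.hadoop.hbase.RegionTooBusyException;
>
> class MemstoreLimitSketch {
>   static void checkResources(long memstoreSize, long blockingMemStoreSize,
>       String regionName) throws RegionTooBusyException {
>     if (memstoreSize > blockingMemStoreSize) {
>       // the real code also asks the flusher to flush this region before throwing
>       throw new RegionTooBusyException("Above memstore limit, regionName=" + regionName
>           + ", memstoreSize=" + memstoreSize
>           + ", blockingMemStoreSize=" + blockingMemStoreSize);
>     }
>   }
> }
> {code}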
>
> Delayed flush logs:
> {code:java}
> LOG.warn("{} has too many store files({}); delaying flush up to {} ms",
>     region.getRegionInfo().getEncodedName(), getStoreFileCount(region),
>     this.blockingWaitTime);
> {code}
> Suggestion: Introduce a regionserver metric (MetricsRegionServerSource) for the
> number of flushes getting delayed due to too many store files.
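> A hypothetical shape for that metric would be a monotonically increasing counter
> on the regionserver source, bumped at the point where the "delaying flush" warning
> above is logged. The names below ({{delayedFlushCount}},
> {{incrementDelayedFlushCount}}) are made up for illustration and may differ from
> the committed change:
> {code:java}
> // Illustrative sketch only, not the committed implementation.
> public interface MetricsRegionServerSourceSketch {
>   // hypothetical metric name and description
>   String DELAYED_FLUSH = "delayedFlushCount";
>   String DELAYED_FLUSH_DESC =
>       "Number of flushes delayed because a store had too many files";
>
>   // would be called from the flusher right where the warning is logged
>   void incrementDelayedFlushCount();
> }
> {code}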
--
This message was sent by Atlassian Jira
(v8.20.10#820010)