[ https://issues.apache.org/jira/browse/HBASE-24779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170496#comment-17170496 ]

Josh Elser commented on HBASE-24779:
------------------------------------

{noformat}
2020-08-03 22:01:53,065 INFO [mizar:16020Replication Statistics #0] 
regionserver.Replication(271): Global stats: WAL Edits Buffer Used=684288B, 
Limit=268435456B
Normal source for cluster 1: Total replicated edits: 46200, current progress: 
walGroup [mizar.local%2C16020%2C1596506315322]: currently replicating from: 
hdfs://mizar.local:8020/hbase-3/WALs/mizar.local,16020,1596506315322/mizar.local%2C16020%2C1596506315322.1596506319356
 at position: 4292571
{noformat}
{noformat}
{
  "name": "Hadoop:service=HBase,name=RegionServer,sub=Replication",
  "modelerType": "RegionServer,sub=Replication",
  "tag.Context": "regionserver",
  "tag.Hostname": "mizar.local",
...
  "source.walReaderEditsBufferUsage": 684288,
...
}
{noformat}
Some example output from the updated code, captured with a sleep-and-do-nothing 
replication endpoint. I'll put up a PR shortly and include this on GH too.
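
To watch that new metric without tailing logs, a small poller like the one below can hit 
the RegionServer's /jmx servlet. This is only a sketch: it assumes the default RegionServer 
info port (16030) and the stock JMXJsonServlet `qry` filtering, and it uses a bare regex 
instead of a JSON library so it stays dependency-free.
{noformat}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Polls a RegionServer's /jmx servlet and prints the WAL reader edits buffer
 * usage. Assumes the default info port (16030); adjust host/port as needed.
 */
public class WalReaderBufferUsageCheck {

  private static final Pattern USAGE = Pattern.compile(
      "\"source\\.walReaderEditsBufferUsage\"\\s*:\\s*(\\d+)");

  public static void main(String[] args) throws Exception {
    String host = args.length > 0 ? args[0] : "mizar.local";
    int infoPort = args.length > 1 ? Integer.parseInt(args[1]) : 16030;

    // Narrow the /jmx output to the replication bean shown above.
    String url = "http://" + host + ":" + infoPort
        + "/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Replication";

    HttpClient client = HttpClient.newHttpClient();
    HttpResponse<String> resp = client.send(
        HttpRequest.newBuilder(URI.create(url)).GET().build(),
        HttpResponse.BodyHandlers.ofString());

    Matcher m = USAGE.matcher(resp.body());
    if (m.find()) {
      System.out.println("source.walReaderEditsBufferUsage = " + m.group(1) + "B");
    } else {
      System.out.println("Metric not found; is the patch applied?");
    }
  }
}
{noformat}
If the reported value stays pinned near the configured limit (268435456B in the log above) 
while the source's "current progress" position never advances, that is the stuck-in-checkQuota 
state described in the issue below.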

> Improve insight into replication WAL readers hung on checkQuota
> ---------------------------------------------------------------
>
>                 Key: HBASE-24779
>                 URL: https://issues.apache.org/jira/browse/HBASE-24779
>             Project: HBase
>          Issue Type: Task
>          Components: Replication
>            Reporter: Josh Elser
>            Assignee: Josh Elser
>            Priority: Minor
>
> Helped a customer this past weekend who, with a large number of 
> RegionServers, had some RegionServers which replicated data to a peer without 
> issue while other RegionServers did not.
> The number of queued logs varied over the past 24hrs in the same manner: 
> sometimes spiking into the 100's of queued logs, while at other times only 
> 1's-10's of logs were queued.
> We were able to validate that there were "good" and "bad" RegionServers by 
> creating a test table, assigning it to a regionserver, enabling replication 
> on that table, and validating if the local puts were replicated to a peer. On 
> a good RS, data was replicated immediately. On a bad RS, data was never 
> replicated (at least not within the 10's of minutes that we waited).
> On the "bad RS", we were able to observe that the \{{wal-reader}} thread(s) 
> on that RS were spending time in a Thread.sleep() in a different location 
> than the other. Specifically it was sitting in the 
> {{ReplicationSourceWALReader#checkQuota()}}'s sleep call, _not_ the 
> {{handleEmptyWALBatch()}} method on the same class.
> My only assumption is that, somehow, these RegionServers got into a situation 
> where they "allocated" memory from the quota but never freed it. Then, 
> because the WAL reader thinks it has no free memory, it blocks indefinitely 
> and there are no pending edits to ship and (thus) free that memory. A cursory 
> glance at the code gives me a _lot_ of anxiety around places where we don't 
> properly clean it up (e.g. batches that fail to ship, dropping a peer); a 
> simplified sketch of this acquire/release accounting is included below. As a 
> first stab, let me add some more debugging so we can actually track this 
> state properly for the operators and their sanity.
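
For anyone reading along, the buffer accounting described above boils down to a reader that 
reserves bytes against a shared quota before buffering edits, and a shipper that is supposed 
to give those bytes back. The sketch below is a deliberately simplified, hypothetical 
illustration of that acquire/release pattern (the class and method names are made up, not the 
actual ReplicationSourceWALReader/shipper code); the failure mode above corresponds to the 
release in the finally block never happening.
{noformat}
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical illustration of a global "WAL edits buffer" quota shared by all
 * replication sources on a RegionServer. A reader reserves bytes before
 * buffering edits and sleeps while the quota is exhausted; a shipper must
 * release those bytes once the batch is shipped (or on any failure path),
 * otherwise every reader eventually blocks forever in the quota check.
 */
public class BufferQuotaSketch {

  private static final long QUOTA_BYTES = 256L * 1024 * 1024; // e.g. a 256MB limit
  private final AtomicLong bufferUsed = new AtomicLong();     // global across sources

  /** Reader side: block (sleep and retry) until the batch fits under the quota. */
  public void acquire(long batchSizeBytes) throws InterruptedException {
    while (bufferUsed.addAndGet(batchSizeBytes) > QUOTA_BYTES) {
      // Over quota: give the bytes back and wait for a shipper to free space.
      bufferUsed.addAndGet(-batchSizeBytes);
      Thread.sleep(1000); // analogous to the sleep observed in checkQuota()
    }
  }

  /** Shipper side: must always run, even if the batch never ships. */
  public void release(long batchSizeBytes) {
    bufferUsed.addAndGet(-batchSizeBytes);
  }

  /** Example of the pattern a shipper should follow. */
  public void shipBatch(long batchSizeBytes, Runnable doShip) {
    try {
      doShip.run();
    } finally {
      // If this release is skipped on failure or peer removal, the usage
      // counter "leaks" and readers stall in acquire() indefinitely.
      release(batchSizeBytes);
    }
  }
}
{noformat}
The extra debugging proposed here (the "WAL Edits Buffer Used" log line and the 
source.walReaderEditsBufferUsage metric shown in the comment above) exposes the value of that 
shared counter so operators can tell when it has leaked up to the limit.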



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
