[ https://issues.apache.org/jira/browse/HBASE-24779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230115#comment-17230115 ]
Hudson commented on HBASE-24779:
--------------------------------
Results for branch branch-2.3
[build #102 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/102/]:
(/) *{color:green}+1 overall{color}*
----
details (if available):
(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/102/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/102/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/102/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/102/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color}
-- See build output for details.
(/) {color:green}+1 client integration test{color}
> Improve insight into replication WAL readers hung on checkQuota
> ---------------------------------------------------------------
>
> Key: HBASE-24779
> URL: https://issues.apache.org/jira/browse/HBASE-24779
> Project: HBase
> Issue Type: Task
> Components: Replication
> Reporter: Josh Elser
> Assignee: Josh Elser
> Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> Helped a customer this past weekend who, on a cluster with a large number of
> RegionServers, had some RegionServers that replicated data to a peer without
> issue while other RegionServers did not.
> The number of queued logs varied over the past 24hrs in the same manner: at
> times it spiked into the hundreds of logs, while at other times only one to a
> few tens of logs were queued.
> We were able to validate that there were "good" and "bad" RegionServers by
> creating a test table, assigning it to a regionserver, enabling replication
> on that table, and validating if the local puts were replicated to a peer. On
> a good RS, data was replicated immediately. On a bad RS, data was never
> replicated (at least, on the order of 10's of minutes which we waited).
> On the "bad RS", we were able to observe that the \{{wal-reader}} thread(s)
> on that RS were spending time in a Thread.sleep() in a different location
> than the other. Specifically it was sitting in the
> {{ReplicationSourceWALReader#checkQuota()}}'s sleep call, _not_ the
> {{handleEmptyWALBatch()}} method on the same class.
> My best guess is that, somehow, these RegionServers got into a situation
> where they "allocated" memory from the quota but never freed it. Then,
> because the WAL reader believes it has no free memory, it blocks
> indefinitely, and there are no pending edits to ship that would (eventually)
> free that memory. A cursory glance at the code gives me a _lot_ of anxiety
> about the places where we don't properly clean it up (e.g. batches that fail
> to ship, dropping a peer); a simplified sketch of the pattern follows below.
> As a first stab, let me add some more debugging so we can actually track this
> state properly, for the operators and their sanity.
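
For reference, here is a rough sketch of the per-RegionServer probe described above, written against the standard HBase 2.x client API. The table name, column family, row key, and values are made up for illustration; the actual validation could just as easily be done from the HBase shell.

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class ReplicationProbe {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Hypothetical probe table; one family with REPLICATION_SCOPE_GLOBAL so its
      // edits are eligible for replication to the configured peer.
      TableName probe = TableName.valueOf("replication_probe");
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(probe)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setScope(HConstants.REPLICATION_SCOPE_GLOBAL)
              .build())
          .build();
      admin.createTable(desc);
      // At this point the single region would be moved onto the RegionServer under
      // test (e.g. via Admin#move or the shell's move command) before writing.
      try (Table table = conn.getTable(probe)) {
        table.put(new Put(Bytes.toBytes("probe-row"))
            .addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("probe-value")));
      }
      // Then scan the same table on the peer cluster: on a "good" RS the row shows
      // up almost immediately; on a "bad" RS it never arrives.
    }
  }
}
{code}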
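
And a minimal, hypothetical sketch of the buffer-quota pattern the description is worried about. The class, field, and method names below are illustrative simplifications, not the actual {{ReplicationSourceWALReader}} code; the point is that a batch charged against the counter but never released leaves the reader parked in its sleep loop forever, and that the debugging proposed here amounts to making that counter and its movements visible to operators.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative model only -- not the real HBase implementation. */
public class BufferQuotaSketch {
  private final AtomicLong bufferUsed = new AtomicLong(); // bytes currently charged
  private final long quotaBytes;

  public BufferQuotaSketch(long quotaBytes) {
    this.quotaBytes = quotaBytes;
  }

  /** Reader side: sleep-poll while the shared buffer is at or over quota. */
  void checkQuota() throws InterruptedException {
    while (bufferUsed.get() >= quotaBytes) {
      // If an earlier batch was charged but never released (failed ship,
      // dropped peer, ...), this loop never exits and the wal-reader thread
      // appears permanently stuck in Thread.sleep().
      Thread.sleep(1000);
    }
  }

  /** Charge a batch against the quota before handing it to the shipper. */
  void acquire(long batchSizeBytes) {
    bufferUsed.addAndGet(batchSizeBytes);
  }

  /** Must run on every exit path: shipped, failed to ship, peer removed. */
  void release(long batchSizeBytes) {
    long remaining = bufferUsed.addAndGet(-batchSizeBytes);
    // The extra debugging asked for in this issue is essentially logging/metrics
    // around values like 'remaining', so a leak becomes visible instead of guessed at.
  }
}
{code}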