[
https://issues.apache.org/jira/browse/HBASE-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Andrew Purtell updated HBASE-18116:
-----------------------------------
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 1.5.0
2.1.0
3.0.0
Release Note: Before this change we would incorrectly include the size of
enqueued store files for bulk replication in the calculation for determining
whether or not to rate limit the transfer of WAL edits. Because bulk
replication uses a separate and asynchronous mechanism for file transfer, this
could incorrectly limit the batch sizes for WAL replication while bulk
replication was in progress, with negative impact on latency and throughput.
Status: Resolved (was: Patch Available)
> Replication source in-memory accounting should not include bulk transfer
> hfiles
> -------------------------------------------------------------------------------
>
> Key: HBASE-18116
> URL: https://issues.apache.org/jira/browse/HBASE-18116
> Project: HBase
> Issue Type: Bug
> Components: Replication
> Reporter: Andrew Purtell
> Assignee: Xu Cang
> Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
> Attachments: HBASE-18116.master.001.patch,
> HBASE-18116.master.002.patch
>
>
> In ReplicationSourceWALReaderThread we maintain a global quota on enqueued
> replication work for preventing OOM by queuing up too many edits into queues
> on heap. When calculating the size of a given replication queue entry, if it
> has associated hfiles (is a bulk load to be replicated as a batch of hfiles),
> we get the file sizes and include the sum. We then apply that result to the
> quota. This isn't quite right. Those hfiles will be pulled by the sink as a
> file copy, not pushed by the source. The cells in those files are not queued
> in memory at the source and therefore shouldn't be counted against the quota.
> Related, the sum of the hfile sizes is also included when checking if queued
> work exceeds the configured replication queue capacity, which is by default
> 64 MB. HFiles are commonly much larger than this.
> So what happens is that when we encounter a bulk load replication entry,
> typically both the quota and capacity limits are exceeded, we break out of the
> loops, and send right away. What is transferred on the wire via HBase RPC,
> though, has only a partial relationship to the calculation.
> Depending on how you look at it, it makes sense to factor hfile file sizes
> into the replication queue capacity limits. The sink will be occupied
> transferring those files at the HDFS level. Anyway, this is how we have been
> doing it and it is too late to change now. I do not, however, think it is
> correct to apply hfile file sizes against a quota for in-memory state on the
> source. The source doesn't queue or even transfer those bytes.
> Something I noticed while working on HBASE-18027.
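The distinction the description draws between the two accounting paths can be sketched as follows. This is a simplified illustration, not the actual HBase code: the class and method names (ReplicationSizingSketch, heapQuotaSize, batchCapacitySize) are hypothetical, and the real ReplicationSourceWALReaderThread logic is more involved. The point is only that bulk-load hfile bytes belong in the batch capacity calculation (the sink copies those files) but not in the source's in-memory quota (those bytes are never queued on the source heap).

```java
import java.util.List;

public class ReplicationSizingSketch {

    // In-memory quota accounting on the source: only the heap size of the
    // queued WAL edits counts. Bulk-load hfiles are pulled by the sink as an
    // HDFS file copy, so their bytes never sit in source memory.
    static long heapQuotaSize(long editsHeapBytes, List<Long> hfileSizes) {
        return editsHeapBytes; // hfile sizes deliberately excluded
    }

    // Batch capacity accounting: the sink will be occupied transferring the
    // hfiles, so their sizes are still factored into the capacity check
    // (default capacity is 64 MB, which hfiles commonly exceed).
    static long batchCapacitySize(long editsHeapBytes, List<Long> hfileSizes) {
        long total = editsHeapBytes;
        for (long size : hfileSizes) {
            total += size;
        }
        return total;
    }

    public static void main(String[] args) {
        long edits = 1024L;                                  // 1 KB of WAL edits
        List<Long> hfiles = List.of(128L * 1024 * 1024);     // one 128 MB hfile

        System.out.println(heapQuotaSize(edits, hfiles));    // 1024
        System.out.println(batchCapacitySize(edits, hfiles)); // 134218752
    }
}
```

Under this sketch, a single bulk-load entry no longer exhausts the in-memory quota, while the capacity check still triggers an immediate send, matching the behavior the fix describes.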
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)