[
https://issues.apache.org/jira/browse/HBASE-27947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17736456#comment-17736456
]
Bryan Beaudreault commented on HBASE-27947:
-------------------------------------------
Thanks for all of those useful insights! I will try to dig into those today. To
clarify a few things:
# Yes, we are mostly seeing the issue with reads. I haven’t tested writes in
isolation, but will try that later.
# We have tried both tcnative (OpenSSL) and the JDK default provider (jdk17);
both exhibited the issue. I think tcnative helped, but it did not eliminate the issue.
# For the last day I've been working on reproducing this with a synthetic load
test. I've been able to, even with the max result size set to 1mb, so this
confirms what you said. However, my synthetic test only uses fewer than 20
connections per server, so there must be something beyond connection count
that is using bigger buffers (a rough sketch of the test follows this list).
# The OOM is always when netty is trying to allocate a new 4mb buffer in the
PoolArena. Not sure if that's obvious or tells us something (see the note
after the sketch below on where the 4mb figure may come from). I will try to
attach a stacktrace in a couple of hours.
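For reference, the synthetic test looks roughly like the following. This is a
simplified sketch, not the exact code; the table name, thread count, and loop
structure are all illustrative:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanLoadTest {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    ExecutorService pool = Executors.newFixedThreadPool(16);
    for (int i = 0; i < 16; i++) {
      pool.submit(() -> {
        // One Connection per thread, so well under 20 connections per server.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("loadtest"))) {
          Scan scan = new Scan();
          scan.setMaxResultSize(1024 * 1024); // 1mb, as in the test above
          while (!Thread.currentThread().isInterrupted()) {
            try (ResultScanner scanner = table.getScanner(scan)) {
              for (Result r : scanner) {
                // Drain the results; we only care about the RPC traffic.
              }
            }
          }
        } catch (Exception e) {
          e.printStackTrace();
        }
        return null;
      });
    }
  }
}
{code}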
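On the 4mb figure: if our relocated netty is a recent 4.1.x, the pooled
allocator's chunk size defaults to pageSize << maxOrder = 8192 << 9 = 4mb (the
maxOrder default dropped from 11 to 9 in netty 4.1.75), so every new PoolArena
chunk is exactly a 4mb direct allocation. A quick way to check against the
thirdparty classes (a sketch, assuming the allocator defaults are untouched):
{code:java}
import org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator;
import org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocatorMetric;

public class ChunkSizeCheck {
  public static void main(String[] args) {
    PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
    // chunkSize = pageSize << maxOrder; 8192 << 9 = 4194304 with recent defaults
    System.out.println("chunkSize = " + metric.chunkSize());
    System.out.println("usedDirectMemory = " + metric.usedDirectMemory());
  }
}
{code}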
> RegionServer OOM under load when TLS is enabled
> -----------------------------------------------
>
> Key: HBASE-27947
> URL: https://issues.apache.org/jira/browse/HBASE-27947
> Project: HBase
> Issue Type: Bug
> Components: rpc
> Affects Versions: 2.6.0
> Reporter: Bryan Beaudreault
> Priority: Critical
>
> We are rolling out the server-side TLS settings to all of our QA clusters.
> This has mostly gone fine, except on 1 cluster. Most clusters, including this
> one, have a sampled {{nettyDirectMemory}} usage of about 30-100mb. This
> cluster tends to get bursts of traffic, in which case it would typically jump
> to 400-500mb. Again, this is sampled, so it could have been higher than that.
> When we enabled SSL on this cluster, we started seeing bursts up to at least
> 4gb. This exceeded our {{-XX:MaxDirectMemorySize}}, which caused OOMs and
> general chaos on the cluster.
>
> We've gotten it under control a little bit by setting
> {{-Dorg.apache.hbase.thirdparty.io.netty.maxDirectMemory}} and
> {{-Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible}}.
> We've set netty's maxDirectMemory to be approximately equal to
> ({{-XX:MaxDirectMemorySize - BucketCacheSize - ReservoirSize}}). Now we are
> seeing netty's own OutOfDirectMemoryError, which is still causing pain for
> clients but at least insulates the other components of the regionserver.
>
> We're still digging into exactly why this is happening. The cluster clearly
> has a bad access pattern, but it doesn't seem like SSL should increase the
> memory footprint by the 5-10x we're seeing.
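To make the sizing formula in the description concrete, here is a hypothetical
example (the sizes are made-up placeholders, not recommendations; note that
netty's maxDirectMemory property takes a value in bytes):
{code}
# Hypothetical numbers only: 8g total direct - 4g bucket cache - 1g reservoir
# leaves ~3g (3221225472 bytes) for netty.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:MaxDirectMemorySize=8g \
  -Dorg.apache.hbase.thirdparty.io.netty.maxDirectMemory=3221225472 \
  -Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible=true"
{code}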