[
https://issues.apache.org/jira/browse/HBASE-22634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873470#comment-16873470
]
Sebastien Barnoud commented on HBASE-22634:
-------------------------------------------
First of all, apologies for not verifying the build before sending the patch: I'm
running on HDP and only have Jenkins for the HDP code base, not the Apache one. I
just wrote it in my IDE on Windows, where I can't test. However, the patch is in
production on our HDP cluster.
The default pool size is set when the default pool is created, in
org/apache/hadoop/hbase/client/HTable.java:
{code:java}
public static ThreadPoolExecutor getDefaultExecutor(Configuration conf) {
  int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
  if (maxThreads == 0) {
    maxThreads = 1; // is there a better default?
  }{code}
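For context, the practical effect of that default is a single-threaded pool, so all work submitted by the client serializes. A plain-Java sketch (not the actual HTable pool construction, just a pool sized the same way):

```java
import java.util.concurrent.*;

public class SingleThreadDefault {
    public static void main(String[] args) throws Exception {
        // maxThreads resolved to 1, as in getDefaultExecutor above.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        long start = System.nanoTime();
        Future<?> a = pool.submit(() -> sleep(100));
        Future<?> b = pool.submit(() -> sleep(100));
        a.get();
        b.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // With a single worker, the two 100 ms tasks run back to back,
        // so total time is at least 200 ms.
        System.out.println(elapsedMs >= 200);
        pool.shutdown();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}
```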
So, yes, IMO there is a better default, which is hbase.client.max.total.tasks,
as set in BufferedMutatorThreadPoolExecutor:
{code:java}
public static BufferedMutatorThreadPoolExecutor getPoolExecutor(Configuration conf) {
  int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
  if (maxThreads == 0) {
    maxThreads = conf.getInt("hbase.client.max.total.tasks", Integer.MAX_VALUE);
  }
  if (maxThreads == 0) {
    throw new IllegalArgumentException("hbase.client.max.total.tasks must be > 0");
  }{code}
In any case, if the application keeps the default pool, it MUST set
hbase.htable.threads.max to scale.
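The fallback chain above can be sketched with plain java.util.Properties (an assumption for self-containment; the real code uses Hadoop's Configuration, but the resolution logic is the same):

```java
import java.util.Properties;

public class PoolSizeDefault {
    // Mirrors the proposed fallback: hbase.htable.threads.max first,
    // then hbase.client.max.total.tasks, rejecting an explicit 0.
    static int resolveMaxThreads(Properties conf) {
        int maxThreads = Integer.parseInt(
            conf.getProperty("hbase.htable.threads.max",
                String.valueOf(Integer.MAX_VALUE)));
        if (maxThreads == 0) {
            maxThreads = Integer.parseInt(
                conf.getProperty("hbase.client.max.total.tasks",
                    String.valueOf(Integer.MAX_VALUE)));
        }
        if (maxThreads == 0) {
            throw new IllegalArgumentException(
                "hbase.client.max.total.tasks must be > 0");
        }
        return maxThreads;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("hbase.htable.threads.max", "0");
        conf.setProperty("hbase.client.max.total.tasks", "100");
        // The htable property is 0, so the total-tasks value wins.
        System.out.println(resolveMaxThreads(conf)); // prints 100
    }
}
```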
Yes, it is intentional that I removed cleanupIdleConnectionTask: idle
connections are already cleaned up by Netty. When both are present, I get:
{code:java}
WARN [Default-IPC-NioEventLoopGroup-1-15] DefaultPromise:151 - An exception was thrown by org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
org.apache.hbase.thirdparty.io.netty.util.IllegalReferenceCountException: refCnt: 0, increment: 1
	at org.apache.hbase.thirdparty.io.netty.buffer.AbstractReferenceCountedByteBuf.retain0(AbstractReferenceCountedByteBuf.java:67)
	at org.apache.hbase.thirdparty.io.netty.buffer.AbstractReferenceCountedByteBuf.retain(AbstractReferenceCountedByteBuf.java:54)
....
WARN [Executor task launch worker for task 9] ScannerCallable:321 - Ignore, probably already closed. Current scan: {"loadColumnFamiliesOnDemand":null,"startRow":" ....
{code}
The Netty-side idle handling is set up in
org/apache/hadoop/hbase/ipc/NettyRpcConnection.java:
{code:java}
private void established(Channel ch) throws IOException {
  ChannelPipeline p = ch.pipeline();
  String addBeforeHandler = p.context(BufferCallBeforeInitHandler.class).name();
  p.addBefore(addBeforeHandler, null,
    new IdleStateHandler(0, rpcClient.minIdleTimeBeforeClose, 0, TimeUnit.MILLISECONDS));
{code}
Then I use FutureTask.class.getDeclaredField("callable") to be able to compute
some metrics. If you don't want that part, no problem: the performance fix is
not there. It is just a utility class providing counters that I use for some
fine tuning.
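For illustration, reading FutureTask's private callable field via reflection looks roughly like this (a minimal sketch, not the patch's utility class; note that strong encapsulation on JDK 17+ may require --add-opens java.base/java.util.concurrent=ALL-UNNAMED, so the sketch falls back gracefully when access is denied):

```java
import java.lang.reflect.Field;
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallablePeek {
    public static void main(String[] args) throws Exception {
        Callable<String> task = () -> "done";
        FutureTask<String> future = new FutureTask<>(task);

        Callable<?> inner = null;
        try {
            // Access the private "callable" field of FutureTask.
            Field f = FutureTask.class.getDeclaredField("callable");
            f.setAccessible(true); // may need --add-opens on JDK 17+
            inner = (Callable<?>) f.get(future);
        } catch (RuntimeException e) {
            // InaccessibleObjectException under strong encapsulation:
            // metrics based on the callable would simply be skipped.
        }

        // When accessible, the wrapped callable is the one submitted, so
        // it can be inspected (e.g. by class name) to label per-task metrics.
        System.out.println(inner == task || inner == null);
    }
}
```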
> Improve performance of BufferedMutator
> --------------------------------------
>
> Key: HBASE-22634
> URL: https://issues.apache.org/jira/browse/HBASE-22634
> Project: HBase
> Issue Type: Improvement
> Components: Client
> Affects Versions: 2.1.5
> Environment: HDP 2.6.5
> Linux RedHat
> Reporter: Sebastien Barnoud
> Priority: Major
> Attachments: HBASE-22634.001.branch-2.patch
>
>
> The default ThreadPoolExecutor uses a default size of 1 (property
> hbase.htable.threads.max). When using a size > 1, we still encountered poor
> performance and exceptions while submitting to the pool (the pool exceeded its
> capacity).
> This patch proposes fixes for several issues encountered when the pool size
> is > 1:
> * thread safety issue
> * concurrent cleanup by Netty and the "legacy" code
> * errors in the backpressure
> * Netty memory leak
> And proposes a BufferedMutatorThreadPoolExecutor which:
> * uses hbase.client.max.total.tasks as the default size (instead of 1)
> * adds some useful metrics
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)