[
https://issues.apache.org/jira/browse/HBASE-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266349#comment-14266349
]
stack commented on HBASE-12684:
-------------------------------
Slower. Almost 8k requests a second, as opposed to almost 10k with the previous
patch and almost 12k with the old client.
{code}
"TestClient-0" #98 prio=5 os_prio=0 tid=0x00007f6cc9fbf800 nid=0x3f6b in
Object.wait() [0x00007f6ca0bbb000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:460)
at
io.netty.util.concurrent.DefaultPromise.await0(DefaultPromise.java:355)
- locked <0x00000000fe21ea78> (a org.apache.hadoop.hbase.ipc.AsyncCall)
at
io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:266)
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:42)
at org.apache.hadoop.hbase.ipc.AsyncCall.get(AsyncCall.java:142)
at org.apache.hadoop.hbase.ipc.AsyncCall.get(AsyncCall.java:43)
at
org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:164)
at
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:30860)
at org.apache.hadoop.hbase.client.HTable$4.call(HTable.java:873)
at org.apache.hadoop.hbase.client.HTable$4.call(HTable.java:864)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:881)
at
org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testRow(PerformanceEvaluation.java:1253)
at
org.apache.hadoop.hbase.PerformanceEvaluation$Test.testTimed(PerformanceEvaluation.java:1039)
at
org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:1021)
at
org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1515)
at
org.apache.hadoop.hbase.PerformanceEvaluation$1.call(PerformanceEvaluation.java:408)
at
org.apache.hadoop.hbase.PerformanceEvaluation$1.call(PerformanceEvaluation.java:403)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
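What the trace above amounts to, as a minimal standalone sketch (my illustration, not HBase code; class and variable names are made up): the blocking stub path wraps an async Netty promise, and the test thread parks in get() until the event-loop side completes it, which is the DefaultPromise.await0()/Object.wait() frame in the dump.
{code}
// Hedged sketch of the blocking-over-async pattern visible above (not HBase code).
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.Promise;

import java.util.concurrent.TimeUnit;

public class BlockingOverAsyncSketch {
  public static void main(String[] args) throws Exception {
    DefaultEventExecutorGroup group = new DefaultEventExecutorGroup(1);
    EventExecutor executor = group.next();

    // Stand-in for AsyncCall: a promise that the I/O side completes later.
    Promise<String> call = new DefaultPromise<>(executor);

    // Simulate the event loop delivering the RPC response a little later.
    executor.schedule(() -> { call.setSuccess("row-contents"); }, 10, TimeUnit.MILLISECONDS);

    // The "blocking" caller: this is where the TestClient thread sits in
    // TIMED_WAITING in the dump (DefaultPromise.await0 -> Object.wait).
    String result = call.get();
    System.out.println(result);

    group.shutdownGracefully().syncUninterruptibly();
  }
}
{code}
So each TestClient thread is parked on a promise and has to be notified per response by the event-loop thread.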
Here is an actual write:
{code}
"TestClient-1" #99 prio=5 os_prio=0 tid=0x00007f6cc9fc0800 nid=0x3f6c runnable
[0x00007f6ca0aba000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.interrupt(Native Method)
at sun.nio.ch.EPollArrayWrapper.interrupt(EPollArrayWrapper.java:317)
at sun.nio.ch.EPollSelectorImpl.wakeup(EPollSelectorImpl.java:193)
- locked <0x00000000fb1ebb70> (a java.lang.Object)
at io.netty.channel.nio.NioEventLoop.wakeup(NioEventLoop.java:591)
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:735)
at
io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:884)
at
io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:735)
at
io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
at
io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
at
io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
at
io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
at
org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:433)
at
org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:324)
at
org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethodWithPromise(AsyncRpcChannel.java:346)
at
org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:161)
at
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:30860)
at org.apache.hadoop.hbase.client.HTable$4.call(HTable.java:873)
at org.apache.hadoop.hbase.client.HTable$4.call(HTable.java:864)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:881)
at
org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testRow(PerformanceEvaluation.java:1253)
at
org.apache.hadoop.hbase.PerformanceEvaluation$Test.testTimed(PerformanceEvaluation.java:1039)
at
org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:1021)
at
org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1515)
at
org.apache.hadoop.hbase.PerformanceEvaluation$1.call(PerformanceEvaluation.java:408)
at
org.apache.hadoop.hbase.PerformanceEvaluation$1.call(PerformanceEvaluation.java:403)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
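The RUNNABLE frames above are the caller thread handing the request write off to the Netty event loop: it is not the event-loop thread, so SingleThreadEventExecutor.execute() enqueues the task and has to wake the epoll selector, which is the EPollArrayWrapper.interrupt() it is burning time in. A minimal sketch of that dispatch decision (my illustration, not HBase code):
{code}
// Hedged sketch: a task submitted from outside the event loop is enqueued and
// may require a selector wakeup; a task submitted from inside runs without one.
import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;

public class EventLoopWakeupSketch {
  public static void main(String[] args) {
    NioEventLoopGroup group = new NioEventLoopGroup(1);
    EventLoop loop = group.next();

    // From the main thread inEventLoop() is false, so execute() enqueues the
    // task and, if the loop is blocked in select(), wakes the selector --
    // the EPollSelectorImpl.wakeup() path in the dump above.
    System.out.println("caller thread inEventLoop(): " + loop.inEventLoop());
    loop.execute(() ->
        System.out.println("event-loop thread inEventLoop(): " + loop.inEventLoop()));

    group.shutdownGracefully().syncUninterruptibly();
  }
}
{code}
Every writeAndFlush() issued from a TestClient thread is one of these cross-thread handoffs, so there is an enqueue, and often a selector wakeup, per request.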
I don't see allocation of buffers in the thread dumps I've taken, not since I
moved from directBuffer to buffer (though you are saying this was doing direct
buffers...).
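For reference, the heapBuffer/directBuffer switch referred to above, as a minimal Netty allocator sketch (my illustration; not the patch's code):
{code}
// Hedged sketch of heap vs direct allocation with Netty's allocator.
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public class BufferChoiceSketch {
  public static void main(String[] args) {
    ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;

    ByteBuf heap = alloc.heapBuffer(4096);     // backed by a byte[] on the Java heap
    ByteBuf direct = alloc.directBuffer(4096); // backed by off-heap native memory

    System.out.println("heap.hasArray()   = " + heap.hasArray());   // true
    System.out.println("direct.isDirect() = " + direct.isDirect()); // true

    heap.release();
    direct.release();
  }
}
{code}
Here are the perf counter stats for the randomRead run: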
{code}
Performance counter stats for '/home/stack/hadoop/bin/hadoop --config /home/stack/conf_hadoop org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=1000000 randomRead 5':

      1681508.816798 task-clock                #    2.479 CPUs utilized
          16,052,372 context-switches          #    0.010 M/sec
           1,348,428 CPU-migrations            #    0.802 K/sec
              99,284 page-faults               #    0.059 K/sec
   2,945,942,052,546 cycles                    #    1.752 GHz                      [83.31%]
   2,155,971,929,043 stalled-cycles-frontend   #   73.18% frontend cycles idle     [83.35%]
   1,244,094,059,178 stalled-cycles-backend    #   42.23% backend cycles idle      [66.65%]
   1,773,552,341,725 instructions              #    0.60  insns per cycle
                                               #    1.22  stalled cycles per insn  [83.34%]
     361,538,087,564 branches                  #  215.008 M/sec                    [83.36%]
       6,920,128,168 branch-misses             #    1.91% of all branches          [83.34%]

       678.298350345 seconds time elapsed
{code}
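Rough arithmetic on the numbers above (my back-of-the-envelope check, assuming each of the 5 clients performs the full --rows=1000000 gets and using the wall-clock elapsed time, which includes setup):
{code}
// Back-of-the-envelope numbers derived from the perf stat output above.
public class PerfStatArithmetic {
  public static void main(String[] args) {
    double elapsedSeconds = 678.298350345;   // "seconds time elapsed"
    long totalGets = 5L * 1_000_000L;        // 5 clients x --rows=1000000 (assumed)
    long contextSwitches = 16_052_372L;      // "context-switches"

    System.out.printf("~%.0f requests/sec (lower bound)%n", totalGets / elapsedSeconds);  // ~7.4k/sec
    System.out.printf("~%.0f context switches/sec%n", contextSwitches / elapsedSeconds);  // ~23.7k/sec
    System.out.printf("~%.1f context switches per get%n",
        (double) contextSwitches / totalGets);                                            // ~3.2
  }
}
{code}
~7.4k requests a second is in the same ballpark as the "almost 8k" figure at the top, and roughly three context switches per get would fit a caller thread handing the write to the event loop and then being woken for the response.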
> Add new AsyncRpcClient
> ----------------------
>
> Key: HBASE-12684
> URL: https://issues.apache.org/jira/browse/HBASE-12684
> Project: HBase
> Issue Type: Improvement
> Components: Client
> Reporter: Jurriaan Mous
> Assignee: Jurriaan Mous
> Attachments: HBASE-12684-DEBUG2.patch, HBASE-12684-DEBUG3.patch,
> HBASE-12684-v1.patch, HBASE-12684-v10.patch, HBASE-12684-v11.patch,
> HBASE-12684-v12.patch, HBASE-12684-v13.patch, HBASE-12684-v14.patch,
> HBASE-12684-v15.patch, HBASE-12684-v16.patch, HBASE-12684-v17.patch,
> HBASE-12684-v17.patch, HBASE-12684-v18.patch, HBASE-12684-v19.1.patch,
> HBASE-12684-v19.patch, HBASE-12684-v19.patch, HBASE-12684-v2.patch,
> HBASE-12684-v20-heapBuffer.patch, HBASE-12684-v20.patch,
> HBASE-12684-v21-heapBuffer.1.patch, HBASE-12684-v21-heapBuffer.patch,
> HBASE-12684-v21.patch, HBASE-12684-v3.patch, HBASE-12684-v4.patch,
> HBASE-12684-v5.patch, HBASE-12684-v6.patch, HBASE-12684-v7.patch,
> HBASE-12684-v8.patch, HBASE-12684-v9.patch, HBASE-12684.patch, requests.png
>
>
> With the changes in HBASE-12597 it is possible to add new RpcClients. This
> issue is about adding a new async RpcClient which would enable HBase to do
> non-blocking protobuf service communication.
> Besides delivering a new AsyncRpcClient, I would also like to ask what it
> would take to replace the current RpcClient. That would make it possible to
> simplify async code in subsequent issues.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)