[
https://issues.apache.org/jira/browse/HBASE-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229582#comment-15229582
]
Yu Li commented on HBASE-15594:
-------------------------------
bq. There is no ConnectionId going on
Oh, I meant the code below in {{RpcClientImpl#getConnection}}:
{code}
ConnectionId remoteId =
    new ConnectionId(ticket, call.md.getService().getName(), addr);
synchronized (connections) {
  connection = connections.get(remoteId);
  if (connection == null) {
    connection = createConnection(remoteId, this.codec, this.compressor);
    connections.put(remoteId, connection);
  }
}
{code}
Since {{connections}} there is a {{PoolMap}} keyed by {{ConnectionId}}, when the pool size is larger than 1 more connections will be created to the RS.
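For reference, a minimal client-side sketch (the property name comes from the configs quoted below; sizing it to the client's CPU count and the {{usertable}}/{{user1}} names are just illustrative assumptions) of how a larger pool could be requested:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PoolSizeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Illustrative assumption: size the IPC pool to the client's CPU count so the
    // RPC client can keep several sockets open to the same RS instead of just one.
    conf.setInt("hbase.client.ipc.pool.size", Runtime.getRuntime().availableProcessors());
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("usertable"))) {
      table.get(new Get(Bytes.toBytes("user1")));
    }
  }
}
{code}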
bq. For each connection, there is its own zk connection which makes no sense... Need to fix this
Agreed, good catch sir!
bq. Setting hbase.client.ipc.pool.size to #cpus doubles my ycsb throughput. At the default of 1 or 2 or 12, my throughput is way less.
Is this the case when running a single YCSB instance? If so, I think I've found the key
point. When we hit YCSB-651 in our test, we were running 8 YCSB instances, each
with 100 threads, so there would be 800 connections created, way more than
#cpus. Maybe in my case setting hbase.client.ipc.pool.size to #cpus/8 is the
best choice; let me check and confirm.
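Spelling out that arithmetic (numbers taken from the scenario above; that the YCSB-651 case ends up with roughly one connection per client thread is my reading of it):
{code}
// Back-of-the-envelope connection counts for 8 YCSB instances against one RS.
int cpus = 48;
int ycsbInstances = 8;
int threadsPerInstance = 100;

// YCSB-651 case: roughly one connection per client thread.
int unpooledConnections = ycsbInstances * threadsPerInstance; // 800, way more than #cpus

// Bounded case: hbase.client.ipc.pool.size = #cpus/8 per instance.
int poolSizePerInstance = cpus / ycsbInstances;               // 6
int pooledConnections = ycsbInstances * poolSizePerInstance;  // 48, about one per server CPU
{code}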
> [YCSB] Improvements
> -------------------
>
> Key: HBASE-15594
> URL: https://issues.apache.org/jira/browse/HBASE-15594
> Project: HBase
> Issue Type: Umbrella
> Reporter: stack
> Priority: Critical
>
> Running YCSB and getting good results is an arcane art. For example, in my
> testing, a few handlers (100) with as many readers as I had CPUs (48), and
> upping connections on clients to the same as #cpus made for 2-3x the
> throughput. The above config changes came of lore; which configurations need
> tweaking is not obvious going by their names, there were no indications from
> the app on where/why we were blocked or on which metrics are important to
> consider. Nor was any of this stuff written down in docs.
> Even still, I am stuck trying to make use of all of the machine. I am unable
> to overrun a server even with 8 client nodes trying to beat up a single node
> (workloadc, all random-read, with no data returned -p readallfields=false).
> There is also a strange phenomenon: if I add a few machines, rather than
> getting 3x the YCSB throughput with 3 nodes in the cluster, each machine
> instead does about 1/3rd.
> This umbrella issue is to host items that improve our defaults and note how
> to get good numbers running YCSB. In particular, I want to be able to
> saturate a machine.
> Here are the configs I'm currently working with. I've not done the work to
> figure out client-side if they are optimal (weird is how big a difference
> client-side changes can make -- need to fix this). On my 48 cpu machine, I
> can do about 370k random reads a second from data totally cached in
> bucketcache. If I short-circuit the user Gets so they don't do any work but
> return immediately, I can do 600k ops a second but the CPUs are at 60-70%
> only. I cannot get them to go above this. Working on it.
> {code}
> <property>
>   <name>hbase.ipc.server.read.threadpool.size</name>
>   <value>48</value>
> </property>
> <property>
>   <name>hbase.regionserver.handler.count</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.client.ipc.pool.size</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.htable.threads.max</name>
>   <value>48</value>
> </property>
> {code}