[ https://issues.apache.org/jira/browse/HBASE-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15290493#comment-15290493 ]

stack commented on HBASE-15594:
-------------------------------

In YCSB trunk, there is an asynchbase binding now. I ran some compares. I 
also tried more than one RS on a node. In general, the asynchbase client is 
able to drive a bit more load (with far fewer threads in the client). Four 
RS on a node do better than one if you size the handlers and readers 
appropriately; i.e. give each instance 1/4 of the CPUs (let's fix this by 
defaulting handlers and readers from the cpu-count rather than asking users 
to guess what is good!). A single RS w/ hbase10 could do 115k random reads 
(workloadc) with CPUs about 35% idle. Four RS on a single node w/ asynchbase 
could do about 175k with about 25% idle (six nodes each running a client 
with 48 threads -- the number of CPUs -- against a single-node server). The 
contention I can see in JFR is in setting up the Scanner (registering the 
scanner in the Region Map), in purging the WeakHashMap of bucketcache locks 
(this is an L1/L2 setup), and in TimeRangeTracker. There are some rough 
notes here: 
https://docs.google.com/document/d/1oyzHaue__mdnKEQrgeLVrufRIGqI08AHi7RszDPqmoI/edit?usp=sharing
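
Something like the below should exercise the asynchbase binding for the 
compare (a sketch only; the table name, column family, and thread count are 
assumptions about the setup, so check them against the binding's README):

{code}
# asynchbase binding from YCSB trunk, workloadc, no data returned
# (table/columnfamily values here are assumptions, not from the actual run)
bin/ycsb run asynchbase -P workloads/workloadc -threads 48 \
  -p table=usertable -p columnfamily=family -p readallfields=false
{code}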



> [YCSB] Improvements
> -------------------
>
>                 Key: HBASE-15594
>                 URL: https://issues.apache.org/jira/browse/HBASE-15594
>             Project: HBase
>          Issue Type: Umbrella
>            Reporter: stack
>            Priority: Critical
>
> Running YCSB and getting good results is an arcane art. For example, in my 
> testing, a modest handler count (100) with as many readers as I had CPUs 
> (48), plus upping connections on clients to the same as the #cpus, made for 
> 2-3x the throughput. The above config changes came from lore; which 
> configurations need tweaking is not obvious going by their names, there was 
> no indication from the app of where/why we were blocked or of which metrics 
> are important to consider, and none of this stuff was written down in docs.
> Even so, I am still stuck trying to make use of all of the machine. I am 
> unable to overrun a server even with 8 client nodes trying to beat up a 
> single node (workloadc, all random-read, with no data returned: 
> -p readallfields=false). There is also a strange phenomenon where, if I add 
> a few client machines, rather than getting 3x the YCSB throughput when 
> there are 3 nodes in the cluster, each machine instead does about 1/3rd.
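> The per-client run is along these lines (a sketch; the thread count and 
> table/column-family properties are assumptions about the setup):
> {code}
> # Run from each of the 8 client nodes against the single server;
> # readallfields=false means the random reads return no data
> bin/ycsb run hbase10 -P workloads/workloadc -threads 48 \
>   -p table=usertable -p columnfamily=family -p readallfields=false
> {code}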
> This umbrella issue is to host items that improve our defaults and to note 
> how to get good numbers running YCSB. In particular, I want to be able to 
> saturate a machine.
> Here are the configs I'm currently working with. I've not done the work to 
> figure out whether they are optimal client-side (it is weird how big a 
> difference client-side changes can make -- need to fix this). On my 48-cpu 
> machine, I can do about 370k random reads a second from data totally cached 
> in bucketcache. If I short-circuit the user Gets so they don't do any work 
> but return immediately, I can do 600k ops a second, but the CPUs are only 
> at 60-70%. I cannot get them to go above this. Working on it.
> {code}
> <property>
>   <name>hbase.ipc.server.read.threadpool.size</name>
>   <value>48</value>
> </property>
> <property>
>   <name>hbase.regionserver.handler.count</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.client.ipc.pool.size</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.htable.threads.max</name>
>   <value>48</value>
> </property>
> {code}
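> For the client side to pick these up, the directory holding this 
> hbase-site.xml needs to be on YCSB's classpath; e.g. something like the 
> below (the conf path is illustrative, and -cp support depends on your 
> bin/ycsb version):
> {code}
> # Sketch: add the conf dir carrying the above hbase-site.xml to the
> # classpath of the YCSB client
> bin/ycsb run hbase10 -cp /etc/hbase/conf -P workloads/workloadc -threads 48
> {code}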


