[ https://issues.apache.org/jira/browse/HBASE-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229686#comment-15229686 ]

stack commented on HBASE-15594:
-------------------------------

First, thanks for chiming in here [~carp84]. It helps to bounce experience off 
another.

bq. This is the case running a single YCSB instance? 

Yes.

I tried a single Connection with a pool size of 4 and then running many YCSB 
instances, but my numbers would not budge. When I set the pool size to #cpus, 
my throughput doubled.

Running multiple instances, each with connections == #cpus, doesn't seem to 
change my throughput, which is odd.
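
For the record, here is a minimal sketch of the client-side setup I mean -- one 
shared Connection with hbase.client.ipc.pool.size set to #cpus (an 
illustration against the 1.x client API, not the YCSB client code itself):

{code}
// Minimal sketch (not YCSB itself): one shared Connection whose RPC socket
// pool is sized to the number of client CPUs via hbase.client.ipc.pool.size.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class PooledConnection {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int cpus = Runtime.getRuntime().availableProcessors();
    // Size the client-side RPC pool to #cpus rather than the default.
    conf.setInt("hbase.client.ipc.pool.size", cpus);
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // Share this single Connection across all worker threads; each thread
      // gets its own lightweight Table via connection.getTable(...).
    }
  }
}
{code}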

I seem to have 'fixed' my issue where running 3 RS did not triple my 
throughput; rather, each RS was getting 1/3rd of what the lone RS was getting. 
The cause was this in the zk ensemble logs:

{code}
2016-04-06 14:55:36,218 WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2222] server.NIOServerCnxnFactory: Too many connections from /10.17.240.23 - max is 300
2016-04-06 14:55:36,218 WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2222] server.NIOServerCnxnFactory: Too many connections from /10.17.240.27 - max is 300
2016-04-06 14:55:36,218 WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2222] server.NIOServerCnxnFactory: Too many connections from /10.17.240.23 - max is 300
{code}

As a workaround, I upped hbase.zookeeper.property.maxClientCnxns from 300 to 
3000:

{code}
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>3000</value>
  <description>Property from ZooKeeper's config zoo.cfg.
  Limit on the number of concurrent connections (at the socket level) that a
  single client, identified by IP address, may make to a single member of
  the ZooKeeper ensemble. Set high to avoid zk connection issues running
  standalone and pseudo-distributed.</description>
</property>
{code}
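
To check how close the per-IP counts actually get to the limit, a rough sketch 
like the following dumps one line per open connection using ZooKeeper's 
four-letter 'cons' command (it assumes the ensemble port 2222 from the logs 
above and that four-letter-word commands are enabled):

{code}
// Rough diagnostic sketch: send ZooKeeper's four-letter "cons" command over a
// raw socket and print one line per open client connection, keyed by IP.
// Host defaults to localhost; port 2222 matches the ensemble logs above.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public class ZkCons {
  public static void main(String[] args) throws Exception {
    String host = args.length > 0 ? args[0] : "localhost"; // zk ensemble member
    try (Socket s = new Socket(host, 2222)) {
      s.getOutputStream().write("cons".getBytes());
      s.getOutputStream().flush();
      BufferedReader in =
          new BufferedReader(new InputStreamReader(s.getInputStream()));
      for (String line; (line = in.readLine()) != null; ) {
        System.out.println(line); // one line per open connection
      }
    }
  }
}
{code}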

Need to fix the zk issue.

There is also something up w/ naming of the connections...  will be back.




> [YCSB] Improvements
> -------------------
>
>                 Key: HBASE-15594
>                 URL: https://issues.apache.org/jira/browse/HBASE-15594
>             Project: HBase
>          Issue Type: Umbrella
>            Reporter: stack
>            Priority: Critical
>
> Running YCSB and getting good results is an arcane art. For example, in my 
> testing, a few handlers (100) with as many readers as I had CPUs (48), and 
> upping connections on clients to the same as #cpus, made for 2-3x the 
> throughput. The above config changes came from lore; which configurations 
> need tweaking is not obvious going by their names, there were no indications 
> from the app on where/why we were blocked or on which metrics are important 
> to consider, and none of this stuff was written down in the docs.
> Even so, I am stuck trying to make use of all of the machine. I am unable 
> to overrun a server with 8 client nodes trying to beat up a single node 
> (workloadc, all random-read, with no data returned: -p readallfields=false). 
> There is also a strange phenomenon where, if I add a few machines, rather 
> than 3x the YCSB throughput with 3 nodes in the cluster, each machine 
> instead does about 1/3rd of the single-node throughput.
> This umbrella issue is to host items that improve our defaults and noting how 
> to get good numbers running YCSB. In particular, I want to be able to 
> saturate a machine.
> Here are the configs I'm currently working with. I've not done the work to 
> figure out, client-side, whether they are optimal (it is weird how big a 
> difference client-side changes can make -- need to fix this). On my 48 cpu 
> machine, I can do about 370k random reads a second from data totally cached 
> in bucketcache. If I short-circuit the user Gets so they don't do any work 
> but return immediately, I can do 600k ops a second, but the CPUs are only at 
> 60-70%. I cannot get them to go above this. Working on it.
> {code}
> <property>
>   <name>hbase.ipc.server.read.threadpool.size</name>
>   <value>48</value>
> </property>
> <property>
>   <name>hbase.regionserver.handler.count</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.client.ipc.pool.size</name>
>   <value>100</value>
> </property>
> <property>
>   <name>hbase.htable.threads.max</name>
>   <value>48</value>
> </property>
> {code}
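> To make the client pattern concrete, below is a rough sketch of the 
> workloadc-style all-random-read loop against one shared Connection (not the 
> YCSB client itself; the table name "usertable" and column family "family" 
> are placeholders following YCSB convention):
> {code}
> // Rough sketch of a workloadc-style all-random-read client: one shared
> // Connection, one Table handle per reader thread, #cpus reader threads.
> // "usertable"/"family" are placeholders following YCSB convention.
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ThreadLocalRandom;
> import java.util.concurrent.TimeUnit;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class RandomReadLoad {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     int threads = Runtime.getRuntime().availableProcessors();
>     conf.setInt("hbase.client.ipc.pool.size", threads);
>     ExecutorService pool = Executors.newFixedThreadPool(threads);
>     try (Connection conn = ConnectionFactory.createConnection(conf)) {
>       for (int t = 0; t < threads; t++) {
>         pool.submit(() -> {
>           try (Table table = conn.getTable(TableName.valueOf("usertable"))) {
>             while (!Thread.currentThread().isInterrupted()) {
>               int key = ThreadLocalRandom.current().nextInt(1_000_000);
>               Get get = new Get(Bytes.toBytes("user" + key));
>               get.addFamily(Bytes.toBytes("family"));
>               table.get(get); // random read; result intentionally ignored
>             }
>           } catch (Exception e) {
>             // interrupted or RPC failure; let the reader thread exit
>           }
>         });
>       }
>       TimeUnit.SECONDS.sleep(60); // run for a fixed interval
>       pool.shutdownNow();
>       pool.awaitTermination(10, TimeUnit.SECONDS);
>     }
>   }
> }
> {code}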



