Hi all,

  I ran a random-read performance test, and the results with
hfile.block.cache.size = 0 were better than with the default setting. Is
that possible?
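For reference, this is the setting I changed in hbase-site.xml (the property name is the standard one; 0 disables the block cache, and 0.2 is the 0.20.x default):

```xml
<!-- hbase-site.xml: fraction of the RegionServer heap given to the
     HFile block cache. 0 disables the cache entirely. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0</value>
</property>
```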

 My cluster (4 nodes):
 Hadoop 0.20.2, HBase 0.20.6
 1 * namenode & hmaster & zookeeper
 3 * datanode & regionserver

 P.S. Replication factor = 3; HBase heap size is 3500 MB.

There are 10 million records in my test table, each approximately 1 KB.

*The hfile.block.cache.size = 0*:
==============================================
java benchmark.HReadR *10000* 1
initial cost: 297 ms.
read: 198358 ms
read per: 19.8358 ms
*read thput: 50.4139 ops/sec*
==============================================
java benchmark.HReadR *100000* 1
initial cost: 285 ms.
read: 772474 ms
read per: 7.72474 ms
*read thput: 129.4542 ops/sec*
==============================================
java benchmark.HReadR *10000* 1
initial cost: 291 ms.
read: 43939 ms
read per: 4.3939 ms
*read thput: 227.58826 ops/sec*
==============================================
java benchmark.HReadR *100000* 1
initial cost: 292 ms.
read: 296763 ms
read per: 2.96763 ms
*read thput: 336.96924 ops/sec*
==============================================


*The hfile.block.cache.size = 0.2 (default)*:
==============================================
java benchmark.HReadR *10000* 1
initial cost: 282 ms.
read: 157538 ms
read per: 15.7538 ms
read thput: *63.47675* ops/sec
==============================================
java benchmark.HReadR *100000* 1
initial cost: 292 ms.
read: 983083 ms
read per: 9.83083 ms
read thput: *101.72081* ops/sec
==============================================
java benchmark.HReadR *10000* 1
initial cost: 286 ms.
read: 83260 ms
read per: 8.326 ms
read thput: *120.10569* ops/sec
==============================================
java benchmark.HReadR *100000* 1
initial cost: 288 ms.
read: 839874 ms
read per: 8.39874 ms
read thput: *119.065475* ops/sec
==============================================
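In case it helps to see how I read the numbers above: "read per" is total elapsed time divided by the number of reads, and "read thput" is reads per elapsed second. (This is not the HReadR source, which I haven't posted; it just recomputes the first cache-disabled run: 10000 reads in 198358 ms.)

```java
// Recompute the reported metrics for one run:
// 10000 single-threaded random reads finishing in 198358 ms total.
public class ThroughputCheck {
    public static void main(String[] args) {
        long reads = 10000;
        long totalMs = 198358;

        // Mean latency per read, in milliseconds.
        double perReadMs = (double) totalMs / reads;

        // Throughput: operations completed per elapsed second.
        double opsPerSec = reads / (totalMs / 1000.0);

        System.out.printf("read per: %.4f ms%n", perReadMs);     // 19.8358 ms
        System.out.printf("read thput: %.4f ops/sec%n", opsPerSec); // 50.4139 ops/sec
    }
}
```

The later runs in each series are faster mostly because of OS page cache and HDFS-side caching warming up, which is why I am confused that the HBase block cache itself seems to hurt.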


Shen
