Hi all, I am benchmarking HBase. My HDFS cluster includes 4 servers (Dell 860s with 2 GB RAM each): one NameNode, one JobTracker, and two DataNodes.
My HBase cluster also comprises 4 servers (Dell 860s with 2 GB RAM each): one Master, two RegionServers, and one ZooKeeper node. I ran org.apache.hadoop.hbase.PerformanceEvaluation on the ZooKeeper server, with ROW_LENGTH changed from 1000 to ROW_LENGTH = 100*1024, so each value is 100 KB in size. The Hadoop version is 0.20.2, the HBase version is 0.20.3, and dfs.replication is set to 1. The command line was:

bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=10000 randomWrite 20

It took about one hour to complete the test (3468628 ms), which is about 60 writes per second. That performance seems disappointing. Is there anything I can do to make HBase perform better with 100 KB values? I haven't tried the methods mentioned in the performance wiki yet, because I thought 60 writes/sec was too low a starting point. With 1 KB values, HBase performs much better: 200000 sequentialWrite rows took about 16 seconds, roughly 12500 writes per second. I am now benchmarking with two clients on 2 servers; no results yet.
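For reference, here is how the throughput figures above work out from the raw numbers (just a sanity check on my arithmetic, not part of the PE tool; the class and method names are my own):

```java
// Sanity-check the writes/sec figures quoted above.
public class Throughput {
    // totalWrites completed in elapsedMs milliseconds -> writes per second
    static double writesPerSec(long totalWrites, double elapsedMs) {
        return totalWrites / (elapsedMs / 1000.0);
    }

    public static void main(String[] args) {
        // 100 KB values: 20 clients x 10000 rows = 200000 writes in 3468628 ms
        double big = writesPerSec(20L * 10000, 3468628);   // ~57.7 writes/sec
        // 1 KB values: 200000 sequential writes in ~16 s
        double small = writesPerSec(200000, 16000);        // 12500 writes/sec

        System.out.printf("100 KB values: %.1f writes/sec (~%.1f MB/s)%n",
                big, big * 100 * 1024 / 1e6);
        System.out.printf("1 KB values:   %.1f writes/sec (~%.1f MB/s)%n",
                small, small * 1000 / 1e6);
    }
}
```

Note that in byte terms the gap is smaller than it looks: ~57.7 writes/sec of 100 KB values is still around 5.9 MB/s, versus ~12.5 MB/s for the 1 KB case.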