Dear Wiki user, You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The following page has been changed by stack:
http://wiki.apache.org/hadoop/Hbase/PerformanceEvaluation

The comment on the change is:
Numbers for 0.19.0 hbase.

------------------------------------------------------------------------------
 * [#second_test September 2007, Second Evaluation of Region Server] -- September 16th, 2007
 * [#0_1_2 0.1.2 HBase Performance Evaluation Run] -- April 25th, 2008
 * [#0_2_0 0.2.0 HBase Performance Evaluation Run] -- August 8th, 2008
+ * [#0_19_0RC1 0.19.0RC1 HBase Performance Evaluation Run] -- January 16th, 2009

[[Anchor(description)]]
== Tool Description ==
@@ -167, +168 @@
||sequential writes||1691||2479||2076||1966||5494||6204||5684||5800||8547||
||scans||3731||6278||3737||3784||25641||47662||55692||58054||15385||

+ [[Anchor(0_19_0RC1)]]
+ == HBase 0.19.0RC1 01/16/2009 ==
+ Numbers for hbase 0.19.0RC1 on hadoop 0.19.0 and java6.
+
+ {{{[st...@aa0-000-13 ~]$ ~/bin/jdk/bin/java -version
+ java version "1.6.0_11"
+ Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
+ Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)}}}
+
+ Tried java7 ({{{build 1.7.0-ea-b43}}}) but saw no discernible difference.
+
+ Also includes numbers for a raw hadoop mapfile (see the mapfile sketch at the end of this mail for roughly what that baseline measures). The table repeats the last test from above, 0.2.0java6 (on hadoop 0.17.2), for easy comparison.
+
+ The cluster was started fresh for each test; we then waited for all regions to be deployed before starting the clients. The speedup is a combination of hdfs improvements, hbase improvements -- including batching when writing and scanning (the bigtable PE description alludes to scans using prefetch; see the client write sketch at the end of this mail) -- and the use of two JBOD'd disks, as in the google paper, whereas in the previous tests above all disks were RAID'd. Otherwise the hardware is the same: similar to the bigtable paper's dual dual-core opterons, 1G for hbase, etc.
+
+ ||<rowbgcolor="#ececec">Experiment Run (rows/second)||0.2.0java6||mapfile0.17.1||0.19.0RC1!Java6||mapfile0.19.0||!BigTable||
+ ||random reads ||428||568||540||-||1212||
+ ||random reads (mem)||-||-||-||-||10811||
+ ||random writes||2167||2218||9986||-||8850||
+ ||sequential reads||427||582||464||-||4425||
+ ||sequential writes||2076||5684||9892||-||8547||
+ ||scans||3737||55692||20971||-||15385||
+
+ Some improvement writing and scanning (seemingly faster than the bigtable paper). Random reads still lag. Sequential reads lag badly; a bit of fetch-ahead, as we did for scanning, should help here.
+
+ Will post a new set of numbers for 8 concurrent clients in a while, so we can start tracking how we do with contending clients.
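For reference, a minimal sketch (not the actual harness used for the runs above) of what a raw hadoop mapfile baseline measures: sequential appends of sorted keys followed by point lookups, straight against the filesystem with no hbase layer in between. The output directory, key format and row count below are made up for illustration.

{{{import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileBaseline {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    String dir = "/tmp/pe-mapfile";   // hypothetical output directory
    byte[] cell = new byte[1000];     // 1000-byte values, for illustration

    // Write phase: MapFile requires keys to be appended in sorted order,
    // hence the zero-padded row keys.
    MapFile.Writer writer =
        new MapFile.Writer(conf, fs, dir, Text.class, BytesWritable.class);
    for (int i = 0; i < 100000; i++) {
      writer.append(new Text(String.format("%010d", i)), new BytesWritable(cell));
    }
    writer.close();

    // Read phase: point lookups against the sorted file stand in for the
    // read measurements.
    MapFile.Reader reader = new MapFile.Reader(fs, dir, conf);
    BytesWritable value = new BytesWritable();
    reader.get(new Text(String.format("%010d", 12345)), value);
    reader.close();
  }
}}}}

A bare mapfile skips RPC, region lookups and the write-ahead log, so its columns serve as a rough ceiling for the hbase numbers on the same hardware.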
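And a minimal sketch of the 0.19-era client write path, assuming the BatchUpdate API of that release (later replaced by Put); the table name, column family and row count are hypothetical. A BatchUpdate groups all of a row's column edits into one commit; the broader write batching and scan prefetch changes mentioned above involve more than is shown here.

{{{import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchUpdateSketch {
  public static void main(String[] args) throws IOException {
    HBaseConfiguration conf = new HBaseConfiguration();
    // "TestTable" and the "info:" family are made-up names for illustration.
    HTable table = new HTable(conf, "TestTable");

    for (int i = 0; i < 1000; i++) {
      // One BatchUpdate per row; every put on it travels in a single commit.
      BatchUpdate update = new BatchUpdate(Bytes.toBytes(String.format("row-%07d", i)));
      update.put("info:data", Bytes.toBytes("value-" + i));
      table.commit(update);
    }
  }
}}}}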
