Yeah, that was my thinking.  I'm not sure what configuration they had.  Is
it this page?:

http://wiki.apache.org/hadoop/Hbase/PerformanceEvaluation#0_19_0
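
(For what it's worth, I believe the no-mapred randomWrite run quoted below
is kicked off with something like the following; the flags are from the
PerformanceEvaluation usage string:)

  bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomWrite 3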

I tried a simple test program doing reads, something like this (the row
keys come from a JDBC ResultSet named rset):
                        // imports: org.apache.hadoop.hbase.client.{HTable, Get},
                        //          org.apache.hadoop.hbase.util.Bytes
                        int n = 1000000;

                        long start = System.currentTimeMillis();
                        // advance the cursor on every iteration so each Get
                        // fetches a different row
                        for (int i = 0; i < n && rset.next(); ++i) {
                                byte[] row = Bytes.toBytes(rset.getString(1));
                                table.get(new Get(row));   // one random read per key
                        }
                        System.out.println((System.currentTimeMillis() - start) + "ms");

For 10,000 reads I get 4750ms (~0.48ms per Get).
For 1,000,000 I get 346242ms (~5.8 minutes, ~0.35ms per Get).

Must be something wrong with my cluster setup.  (The quoted randomWrite run
below works out to 3,145,728 rows in roughly 2,630 seconds across the three
clients, about 1,200 writes/second aggregate, versus the 5-10k/second cited
on the wiki page.)
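
To get at the "where is it slow" question, I figure I can time each Get
individually and flag outliers, to see whether the latency is uniform or
dominated by a few slow calls.  A rough sketch, reusing the same table and
rset as above (the 100ms threshold is arbitrary):

                        long worst = 0, total = 0;
                        int count = 0;
                        while (rset.next()) {
                                String key = rset.getString(1);
                                long t0 = System.currentTimeMillis();
                                table.get(new Get(Bytes.toBytes(key)));
                                long dt = System.currentTimeMillis() - t0;
                                total += dt;
                                ++count;
                                if (dt > worst) worst = dt;
                                if (dt > 100)   // arbitrary outlier threshold
                                        System.out.println("slow get (" + dt + "ms): " + key);
                        }
                        System.out.println(count + " gets, avg "
                                + (count == 0 ? 0 : (double) total / count) + "ms, worst "
                                + worst + "ms");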




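On the write side (the randomWrite run quoted below), one thing I haven't
tried yet is client-side write buffering, so each Put isn't its own round
trip to the region server.  A minimal sketch against the 0.20 client API
(conf is an HBaseConfiguration, n the row count; the 12MB buffer and the
"info"/"data" column names, which I believe are what PerformanceEvaluation
uses, are just placeholders):

                        // imports: org.apache.hadoop.hbase.client.{HTable, Put},
                        //          org.apache.hadoop.hbase.util.Bytes
                        HTable table = new HTable(conf, "TestTable");
                        table.setAutoFlush(false);                  // buffer Puts client-side
                        table.setWriteBufferSize(12 * 1024 * 1024); // 12MB, arbitrary
                        for (int i = 0; i < n; ++i) {
                                Put put = new Put(Bytes.toBytes(String.format("%010d", i)));
                                put.add(Bytes.toBytes("info"), Bytes.toBytes("data"),
                                        new byte[1024]);            // 1KB dummy value
                                table.put(put);  // queued; flushed when the buffer fills
                        }
                        table.flushCommits();    // push whatever is still buffered
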
stack-3 wrote:
> 
> Yeah, seems slow.  In old HBase, it could do 5-10k writes a second going
> by the performance evaluation page up on the wiki.  SequentialWrite was
> about the same as RandomWrite.  Check out the stats on hardware up on that
> page and the description of how the test was set up.  Can you figure out
> where it's slow?
> 
> St.Ack
> 
> On Wed, Aug 12, 2009 at 10:10 AM, llpind <[email protected]> wrote:
> 
>>
>> Thanks Stack.
>>
>> I will try mapred with more clients.  I tried it without mapred, using 3
>> clients doing randomWrite operations; here is the output:
>>
>> 09/08/12 09:22:52 INFO hbase.PerformanceEvaluation: client-0 Start
>> randomWrite at offset 0 for 1048576 rows
>> 09/08/12 09:22:52 INFO hbase.PerformanceEvaluation: client-1 Start
>> randomWrite at offset 1048576 for 1048576 rows
>> 09/08/12 09:22:52 INFO hbase.PerformanceEvaluation: client-2 Start
>> randomWrite at offset 2097152 for 1048576 rows
>> 09/08/12 09:24:23 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1153427/2097152
>> 09/08/12 09:24:23 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2201997/3145728
>> 09/08/12 09:24:25 INFO hbase.PerformanceEvaluation: client-0
>> 0/104857/1048576
>> 09/08/12 09:27:42 INFO hbase.PerformanceEvaluation: client-0
>> 0/209714/1048576
>> 09/08/12 09:27:46 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1258284/2097152
>> 09/08/12 09:27:46 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2306854/3145728
>> 09/08/12 09:32:32 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1363141/2097152
>> 09/08/12 09:32:33 INFO hbase.PerformanceEvaluation: client-0
>> 0/314571/1048576
>> 09/08/12 09:32:41 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2411711/3145728
>> 09/08/12 09:35:31 INFO hbase.PerformanceEvaluation: client-0
>> 0/419428/1048576
>> 09/08/12 09:35:34 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1467998/2097152
>> 09/08/12 09:35:53 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2516568/3145728
>> 09/08/12 09:39:02 INFO hbase.PerformanceEvaluation: client-0
>> 0/524285/1048576
>> 09/08/12 09:39:03 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2621425/3145728
>> 09/08/12 09:40:07 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1572855/2097152
>> 09/08/12 09:42:53 INFO hbase.PerformanceEvaluation: client-0
>> 0/629142/1048576
>> 09/08/12 09:44:25 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2726282/3145728
>> 09/08/12 09:44:44 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1677712/2097152
>> 09/08/12 09:46:43 INFO hbase.PerformanceEvaluation: client-0
>> 0/733999/1048576
>> 09/08/12 09:48:11 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2831139/3145728
>> 09/08/12 09:48:29 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1782569/2097152
>> 09/08/12 09:50:12 INFO hbase.PerformanceEvaluation: client-0
>> 0/838856/1048576
>> 09/08/12 09:52:47 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/2935996/3145728
>> 09/08/12 09:53:51 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1887426/2097152
>> 09/08/12 09:56:32 INFO hbase.PerformanceEvaluation: client-0
>> 0/943713/1048576
>> 09/08/12 09:58:32 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/3040853/3145728
>> 09/08/12 09:59:14 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/1992283/2097152
>> 09/08/12 10:02:28 INFO hbase.PerformanceEvaluation: client-0
>> 0/1048570/1048576
>> 09/08/12 10:02:30 INFO hbase.PerformanceEvaluation: client-0 Finished
>> randomWrite in 2376615ms at offset 0 for 1048576 rows
>> 09/08/12 10:02:30 INFO hbase.PerformanceEvaluation: Finished 0 in
>> 2376615ms
>> writing 1048576 rows
>> 09/08/12 10:06:35 INFO hbase.PerformanceEvaluation: client-2
>> 2097152/3145710/3145728
>> 09/08/12 10:06:38 INFO hbase.PerformanceEvaluation: client-2 Finished
>> randomWrite in 2623395ms at offset 2097152 for 1048576 rows
>> 09/08/12 10:06:38 INFO hbase.PerformanceEvaluation: Finished 2 in
>> 2623395ms
>> writing 1048576 rows
>> 09/08/12 10:06:42 INFO hbase.PerformanceEvaluation: client-1
>> 1048576/2097140/2097152
>> 09/08/12 10:06:43 INFO hbase.PerformanceEvaluation: client-1 Finished
>> randomWrite in 2630199ms at offset 1048576 for 1048576 rows
>> 09/08/12 10:06:43 INFO hbase.PerformanceEvaluation: Finished 1 in
>> 2630199ms
>> writing 1048576 rows
>>
>>
>>
>> Seems kind of slow for ~3M records.  I have a 4-node cluster up at the
>> moment; the HMaster and the NameNode are running on the same box.
> 
> 
