Hi,

I am new to HBase and am evaluating it for our data warehouse solution. I am
trying the following two scenarios.

100,000 rows
Each rowkey has 5,000 column qualifiers spread across 3 column
families.
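For reference, a table with that shape could be created in the HBase shell roughly like this (the table and column family names here are placeholders, not from my actual setup):

```
create 'warehouse_test', 'cf1', 'cf2', 'cf3'
```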

I generate the following two kinds of load.

1.
  1.1  Generate the 100,000 rows with sequential INSERTs. By sequential
INSERT I mean that each insert of a rowkey also inserts all 5,000 column
qualifiers for that row. Each insert also performs some READs, since some of
the column families act as an index.
  1.2  After generating the table as above, I perform READ, SCAN and
INSERT operations on random column qualifiers, in random order.

2.
   Running the load generation (1.1) and the random read/scan/insert of
column qualifiers (1.2) in parallel.
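To make the access pattern concrete, here is a scaled-down sketch of the two workload phases in Python. This is only a model of the access pattern against an in-memory dict, not real HBase client code; the row counts, qualifier counts, and family names are illustrative stand-ins for the numbers above.

```python
import random

# Scaled-down stand-ins for the real workload
# (the actual test used 100,000 rows x 5,000 qualifiers).
N_ROWS = 100
N_QUALIFIERS = 50
FAMILIES = ["data", "idx1", "idx2"]  # hypothetical column family names

# In-memory stand-in for an HBase table: {rowkey: {"family:qualifier": value}}
table = {}

def insert_row(rowkey):
    """Phase 1.1: insert a whole row (all qualifiers at once).
    The index families are read first, as described in the mail."""
    _ = table.get(rowkey, {})  # index lookup (READ) preceding the write
    cells = {}
    for i in range(N_QUALIFIERS):
        fam = FAMILIES[i % len(FAMILIES)]
        cells[f"{fam}:q{i}"] = f"v{i}"
    table[rowkey] = cells

def random_op(rng):
    """Phase 1.2: a random READ, SCAN or INSERT on an existing rowkey."""
    rowkey = f"row{rng.randrange(N_ROWS)}"
    op = rng.choice(["read", "scan", "insert"])
    if op == "read":
        # READ one qualifier from the row
        return table[rowkey].get(f"{FAMILIES[0]}:q0")
    elif op == "scan":
        # SCAN a short range of rowkeys starting at this one
        return [k for k in sorted(table) if k >= rowkey][:10]
    else:
        # INSERT/update one random qualifier in the row
        i = rng.randrange(N_QUALIFIERS)
        fam = FAMILIES[i % len(FAMILIES)]
        table[rowkey][f"{fam}:q{i}"] = "updated"
        return None

rng = random.Random(42)
for r in range(N_ROWS):        # 1.1: sequential generation
    insert_row(f"row{r}")
for _ in range(200):           # 1.2: random read/scan/insert
    random_op(rng)

print(len(table))              # number of rows generated
print(len(table["row0"]))      # qualifiers per row (updates only overwrite)
```

Scenario 2 simply runs the generation loop and the random-op loop from separate threads at the same time, against the same rowkeys.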

Observed Behavior.

While 2 is running, I can see that reads and scans take longer than they did
in 1. This is fine: while an insert is in progress, reads are blocked because
the whole row is locked. But I do not see any significant difference in the
performance of inserts or updates. I expected inserts to be blocked as well
while a read or scan is in progress on the same rowkey, since a lock is held
for a given rowkey.
Please note that the READs, SCANs and INSERTs all happen on the same rowkeys.

Question: Does HBase give preference to writes over reads, or am I missing
something?

regards,
rks
