Are you doing scans or are you doing get() with a known key?

There's a big difference and scans are very expensive.
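For context, assuming the standard HBase shell and a hypothetical table/row key (names here are illustrative, not from the original thread), the two access patterns look like this:

```shell
# Point lookup: fetches a single row by its exact key.
# Touches one region and typically one store file block.
get 'mytable', 'row-12345'

# Scan: walks rows in key order. Without STARTROW/STOPROW it can
# touch every region in the table. Always bound your scans:
scan 'mytable', {STARTROW => 'row-12000', STOPROW => 'row-13000', LIMIT => 100}
```

A bounded scan over a few rows should behave much like a get; an unbounded scan is a very different (and much more expensive) operation.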

You also don't mention your hardware: how much memory, how many cores per
node, and how your MapReduce slots are configured. (Even if you're not running
an M/R job, you still have to account for it.)
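As a sketch of what "accounting for it" means on a node shared with MapReduce: give the region server a fixed heap and cap task slots so everything fits in physical RAM. The property names below are the standard Hadoop 0.20 / CDH3 ones; the values are illustrative assumptions, not recommendations:

```shell
# hbase-env.sh: pin the region server heap (size in MB; value is an assumption)
export HBASE_HEAPSIZE=4000
```

Correspondingly, cap `mapred.tasktracker.map.tasks.maximum` and `mapred.tasktracker.reduce.tasks.maximum` in mapred-site.xml so that (task slots × child JVM heap) + HBase heap + DataNode heap stays under the node's physical memory.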

There are so many things to look at....


> Subject: HBase reading performance
> To: [email protected]
> From: [email protected]
> Date: Tue, 19 Jul 2011 08:21:26 +0800
> 
> 
> 
> Hi there,
> HBase read performance is fine most of the time, but yesterday a scan of a
> few records took a very long time (over 500 seconds; it takes about 2 seconds
> when HBase is normal).
> I didn't know what to do at the time; it lasted about an hour, then HBase
> returned to normal.
> So my question is: is there any log or tool I can use to find out what was
> wrong with HBase at that time?
> I am afraid that compaction could have that great an impact on read/write
> performance. If so, is there any tuning I can do?
> Thank you.
> 
> Our cluster has 20 machines:
> Hadoop          20 machines
> Zookeeper        3
> RegionServer    10
> Hadoop version  hadoop-0.20.2-cdh3u0
> HBase version   hbase-0.90.1-cdh3u0
> 
> 
> 
> Fleming Chiu(邱宏明)
> Ext: 707-2260
> Be Veg, Go Green, Save the Planet!
> 
> 
> 
                                          
