Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for 
change notification.

The following page has been changed by stack:
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ

------------------------------------------------------------------------------
  
  Running an Hbase loaded with more than a few regions, it's possible to blow past 
the operating system's file handle limit for the user running the process.  Running 
out of file handles is like an OOME: things start to fail in strange ways.  To 
raise the user's file handle limit, edit '''/etc/security/limits.conf''' on all nodes 
and restart your cluster.
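  
  For example, a minimal '''/etc/security/limits.conf''' entry might look like the 
following sketch, assuming the hbase/hadoop daemons run as a hypothetical user 
named ''hadoop'' (the 32768 ceiling is illustrative, not a tested recommendation):
  
  {{{
  # /etc/security/limits.conf
  # <domain>  <type>  <item>   <value>
  hadoop      soft    nofile   32768
  hadoop      hard    nofile   32768
  }}}
  
  Note these settings only take effect if the session that starts the daemons 
passes through pam_limits (e.g. a login shell); verify the new limit with 
`ulimit -n` as that user.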
  
- '''6. [[Anchor(6)]] Performance?'''
+ '''6. [[Anchor(6)]] What can I do to improve hbase performance?'''
  
  To improve random-read performance, if you can, try making the hdfs block 
size smaller (as is suggested in the Bigtable paper).  By default it's 64MB.  
Try setting it to 8MB.  On every random read, hbase has to fetch from hdfs the 
blocks that contain the wanted row.  If your rows are much smaller than the 
hdfs block size, then we'll be fetching a lot of data only to discard the bulk. 
Meanwhile, the big block fetches and their processing consume CPU, network, etc. 
on the datanodes and in the hbase client.
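  
  As a sketch, the block size can be lowered in '''hadoop-site.xml''' (this assumes 
a Hadoop 0.x-era configuration where the property is named ''dfs.block.size'' and 
takes a value in bytes):
  
  {{{
  <!-- hadoop-site.xml: 8MB blocks (8 * 1024 * 1024 = 8388608 bytes) -->
  <property>
    <name>dfs.block.size</name>
    <value>8388608</value>
    <description>Smaller hdfs blocks mean less wasted I/O per random read.</description>
  </property>
  }}}
  
  Note the new size only applies to files written after the change; existing files 
keep the block size they were created with.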
  
