Is reducing HBase block size an option for you?
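[Editorial note: a hedged sketch of how block size could be reduced with the 0.20-era admin API. The family name "cf" is hypothetical (the thread never names the family), and exact method names/signatures may differ slightly across client versions; a smaller block size generally favors random reads at the cost of a larger block index.]

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

HBaseConfiguration config = new HBaseConfiguration();
HBaseAdmin admin = new HBaseAdmin(config);

// "cf" is a stand-in for the table's real column family name.
HColumnDescriptor cf = new HColumnDescriptor(Bytes.toBytes("cf"));
cf.setBlocksize(16 * 1024); // default is 64K; smaller blocks favor random gets

// Schema changes require the table to be offline in this API generation.
admin.disableTable("productDB");
admin.modifyColumn("productDB", "cf", cf);
admin.enableTable("productDB");
```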

2010/11/4 刘京磊 <[email protected]>

> Thanks, I have done that, but the problem still exists.
>
> The load average on the regionserver machines climbs above 30 after putting
> some data.
>
>
> 2010/11/4 Ted Yu <[email protected]>
>
> > You can add this to line 3:
> >        table.setAutoFlush(false);
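[Editorial note: a hedged sketch of where the suggested call fits in the 0.20-era client API. With auto-flush off, each `put` is buffered client-side and only sent when the write buffer fills or `flushCommits()` is called, which avoids one RPC per row.]

```java
// Assumes the HTablePool "tablePool" from the code below.
HTable table = tablePool.getTable("productDB");
table.setAutoFlush(false);                  // buffer puts client-side
table.setWriteBufferSize(1024 * 1024 * 24); // flush roughly every 24MB
```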
> >
> >
> > On Wed, Nov 3, 2010 at 8:01 PM, 刘京磊 <[email protected]> wrote:
> >
> > > hadoop version:  0.20.2
> > > hbase version: 0.20.6
> > >
> > > 1 master, 7 slaves/regionservers: 4 CPUs, 34G memory, HBase heap
> > > size: 15G
> > > 3 zookeepers: 2 CPUs, 8G memory
> > >
> > > now, there are 1573 regions (about 1T). A random read takes 10ms-200ms
> > > when we are not writing.
> > >
> > > We need to put 200G of data (about 0.4 billion rows) at a time. While
> > > writing, a random read may take 20+ seconds.
> > >
> > > code like this:
> > >
> > > HTablePool tablePool = new HTablePool(config, 20);
> > > HTable table = tablePool.getTable("productDB");
> > > table.setWriteBufferSize(1024*1024*24);
> > > List<Put> puts = new ArrayList<Put>();
> > > long rows = 0;
> > >
> > > while(true){
> > >  Put put = new Put(Bytes.toBytes(id));
> > >  put.add(...);
> > >  put.setWriteToWAL(false);
> > >  puts.add(put);
> > >  rows++;
> > >
> > >  if(rows % 1000 == 0){
> > >    table.getWriteBuffer().addAll(puts);
> > >    table.flushCommits();
> > >    puts.clear();
> > >  }
> > > }
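[Editorial note: appending to `table.getWriteBuffer()` directly bypasses the client's buffer-size accounting. A hedged alternative sketch for the same 0.20-era API, letting `HTable.put(List<Put>)` manage the buffer; `ids` and the commented payload line are stand-ins for details elided in the original.]

```java
HTable table = tablePool.getTable("productDB");
table.setAutoFlush(false);                  // buffer edits client-side
table.setWriteBufferSize(1024 * 1024 * 24);

List<Put> puts = new ArrayList<Put>();
for (byte[] id : ids) {                 // "ids" is a stand-in for the real row source
  Put put = new Put(id);
  // put.add(family, qualifier, value); // payload elided in the original
  put.setWriteToWAL(false);             // trades durability for write throughput
  puts.add(put);
  if (puts.size() == 1000) {
    table.put(puts);                    // buffered; flushed when buffer size is hit
    puts.clear();
  }
}
table.put(puts);
table.flushCommits();                   // push any remaining buffered edits
```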
> > >
> > >
> > > Thanks for any help
> > >
> > > --
> > > Derek Liu
> > >
> >
>
>
>
> --
> Derek Liu
>
