On Mon, Dec 20, 2010 at 9:12 AM, Wayne <[email protected]> wrote:
> Can we control the WAL and write buffer size via thrift? We assume we have
> to use java for writes to get access to the settings below which we assume
> we need to get extremely fast writes. We are looking for something in the
> range of 100k writes/sec for the cluster as a whole.
>
> p.setWriteToWAL(false);
> hTable.setAutoFlush(false);
> hTable.setWriteBufferSize(1024*1024*12);
>

For fast upload, use MapReduce to write the HBase files (HFiles)
directly, bypassing the API:
http://people.apache.org/~stack/hbase-0.90.0-candidate-1/docs/bulk-loads.html

Otherwise, yes, the thrift API does not give you access to the above
(you might be able to set a few of them via configuration, IIRC).
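For reference, here is a sketch of how the settings you quoted fit
together in a Java client (table and row names are made up; this assumes
a running cluster and the 0.90-era client jars, so treat it as a sketch,
not a drop-in program):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class FastWriter {
  public static void main(String[] args) throws Exception {
    // "mytable" is a hypothetical table name
    HTable hTable = new HTable(HBaseConfiguration.create(), "mytable");
    hTable.setAutoFlush(false);                  // buffer puts client-side
    hTable.setWriteBufferSize(1024 * 1024 * 12); // 12MB write buffer

    Put p = new Put(Bytes.toBytes("row1"));
    p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    p.setWriteToWAL(false); // faster, but you lose the edit if the
                            // regionserver crashes before a flush
    hTable.put(p);

    hTable.flushCommits();  // push any buffered puts to the cluster
    hTable.close();
  }
}
```

Note that skipping the WAL trades durability for speed: unflushed edits
are gone if a regionserver dies.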

>
> In terms of reshaping our reads to be scans, I do not see how we can do that
> at this point. Are you suggesting that we move to a map/reduce pattern to
> crawl through the data?
>

I'm just suggesting that if you can somehow Scan rather than random
read, then your QPS will be at least an order of magnitude higher.

St.Ack
