Does anyone have a rough measurement of random read/write latency and throughput?

Assume a typical machine and workload: the region server has 5 GB for the
memstore, and each key (20 bytes) has a 100-byte value (for simplicity,
just one column family, one column).
Further assume the workload is against a single region on this region server.
What is the latency/throughput of a write-only workload? (Assume all
operations are updates, no inserts, and updates on different keys are
uniformly distributed.)
What is the latency/throughput of a read-only workload? (Assume lookups on
different keys are uniformly distributed.)

I really want to get a better sense of HBase performance numbers for a
key/value scenario.

Sorry, one further assumption: the clients are multi-process and
multi-threaded, and can generate enough concurrent requests before the
network is saturated.
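
For concreteness, here is a rough sketch of the kind of measurement loop I
have in mind. It is only an illustration under assumed names and sizes
(table "kvtest", family "cf", qualifier "v", a key space of 10M keys, 32
client threads), using the 0.94-era HTable client API; YCSB would be the
more rigorous way to run this.

import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class RandomKvBench {
  // Assumed names/sizes, for illustration only.
  static final byte[] FAMILY = Bytes.toBytes("cf");
  static final byte[] QUALIFIER = Bytes.toBytes("v");
  static final int KEY_SPACE = 10000000;     // distinct 20-byte keys
  static final int THREADS = 32;             // concurrent client threads
  static final int OPS_PER_THREAD = 100000;
  static final boolean WRITE_ONLY = true;    // false for the read-only workload

  public static void main(String[] args) throws Exception {
    final Configuration conf = HBaseConfiguration.create();
    final AtomicLong latencyNanos = new AtomicLong();
    ExecutorService pool = Executors.newFixedThreadPool(THREADS);
    long start = System.nanoTime();
    for (int t = 0; t < THREADS; t++) {
      pool.submit(new Callable<Void>() {
        public Void call() throws Exception {
          // One HTable per thread; HTable is not thread-safe.
          HTable table = new HTable(conf, "kvtest");
          Random rnd = new Random();
          byte[] value = new byte[100];      // 100-byte value
          for (int i = 0; i < OPS_PER_THREAD; i++) {
            // Uniformly distributed key, zero-padded to 20 bytes.
            byte[] key = Bytes.toBytes(String.format("%020d", rnd.nextInt(KEY_SPACE)));
            long t0 = System.nanoTime();
            if (WRITE_ONLY) {
              Put p = new Put(key);
              p.add(FAMILY, QUALIFIER, value);
              table.put(p);                  // autoFlush is on, so one RPC per put
            } else {
              table.get(new Get(key));
            }
            latencyNanos.addAndGet(System.nanoTime() - t0);
          }
          table.close();
          return null;
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    long ops = (long) THREADS * OPS_PER_THREAD;
    double seconds = (System.nanoTime() - start) / 1e9;
    System.out.printf("throughput: %.0f ops/sec, mean latency: %.2f ms%n",
        ops / seconds, latencyNanos.get() / 1e6 / ops);
  }
}

For real numbers I would of course pre-load the key space first, so that the
"update" and read operations hit existing rows.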

best,
-zhimao

On Wed, Sep 26, 2012 at 9:14 PM, Kevin O'dell <[email protected]> wrote:

> -scm-users <[email protected]>
> [email protected]
>
> I think YCSB can handle that, but I am not sure about the 100% random part.
>
> On Wed, Sep 26, 2012 at 4:25 AM, Dalia Hassan <[email protected]> wrote:
>
> > Hello,
> >
> > Could anyone help me measure HBase random read performance on a cluster?
> >
> > Please reply ASAP.
> >
> > Thanks,
> >
>
>
>
> --
> Kevin O'Dell
> Customer Operations Engineer, Cloudera
>
