On Fri, Feb 17, 2012 at 2:38 PM, Jeff Whiting <je...@qualtrics.com> wrote:
> Is there a way to profile a specific get request to see where the time is
> spent (e.g. checking the memstore, reading from HDFS, etc.)?
>

In 0.92, there is a slow query facility that dumps out the query context
when a query takes longer than a configured threshold.  I presume you are
on 0.90.x.
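
For reference, a minimal hbase-site.xml snippet for that facility (the
property name is from the 0.92 slow query log; 1000ms here is just an
example threshold, the default is 10000ms, so verify against your
version's docs):

    <property>
      <name>hbase.ipc.warn.response.time</name>
      <!-- log context for queries that take longer than 1 second -->
      <value>1000</value>
    </property>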

> We are running into a problem where a get after a delete goes really slowly.
> We have a row that has between 100 and 256 MB of data in it across a couple
> hundred columns.  After putting the data we can get it back out quickly
> (< 100ms); a get on "info:name" takes ~0.05110 seconds according to the
> hbase shell.  We then delete the entire row (e.g. htable.delete(new
> Delete(rowkey))).  Most of the time, after deleting the row, the exact
> same get on "info:name" becomes significantly slower (1.9400 to 3.1840
> seconds).  Putting data back into "info:name" still results in the same
> slow performance.  I was hoping to profile the get to see where the time
> is going and see what we can do to tune how we are using HBase.
>
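
For reference, the access pattern being described is roughly the sketch
below (just a sketch against the 0.90-era client API; the table name, row
key, column names, and value sizes are assumptions chosen to roughly match
the 100-256 MB row above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetAfterDeleteRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");      // table name assumed
        byte[] row = Bytes.toBytes("wide-row");          // row key assumed
        byte[] fam = Bytes.toBytes("info");

        // Load a couple hundred ~1MB columns into a single row (~200MB total).
        byte[] bigValue = new byte[1024 * 1024];
        for (int i = 0; i < 200; i++) {
          Put put = new Put(row);
          put.add(fam, Bytes.toBytes("col" + i), bigValue);
          table.put(put);
        }
        Put named = new Put(row);
        named.add(fam, Bytes.toBytes("name"), Bytes.toBytes("some-name"));
        table.put(named);

        timeGet(table, row, fam);          // fast here (~50ms reported)
        table.delete(new Delete(row));     // delete the whole row
        timeGet(table, row, fam);          // reportedly 2-3 seconds here

        table.close();
      }

      // Time a get of info:name and report whether anything came back.
      private static void timeGet(HTable table, byte[] row, byte[] fam)
          throws Exception {
        Get get = new Get(row);
        get.addColumn(fam, Bytes.toBytes("name"));
        long start = System.currentTimeMillis();
        Result r = table.get(get);
        System.out.println("get took " + (System.currentTimeMillis() - start)
            + "ms, empty=" + r.isEmpty());
      }
    }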

If you flush the region (you can do this from the shell), is it still
slow?  If so, the slowness is coming from accessing hfiles.  Try copying
the region content out and rigging up a little harness to bring the region
up in a context free of the running cluster.  See TestHRegion for sample
code on how to stand up an HRegion instance.
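
A minimal sketch of that check from the Java client rather than the shell
(table name, row key, and column are carried over from the sketch above, so
they are assumptions; note that HBaseAdmin.flush only requests the flush,
so give the region server a moment before re-timing):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FlushThenGet {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Ask the region server to flush memstore contents (including the
        // delete markers) out to hfiles; same effect as `flush 'mytable'`
        // in the hbase shell.
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.flush("mytable");
        Thread.sleep(5000);   // crude wait for the asynchronous flush

        // Re-run the same get and time it.  If it is still slow after the
        // flush, the time is going into reading the hfiles rather than the
        // memstore.
        HTable table = new HTable(conf, "mytable");
        Get get = new Get(Bytes.toBytes("wide-row"));
        get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"));
        long start = System.currentTimeMillis();
        Result r = table.get(get);
        System.out.println("get after flush took "
            + (System.currentTimeMillis() - start) + "ms, empty=" + r.isEmpty());

        table.close();
      }
    }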

St.Ack
