Re: Excessive .META scans

2013-08-01 Thread Varun Sharma
Just patched HBASE-6870 and it immediately fixed the problem!




Re: Excessive .META scans

2013-07-30 Thread Varun Sharma
JD, it's a big problem. The region server holding .META. has 2X the network
traffic and 2X the CPU load; I can easily spot the region server holding
.META. just by looking at the Ganglia graphs of the region servers side by
side - I don't need to go to the master console. So we can't scale up the
cluster or add more load, since it's bottlenecked on this one region server.

Thanks Nicolas for the pointer, it seems quite probable that this is the
issue - it was fixed in 0.94.8, so we don't have the fix yet. I will give it a shot.




Re: Excessive .META scans

2013-07-30 Thread Stack
Try turning off
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#setRegionCachePrefetch(byte[], boolean)

St.Ack
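
A minimal sketch of the call above, assuming the 0.94-era client API and a placeholder table name "mytable"; the static HTable.setRegionCachePrefetch turns off region location prefetching for that table in the client process:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DisableMetaPrefetch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        byte[] tableName = Bytes.toBytes("mytable"); // placeholder table name
        // Disable region location prefetching so that using this table does
        // not trigger batch lookups against .META.
        HTable.setRegionCachePrefetch(tableName, false);
        HTable table = new HTable(conf, tableName);
        // ... run the normal workload (puts, gets, etc.) ...
        table.close();
      }
    }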





Re: Excessive .META scans

2013-07-29 Thread Jean-Daniel Cryans
Can you tell who's doing it? You could enable IPC debug for a few seconds
to see who's coming in with scans.

You could also try to disable pre-fetching by setting hbase.client.prefetch.limit to 0.

Also, is it actually causing a problem, or are you just worried it might
since it doesn't look normal?

J-D
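
A minimal sketch of the pre-fetch knob on the client side; the property name comes from the note above, while the placeholder table name and the choice to set it programmatically (rather than in hbase-site.xml) are just illustrative assumptions. The IPC debug suggestion is a separate, server-side log-level change and is not shown here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class PrefetchLimitZero {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // 0 = look up only the region actually requested instead of
        // pre-fetching a batch of neighbouring region locations from .META.
        conf.setInt("hbase.client.prefetch.limit", 0);
        HTable table = new HTable(conf, "mytable"); // placeholder table name
        // ... run the workload ...
        table.close();
      }
    }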



Excessive .META scans

2013-07-29 Thread Varun Sharma
Hi folks,

We are seeing an issue with HBase 0.94.3 on CDH 4.2.0 with excessive .META.
reads...

In the steady state, where there are no client crashes and no region server
crashes or region movement, the server holding .META. is serving an
incredibly large number of read requests on the .META. table.

From my understanding, in the steady state, region locations should be
cached indefinitely in the client. The client is running a workload of
multiputs, puts, gets and coprocessor calls.

Thanks
Varun


Re: Excessive .META scans

2013-07-29 Thread Nicolas Liochon
It could be HBASE-6870?

