Here is mine btw:

21:30:44 10.101.7.1 j...@rdag1:~ $ hadoop fs -lsr /hbase/.META./1028785192/info/ | wc -l
1825

And, I think yes, most of the META queries are hitting cache:

2011-01-03 21:33:18,022 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU
eviction started; Attempting to free 100.78 MB of total=850.16 MB
2011-01-03 21:33:18,042 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU
eviction completed; freed=93.18 MB, total=756.98 MB, single=266.76 MB,
multi=374.35 MB, memory=200.84 MB
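
As a quick sanity check (mine, not anything HBase prints): the eviction line above is internally consistent, since the post-eviction total equals the pre-eviction total minus the amount freed. A minimal sketch, with the numbers copied straight from the log:

```python
# Consistency check on the LruBlockCache DEBUG line above
# (illustrative only; values copied from the log output).
total_before_mb = 850.16  # "of total=850.16 MB"
freed_mb = 93.18          # "freed=93.18 MB"
total_after_mb = 756.98   # "total=756.98 MB"

# Post-eviction total should equal pre-eviction total minus freed.
assert round(total_before_mb - freed_mb, 2) == total_after_mb
print("eviction totals line up")
```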

It would be useful for the regionserver to log transactions to the META
region, so that we could track responses.
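
In the meantime, one way to get some of that visibility is to crank up the client and regionserver log levels. A sketch, assuming a stock log4j.properties; the logger names below are just the standard org.apache.hadoop.hbase package prefixes, not something from this thread, and DEBUG at this scope is verbose:

```properties
# Hedged sketch: enable DEBUG for the HBase client and regionserver
# packages so .META. lookups and region operations show up in the logs.
log4j.logger.org.apache.hadoop.hbase.client=DEBUG
log4j.logger.org.apache.hadoop.hbase.regionserver=DEBUG
```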

-Jack



On Mon, Jan 3, 2011 at 9:25 PM, Stack <[email protected]> wrote:
> Jack:
>
> Meta table affinity?
>
> Do you see hbase client logging of it going to .META., Jack?  If you
> check any of the lookups, do they seem sensible?
>
> St.Ack
>
> On Mon, Jan 3, 2011 at 7:32 PM, Jack Levin <[email protected]> wrote:
>> I suspect I have the same issue. And I would like very much to have meta
>> table affinity (having extra debug logs for clients talking to meta would be
>> super nice too).
>>
>> -Jack
>>
>>
>> On Jan 3, 2011, at 8:31 PM, Wayne <[email protected]> wrote:
>>
>>> We are finding that the node that is responsible for the .META. table is
>>> going into GC storms, causing the entire cluster to go AWOL until it recovers.
>>> Isn't the master supposed to serve up the .META. table? Is it possible to
>>> pin this table somewhere that only handles this?  Our master server and
>>> zookeeper servers are separate from our 10 region server nodes, but in the
>>> end one of the region servers is responsible for the .META. table, and we
>>> sometimes see all requests drop to zero except on the server handling the
>>> .META. table, whose requests jump up to the number of regions+1 and back
>>> down. This has lasted for as long as 5 minutes before the cluster goes back
>>> to responding to requests normally. When we had a 1GB region size with LZO,
>>> it was 90% in this AWOL state.
>>>
>>> Do we have our cluster set up correctly? Is it supposed to behave like this?
>>>
>>>
>>> Thanks for any advice that can be provided.
>>
>
