Currently, with 0.90.1, this issue happens when there are only 8 regions on
each RS, 64 regions in total across all 8 RS.

The CPU usage of the client is very high.

On Thu, Feb 24, 2011 at 10:55 AM, Schubert Zhang <[email protected]> wrote:

> Now, I am trying 0.90.1, but this issue is still there.
>
> I attach the jstack output. Could you please help me analyze it?
>
> It seems all 8 client threads are doing metaScan!
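
For reference, such a dump can also be taken from inside the JVM with the
standard Thread API, without attaching jstack externally. A minimal sketch,
assuming it is invoked from a watchdog thread in the client; the class name
is illustrative:

    import java.util.Map;

    public final class ThreadDumper {
        // Print every live thread's name, state, and stack, roughly what
        // `jstack <pid>` shows for the same JVM.
        public static void dump() {
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                System.out.printf("\"%s\" state=%s%n", t.getName(), t.getState());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }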
>
>   On Sat, Jan 29, 2011 at 1:02 AM, Stack <[email protected]> wrote:
>
>> On Thu, Jan 27, 2011 at 10:33 PM, Schubert Zhang <[email protected]>
>> wrote:
>> > 1. The .META. table seems ok
>> >     I can read my data table (one thread for reading).
>> >     I can use hbase shell to scan my data table.
>> >     And I can use 1~4 threads to put more data into my data table.
>> >
>>
>> Good.  This would seem to say that .META. is not locked out (you are
>> doing these scans while your 8+ client process is hung?).
>>
>>
>> > Before this issue happened, about 800 million entities (columns) had
>> > been put into the table successfully, and there are 253 regions for
>> > this table.
>> >
>>
>>
>> So, you were running fine with 8+ clients until you hit the 800 million
>> entries?
>>
>>
>> > 3. All clients use HBaseConfiguration.create() for a new Configuration
>> > instance.
>> >
>>
>> Do you do this for each new instance of HTable or do you pass them all
>> the same Configuration instance?
>>
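
For illustration of the difference the question is getting at: as far as I
recall, in 0.90 HTable instances built from the same Configuration instance
share one underlying HConnection, and with it the cached .META. region
locations, whereas a fresh HBaseConfiguration.create() per HTable gives each
its own connection and cache. A minimal sketch of the two patterns; the
class and table names are placeholders:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class ConfSharingSketch {
        public static void main(String[] args) throws IOException {
            // Pattern A: a fresh Configuration for every HTable. Each
            // distinct Configuration gets its own HConnection, so threads
            // do not share cached region locations.
            HTable separate = new HTable(HBaseConfiguration.create(), "mytable");

            // Pattern B: one Configuration passed to every HTable. All of
            // them share a single HConnection and its region cache.
            Configuration shared = HBaseConfiguration.create();
            HTable t1 = new HTable(shared, "mytable");
            HTable t2 = new HTable(shared, "mytable");
        }
    }

Note HTable itself is not thread-safe, so the usual pattern is one HTable
per worker thread, all built from the one shared Configuration.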
>>
>> > 4. The 8+ client threads run on a single machine in a single JVM.
>> >
>>
>> How many instances of this process?  One or many?
>>
>>
>> > 5. It seems all 8+ threads are blocked at the same location, waiting on
>> > a call to return.
>> >
>>
>> If you want to paste a thread dump of your client, one of us will
>> give it a gander.
>>
>> St.Ack
>>
>
>
