OK, I found the bug; I think it's only in our distro.

Stay tuned!

J-D

On Wed, Sep 29, 2010 at 9:26 AM, Jean-Daniel Cryans <[email protected]> wrote:
> Weird indeed: even after the WAL was rolled 4 times (theoretically
> 256MB of data) I don't see a single flush request... although you're
> running at INFO level instead of DEBUG. Could you switch that and
> send us the full log?
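>
> To flip DEBUG on, one line in conf/log4j.properties on the HBase side
> should be enough (a sketch; that logger name is the usual one, adjust
> it to your layout):
>
>   log4j.logger.org.apache.hadoop.hbase=DEBUG
>
> Restart after changing it, or change it at runtime through the UI's
> log-level page if your build has one.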
>
> Thanks a lot!
>
> J-D
>
> On Wed, Sep 29, 2010 at 4:25 AM, Andrey Stepachev <[email protected]> wrote:
>> Hi all,
>>
>> I'm stuck. I can't insert any sizable piece of data into HBase.
>>
>> The data is around ~20 million rows (20GB). I'm trying to insert them
>> into a non-distributed HBase with 4 parallel jobs.
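>> Each job is essentially doing this (a sketch, not my exact code; the
>> table name, column names and row count are made up):
>>
>>   // classes from org.apache.hadoop.hbase.client / .util
>>   HBaseConfiguration conf = new HBaseConfiguration();
>>   HTable table = new HTable(conf, "testtable");
>>   table.setAutoFlush(false);               // buffer puts client-side
>>   for (long i = 0; i < 5000000; i++) {     // ~5mil rows per job
>>     Put put = new Put(Bytes.toBytes(i));
>>     put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes(i));
>>     table.put(put);
>>   }
>>   table.flushCommits();                    // push any buffered puts
>>   table.close();
>>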
>> The MR jobs run until all the memory given to HBase is exhausted, and
>> then HBase produces an hprof file. As the profiler shows, all of the
>> memory has accumulated in MemStore.kvset.
>> I don't understand why HBase doesn't block writes until the memstore
>> is flushed. The same thing happens if I give HBase 6GB of RAM.
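>>
>> For reference, these are the settings I'd expect to bound the memstore
>> and block writers (the names and defaults below are what I believe the
>> 0.20/0.89 line ships with; I haven't verified them for this build):
>>
>>   <property>
>>     <name>hbase.hregion.memstore.flush.size</name>
>>     <value>67108864</value> <!-- flush a region's memstore at 64MB -->
>>   </property>
>>   <property>
>>     <name>hbase.regionserver.global.memstore.upperLimit</name>
>>     <value>0.4</value> <!-- block updates past 40% of the heap -->
>>   </property>
>>
>> With those in effect a 6GB heap should never fill up with KeyValues,
>> which is why this looks like a bug to me.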
>>
>> 6GB GC log: http://paste.ubuntu.com/502577/
>>
>> hadoop: 0.20.2+320
>> hbase: stumbleupon-20100830
>>
>> Nothing is set in hbase-site.xml except hbase.rootdir and
>> hbase.zookeeper.quorum.
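>>
>> In other words the whole file is essentially this (the host and path
>> are placeholders, not my real values):
>>
>>   <configuration>
>>     <property>
>>       <name>hbase.rootdir</name>
>>       <value>hdfs://localhost:9000/hbase</value>
>>     </property>
>>     <property>
>>       <name>hbase.zookeeper.quorum</name>
>>       <value>localhost</value>
>>     </property>
>>   </configuration>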
>>
>>
>>
>> Andrey.
>>
>
