I looked at the code. This warn message is printed by
MemStoreFlusher.flushRegion(). If there are too many store files, it
first requests a compaction and waits up to 90s, then flushes the memstore.
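Roughly, the logic looks like this. This is only a simplified sketch, not
the actual HBase source: the names FlushSketch, flushRegion's parameters,
and the helper methods are made up for illustration. The constants
correspond to hbase.hstore.blockingStoreFiles and
hbase.hstore.blockingWaitTime, if I read the config right.

```java
// Simplified sketch of the delay-then-flush behavior (hypothetical names,
// not the real MemStoreFlusher code).
public class FlushSketch {
    static final int BLOCKING_STORE_FILES = 15;   // hbase.hstore.blockingStoreFiles
    static final long BLOCKING_WAIT_MS = 90_000;  // hbase.hstore.blockingWaitTime

    /** Returns the remaining delay in ms, or 0 if the flush proceeds now. */
    static long flushRegion(int storeFileCount, long now, long blockedSince) {
        if (storeFileCount > BLOCKING_STORE_FILES) {
            // 1. ask for a compaction first
            requestCompaction();
            // 2. re-queue the flush; only flush anyway after waiting ~90s
            long waited = now - blockedSince;
            if (waited < BLOCKING_WAIT_MS) {
                return BLOCKING_WAIT_MS - waited;  // still delaying the flush
            }
        }
        flushMemstore();  // waited long enough, or few enough files: flush now
        return 0;
    }

    static void requestCompaction() { /* enqueue a compaction request */ }
    static void flushMemstore()     { /* write memstore out as a new store file */ }
}
```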

My question is: why not flush the memstore before compacting? In the
current scheme the result is a compacted store file plus a newly
flushed store file, which makes it easier to hit the compaction
criteria again later. If we flushed before compacting, the result
would be a single compacted store file.
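To make the file-count argument concrete, here is a toy illustration of
the two orderings (my own sketch with made-up names, assuming a major
compaction merges all store files into one):

```java
// Toy model of the resulting store-file counts under the two orderings.
public class OrderingSketch {
    /** Current order: compaction merges the existing files into one,
        then the delayed flush adds a second store file. */
    static int compactThenFlush(int storeFiles) {
        int afterCompaction = 1;     // existing files merged into one
        return afterCompaction + 1;  // flush then adds a new store file
    }

    /** Proposed order: the flush adds one file first (storeFiles + 1 total),
        then compaction merges all of them, including the new one. */
    static int flushThenCompact(int storeFiles) {
        return 1;                    // everything ends up in one compacted file
    }
}
```

So in this toy model the region ends the cycle with two store files
instead of one, which is the point above about reaching the compaction
criteria sooner.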

Thanks
Weihua

2011/6/13 Sheng Chen <[email protected]>:
> I've met with the same problem.
> Update operations are blocked by memstore flushing, and memstore flushing is
> blocked by a compaction ("too many store files, delay flushing for 90s").
>
> Have you got any solutions?
>
> 2011/5/23 Wayne <[email protected]>
>
>> We have 4 CFs, but only 1 is ever used for a given region. What about
>> upping
>> the size per memstore file to 1G? We have a 5x limit of 256m, which results in
>> lots of messages like "memstore size 1.3g is >= than blocking 1.2g size".
>> Maybe given the bigger region size we need a bigger memstore size?
>>
>> Here is a region server log snippet showing this occurring twice in less
>> than a 2-minute period.
>>
>> http://pastebin.com/CxAQSXTt
>>
>>
>> On Mon, May 23, 2011 at 11:33 AM, Stack <[email protected]> wrote:
>>
>> > On Mon, May 23, 2011 at 6:40 AM, Wayne <[email protected]> wrote:
>> > > In order to reduce the total number of regions we have up'd the max
>> > region
>> > > size to 5g. This has kept us below 100 regions per node but the side
>> > effect
>> > > is pauses occurring every 1-2 min under heavy writes to a single
>> region.
>> > We
>> > > see the "too many store files delaying flush up to 90sec" warning every
>> > > couple of minutes. We have upped the size of the memstore flush size
>> > (256m)
>> > > as well as upped the blockingstorefiles (15), but these pauses
>> > > now occur more often than the writes themselves. In the end, our write
>> > > throughput has degraded considerably.
>> > >
>> >
>> > How many column families?  Pastebin a regionserver log.  You could up
>> > the number of store files before we put up the blocking writes gate
>> > but then you might have runaway files to compact.
>> >
>> > St.Ack
>> >
>>
>
