How many tablets do you have? Entire tablets are minor-compacted at once. If you have 1 tablet per tablet server, then each minor compaction has a lot of work to do at once. While this work is being done, the tablet server's memory may fill up, leading to writes being held.
If you have 10 tablets per tablet server, then tablets can be compacted in
parallel with less work to do at any given point in time. This can avoid
memory filling up and writes being held. In short, it's possible that adding
good split points to the table (and therefore creating more tablets) may help
with this issue (a rough code sketch follows after the quoted mail).

Also, are you seeing hold times?

On Thu, Jul 30, 2015 at 11:24 PM, Hai Pham <[email protected]> wrote:

> Hey William, Josh and David,
>
> Thanks for explaining, I might not have been clear: I used the web
> interface with port 50095 to monitor the real-time charts (ingest, scan,
> load average, minor compaction, major compaction, ...).
>
> Nonetheless, as I witnessed, when I ingested about 100k entries -> then
> minor compaction happened -> ingest was stuck -> the level of minor
> compaction on the charts was just about 1.0, 2.0 and max 3.0 while about
> >20k entries were forced out of memory (I knew this by looking at the
> number of entries in memory w.r.t. the table being ingested to) -> then
> when minor compaction ended, ingest resumed, somewhat faster.
>
> Thus I presume the level 1.0, 2.0, 3.0 is not representative of the number
> of files being minor-compacted from memory?
>
> Hai
> ________________________________________
> From: Josh Elser <[email protected]>
> Sent: Thursday, July 30, 2015 7:12 PM
> To: [email protected]
> Subject: Re: How to control Minor Compaction by programming
>
> > Also, can you please explain the number 0, 1.0, 2.0, ... in charts (web
> > monitoring) denoting the level of Minor Compaction and Major Compaction?
>
> On the monitor, the number of compactions are of the form:
>
> active (queued)
>
> e.g. 4 (2) would mean that 4 are running and 2 are queued.
>
> Thank you!
>
> Hai Pham
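To make the split-point suggestion concrete, here is a rough, untested sketch of
pre-splitting a table with the Accumulo Java client API. The instance name,
zookeepers, credentials, table name, and split keys are placeholders; pick split
points that match your own row key design.

import java.util.SortedSet;
import java.util.TreeSet;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.hadoop.io.Text;

public class PreSplitTable {
  public static void main(String[] args) throws Exception {
    // Placeholder instance name, zookeepers, and credentials.
    Connector conn = new ZooKeeperInstance("myInstance", "zk1:2181")
        .getConnector("root", new PasswordToken("secret"));

    // Choose split points that roughly balance your row key space.
    SortedSet<Text> splits = new TreeSet<Text>();
    for (char c = 'b'; c <= 'y'; c++) {
      splits.add(new Text(String.valueOf(c)));
    }

    // Creates additional tablets; each tablet then minor compacts
    // independently, so less in-memory data has to be flushed at once.
    conn.tableOperations().addSplits("mytable", splits);
  }
}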
